Integrate vCO


vRO and OpenStack – thoughts on the Orchestrator

Some of you may already be playing with OpenStack: some with native OpenStack or a distribution of your choice, others with the VMware Integrated OpenStack (VIO) Beta, which is currently available. Information about VIO and other OpenStack topics in the context of VMware can be found here: http://blogs.vmware.com/openstack/

Some general information about OpenStack and its different modules is documented in the OpenStack documentation. I want to quote some of it from here: http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html

About the different modules:

Overview

The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project.

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an application programming interface (API) that facilitates this integration. The following table provides a list of OpenStack services:

Table 1.1. OpenStack services

Dashboard (Horizon) – Provides a web-based self-service portal to interact with underlying OpenStack services, such as launching an instance, assigning IP addresses and configuring access controls.
Compute (Nova) – Manages the lifecycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling and decommissioning of virtual machines on demand.
Networking (Neutron) – Enables Network-Connectivity-as-a-Service for other OpenStack services, such as OpenStack Compute. Provides an API for users to define networks and the attachments into them. Has a pluggable architecture that supports many popular networking vendors and technologies.

Storage

Object Storage (Swift) – Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant with its data replication and scale-out architecture. Its implementation is not like a file server with mountable directories.
Block Storage (Cinder) – Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.

Shared services

Identity service (Keystone) – Provides an authentication and authorization service for other OpenStack services. Provides a catalog of endpoints for all OpenStack services.
Image Service (Glance) – Stores and retrieves virtual machine disk images. OpenStack Compute makes use of this during instance provisioning.
Telemetry (Ceilometer) – Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.

Higher-level services

Orchestration (Heat) – Orchestrates multiple composite cloud applications by using either the native HOT template format or the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
Database Service (Trove) – Provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.

This guide describes how to deploy these services in a functional test environment and, by example, teaches you how to build a production environment. Realistically, you would use automation tools such as Ansible, Chef, and Puppet to deploy and manage a production environment.

Conceptual architecture

As a graphical view, this gives a good overview of the different modules and how they work together. The drawing is from the OpenStack documentation; the original can be found here: http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html

No matter which “version” of OpenStack you use, the APIs and components are always the same. From an automation perspective, the Heat project is interesting.

Quote (https://wiki.openstack.org/wiki/Heat):

OpenStack Orchestration

The mission of the OpenStack Orchestration program is to create a human- and machine-accessible service for managing the entire lifecycle of infrastructure and applications within OpenStack clouds.

Heat

Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. A native Heat template format is evolving, but Heat also endeavours to provide compatibility with the AWS CloudFormation template format, so that many existing CloudFormation templates can be launched on OpenStack. Heat provides both an OpenStack-native ReST API and a CloudFormation-compatible Query API.

Why ‘Heat’? It makes the clouds rise!

How it works

  • A Heat template describes the infrastructure for a cloud application in a text file that is readable and writable by humans, and can be checked into version control, diffed, etc. (a minimal template sketch follows after this list)
  • Infrastructure resources that can be described include: servers, floating ips, volumes, security groups, users, etc.
  • Heat also provides an autoscaling service that integrates with Ceilometer, so you can include a scaling group as a resource in a template.
  • Templates can also specify the relationships between resources (e.g. this volume is connected to this server). This enables Heat to call out to the OpenStack APIs to create all of your infrastructure in the correct order to completely launch your application.
  • Heat manages the whole lifecycle of the application – when you need to change your infrastructure, simply modify the template and use it to update your existing stack. Heat knows how to make the necessary changes. It will delete all of the resources when you are finished with the application, too.
  • Heat primarily manages infrastructure, but the templates integrate well with software configuration management tools such as Puppet and Chef. The Heat team is working on providing even better integration between infrastructure and software.
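
To make this a little more concrete, here is a minimal HOT template sketch. It only boots a single Nova server; the parameter and resource names are placeholders, and the flavor m1.small must exist in your environment:

heat_template_version: 2014-10-16

description: Minimal sketch that boots a single server

parameters:
  image_id:
    type: string
    description: ID or name of the Glance image to boot from

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: m1.small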

What does this mean for a VMware administrator who wants to orchestrate and automate his VMware and OpenStack environments? At the moment, Heat is primarily focused on the OpenStack environment. When you want to integrate your automation solutions, you have to use other automation tools. I would distinguish between infrastructure orchestration and application orchestration.

In this case, Heat is the infrastructure orchestrator which offers the possibility to create new virtual machines. For the communication from Heat to VMware, an API must be used to pass the information from Heat to vRO. That is one possible way to automate and orchestrate the OpenStack environment. When we take VMware into account, the drawing becomes somewhat more complex.
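
To give a first idea of what the vRO side of such a connection can look like, here is a minimal scripting sketch that requests a Keystone (v2) token via the vRO HTTP-REST plugin. This is just a sketch: restHost is assumed to be a REST:RESTHost attribute pointing at your Keystone endpoint (e.g. http://openstack:5000), and the tenant, user and password values are placeholders.

// restHost: REST:RESTHost attribute pointing at the Keystone endpoint
// Build the Keystone v2 token request body (tenant and credentials are placeholders)
var body = JSON.stringify({
    auth: {
        tenantName: "demo",
        passwordCredentials: { username: "admin", password: "secret" }
    }
});

var request = restHost.createRequest("POST", "/v2.0/tokens", body);
request.contentType = "application/json";

var response = request.execute();
System.log("Keystone answered with status " + response.statusCode);

// The token ID can then be sent as X-Auth-Token header to Nova, Heat, etc.
var token = JSON.parse(response.contentAsString).access.token.id;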

vCO is used to provision virtual machines on the VMware side; Heat is used to provision virtual machines on the OpenStack side.

I want to keep my orchestration and automation layer as simple as possible, so why don't we just use vCO for provisioning new VMs?
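
For the VMware side this is indeed simple. As a rough sketch, a vRO scripting element that clones an existing VM through the vCenter plug-in could look like the following; vm and folder are assumed to be workflow inputs, and the clone name is a placeholder.

// vm: VC:VirtualMachine input, folder: VC:Folder input (e.g. vm.parent)
var cloneSpec = new VcVirtualMachineCloneSpec();
cloneSpec.location = new VcVirtualMachineRelocateSpec();
cloneSpec.powerOn = true;
cloneSpec.template = false;

// cloneVM_Task returns a VC:Task that can be waited on with the library workflows
var task = vm.cloneVM_Task(folder, "myNewVm", cloneSpec);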

From the OpenStack side you lose some flexibility, because you cannot use TOSCA as a description language anymore, but from the orchestration side it is simpler.

In my next blog post I will show you how vCO can talk to OpenStack.

Have fun and Orchestrate the World 😉


New Version of OVF Transfer Plugin for vCO released

After a long time of development I am pleased to announce the availability of the new version of the OVF Transfer Plugin. We reviewed the whole code of the old plugin and rewrote it. With this version we also improved the available features. The plugin description reads as follows:

The OVA/F Transfer plug-in allows you to import and export virtual machines as OVF/OVA templates to/from VMware vCenter via the VMware vCenter/vRealize Orchestrator.

This plug-in provides actions and workflows to use the OVF/A functionalities directly in your own workflows.

This plug-in includes, amongst others, the following features:

  • [Export]  Export of virtual machines as OVF templates
  • [Import]  Import of virtual machines as OVF or OVA templates
  • [Import]  Support for vSS and vDS PortGroups
  • [Import]  Support for multiple vNics
  • [Import]  Support for OVF properties, e.g. used for virtual appliance imports
  • [Import/Export] Supported sources/destinations: Local file, HTTP, HTTPS, FTP, CIFS

We included a lot of new functions which some of you requested, and we fixed all bugs which were reported.

For this plugin, the following types of sources are supported:

–       Local file (local means on the vCO server, e.g. the vCO appliance or the vCO Windows installable)

–       HTTP, HTTPS

–       FTP

–       CIFS/network share (only with the vCO Windows installable version)

The plugin is available at the VMware Solution Exchange https://solutionexchange.vmware.com/store/products/vmware-vcenter-orchestrator-ovf-transfer-plug-in and is free of charge.

Many thanks to my colleague Sascha Bitzer who wrote and maintains the Plugin!


Generate VMs based on load with vCO and vCAC

Do you remember my video on the topic “Generate VMs based on actual load in a Resource Pool” (http://www.vcoportal.de/2012/11/generate-vms-based-on-actual-load-in-a-resource-pool/)?

In that video I showed a vCO workflow which provisioned new VMs when the load on a specific resource pool reached a defined value. When the load decreased, the VMs were thrown away. In that video I worked against a plain vCenter Server. A colleague of mine, Carsten Schäfer, took my workflow and modified it to work with vCAC. He changed the workflow so that virtual machines are not provisioned as “single” VMs; instead he used the add-components function to deploy them to an existing Multi-Machine Service.

Oh yeah, one might call it some first PoC for auto-scaling in vCAC (yep, that’s just for the robots :mrgreen: )

Here is the video:

As I find: great stuff 😉


NetApp OnCommand Workflow Automation package for vCO

Jeremy Goodrum (make sure to follow him on www.virtpirate.com and on Twitter @virtpirate) from NetApp has published a powerful package to integrate vCO and NetApp Workflow Automation:

https://communities.netapp.com/docs/DOC-25899

I “interviewed” Jeremy via email about his solution:

Can you introduce NetApp WFA in two sentences?
Jeremy –  OnCommand Workflow Automation (WFA) is NetApp’s automation framework that enables architects to create storage automation workflows and to integrate with third party SDK toolkits like VMware’s PowerCLI.  WFA gives NetApp storage administrators the ability to turn traditional scripts into full blown repeatable workflows that can intelligently provision and manage storage.

What drove you to build an integration of WFA and vCO?
Jeremy – I have had a lot of customers tell me that they want to leverage vCenter Orchestrator as the orchestration portal, but needed storage automation as well.  People who have seen WFA’s intelligent storage placement and integration desired to add this to vCO’s already robust features.  We had this grand vision of vCO and WFA working together to build more than just traditional VMware Datastores.  We wanted to give vCO the ability to not only integrate with VMware environments, but also manage and provision NetApp storage.  Now vCO admins can build workflows that provision applications to use NetApp storage, manage CIFS home directories and even manage NetApp Snapshot backups.

When building the solution, what were your biggest challenges?
Jeremy – We wanted this to be simple to use and easy to drop in.  The big challenge was in trying to modularize the entire structure to allow end users to drop the package into their environment and just go.  It was important to me that the WFA package be a complete self-contained plug-in type of solution that required very little customization.

When you now (proudly 😀 ) review the solution, what do you like most about it?
Jeremy – Ok, I might be bragging a little here but honestly, I am proud of how simple it is to use.  A NetApp Storage Architect can now create fully intelligent and structured WFA workflows while enforcing NetApp and the customer’s best practices.  The architect can have the workflow build entire tenancies or application stacks and then share the workflow with the VMware team.  Within minutes, the VMware team can literally pull the workflow into vCO and be off the ground.  It really is that simple.

So, watch the videos, download the package, and enjoy the power of automation!


vCO and Veeam Backup & Replication – a powerful combination

Last week I did a webinar for Veeam in Germany. My topic in this webinar was automation and orchestration. Since the webinar was held in German, I decided to write this post to share the information with the rest of the non-German-speaking world.

For those who understand German, the webinar was recorded and can be found here:

http://www.veeam.com/de/videos.html?ad=de-topmenu

Before you start, here are some really important notes on the combination of vCO and Veeam B&R.

Some Veeam B&R commands need a connection to the vCenter Server. When you invoke your commands in a PowerShell window on your backup host, PowerShell uses CredSSP to pass the vCenter Server login information from Veeam B&R to the vCenter Server. If you do the same in a vCO workflow, this does not work! The reason is that the vCO PowerShell plugin only supports Basic and Kerberos authentication. In every environment I have used so far, the servers were in a Windows Active Directory domain. This allowed me to use Kerberos authentication in the vCO PowerShell plugin. Lately I did many tests with Basic authentication and ran into a lot of problems and errors with that type of authentication. So my recommendation is: use Kerberos authentication to avoid a lot of trouble and problems!

Prepare the Backup Server

At the moment, Veeam Backup & Replication has no SOAP or REST API interface. The only available interface is PowerShell. To use PowerShell from vCO, some preparations have to be made.

First of all, Veeam Backup & Replication must be installed with the PowerShell extension. This is done during installation; if you have already installed it without PowerShell, just start the installation again and add the PowerShell feature.

After you have installed the PowerShell extension, you can start it from the management console.

This button starts a PowerShell session with the Veeam extension already loaded. The files for this Veeam PowerShell extension reside in “C:\Program Files\Veeam\Backup and Replication”. In this path, the file “Install-VeeamToolkit.ps1” is important: it loads the extension automatically. We will use this file later in our vCO workflows.

The next thing we have to do is check whether the Veeam backup server has PowerShell version 3.
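
You can verify this directly at a PowerShell prompt; the version table is built into PowerShell:

PS C:\> $PSVersionTable.PSVersion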

For the first workflows and tests, I recommend changing the host execution policy to Unrestricted. When everything runs fine, you can change the execution policy back to RemoteSigned:

PS C:\> Set-ExecutionPolicy Unrestricted      # later: Set-ExecutionPolicy RemoteSigned

After that, we need a command window on the backup server. There we have to enter the following commands:

Run the following command to set the default WinRM configuration values.


c:\> winrm quickconfig

(Optional) Run the following command on the WinRM service to check whether a listener is running, and verify the default ports.

c:\> winrm e winrm/config/listener

The default ports are 5985 for HTTP and 5986 for HTTPS.

Enable basic authentication on the WinRM service.

Run the following command to check whether basic authentication is allowed.

c:\> winrm get winrm/config

Run the following command to enable basic authentication.

c:\> winrm set winrm/config/service/auth @{Basic="true"}

Run the following command to allow transfer of unencrypted data on the WinRM service.

c:\> winrm set winrm/config/service @{AllowUnencrypted="true"}

Enable basic authentication on the WinRM client.

Run the following command to check whether basic authentication is allowed.

c:\> winrm get winrm/config

Run the following command to enable basic authentication.

c:\> winrm set winrm/config/client/auth @{Basic="true"}

Run the following command to allow transfer of unencrypted data on the WinRM client.

c:\> winrm set winrm/config/client @{AllowUnencrypted="true"}

Run the following command to allow WinRM connections from the vCO host.

c:\> winrm set winrm/config/client @{TrustedHosts="vco_host"}

After we have executed these commands, we are done with the backup server. Let's now switch to the vCO server.

Prepare the vCO

From the vCO point of view, the first and most important thing is that the PowerShell plugin is installed and activated on the vCO server. If you are not familiar with this, the documentation can be found here:

http://pubs.vmware.com/orchestrator-plugins/index.jsp?topic=/com.vmware.using.powershell.plugin.doc_10/GUID-8AE1CFF2-F6F0-4233-BDD9-F318E461AB2F.html

When the PowerShell plugin is ready, we can start adding the backup server to our inventory. This is done by starting the PowerShell workflow to “Add a new server”. The required information is self-explanatory.

On the second page, we have to choose “WinRM” as the PowerShell remote host type. As protocol we use “HTTP” or “HTTPS”. The last point is authentication: here we choose “Kerberos”.

On the last page, we have to choose between a “Shared session” and a “User session”. When you choose the shared session, you have to enter user credentials once. When you decide to use “User session”, you have to provide the authentication details in every PowerShell call.

After we have finished the prerequisites, we can start with our first workflow. Let's use a simple one…

Develop the vCO Workflows

If we want to figure out which Veeam jobs exist on our backup server, we need the command Get-VBRJob.

The easiest way to start is to copy the workflow “Invoke a PowerShell script” into a folder of your choice.

There you have to insert a second scripting element and move the host and script inputs to attributes.

In this scripting element we put our script, which builds the PowerShell code.

To use a Veeam PowerShell command in a vCO workflow, we need somewhat more than just the command itself. We have to load the Veeam extension into the PowerShell session which we invoke from the vCO server. Here is the complete code for the call:

script = "# Load Veeam Powershell Extension into the actual session \n"
+ "'C:/\Program Files/\Veeam/\Backup and Replication/\Install-VeeamToolkit.ps1' \n"
+ "add-pssnapin VeeamPSSnapin \n"
+ "# Veeam is loaded \n"
+ "Get-VBRJob";

For us the full example looks like this.

Now you can use this command in your own workflows. However, that command alone isn't really useful yet. Let's insert a virtual machine into a backup job after its creation. For that we have to use the Veeam command Add-VBRJobObject. This command needs some information which we can collect during the session. A full command to insert a VM into a job looks like this:

Add-VBRJobObject -Job $(Get-VBRJob -Name JOBNAME) -Server $(Get-VBRServer | Where {$_.Type -eq 'VC'}) -Objects VMNAME

The values JOBNAME and VMNAME must be specified as vCO attributes or inputs.

When you now try to execute this like the command before, you will get an error like this one:

Failed to login to “vcenter.example.com” by SOAP, port 443, user “root”, proxy srv: port:0
+ CategoryInfo : InvalidOperation: (Veeam.Backup.Po…FindVBRViEntity:FindVBRViEntity) [Find-VBRViEntity], Exception
+ FullyQualifiedErrorId : Backup,Veeam.Backup.PowerShell.Command.FindVBRViEntity

Why does this happen? Here we run into trouble with the authentication against the vCenter Server. If everything was fine before and you can execute the command from a PowerShell prompt, the problem is in your workflow. As described before, we have to authenticate against the vCenter Server from our workflow; vCO has no option to do this automatically. We have to change our workflow to this:

 

script = "invoke-command -session $(New-PSSession <strong>BACKUPSERVER</strong> -Authentication Kerberos -Credential $(new-object -typename System.Management.Automation.PSCredential -argumentlist<strong> USER@DOMAIN</strong>, $(convertto-securestring -string '<strong>PASSWORD</strong>' -asplaintext -force))) -scriptblock{ set-item wsman:localhost\Shell\MaxMemoryPerShellMB 1024"
      + "\n Add-PSSnapin -Name VeeamPSSnapIn -ErrorAction SilentlyContinue"
      + "\n Add-VBRJobObject -Job $(get-VBRjob -Name "+ <strong>JOBNAME</strong> + ") -Server $(get-VBRServer| Where {$_.Type -eq 'VC'}) -Objects " + VMNAME + " }"

This script looks quite different from the one before. What do we do here? We create a new PowerShell session on the backup server (New-PSSession). For this session we define a username (USER@DOMAIN) and a password (PASSWORD). It is very important that the user is written as user@domain; otherwise the Kerberos authentication will not work and the workflow will fail! We also raise the memory limit for the new shell to 1024 MB (set-item wsman:localhost\Shell\MaxMemoryPerShellMB 1024); if the shell runs out of memory, the workflow will fail as well! Finally, we load the Veeam snap-in and execute the job…

That’s easy, isn’t it?

With this background knowledge you can start to implement your own automation workflows, including backup of your virtual machines with Veeam. It is also possible to integrate replication: you just have to use the replication commands and start your automation…
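
As a starting point, the same pattern can be used to look at replication jobs. Here is a small sketch, assuming that Get-VBRJob also returns replica jobs and that the job objects expose a JobType property (verify both assumptions against your Veeam version):

script = "# Load the Veeam PowerShell extension, then list replication jobs \n"
+ "& 'C:\\Program Files\\Veeam\\Backup and Replication\\Install-VeeamToolkit.ps1' \n"
+ "Add-PSSnapin VeeamPSSnapin \n"
+ "# Filtering on JobType is an assumption; inspect a job object to verify the property \n"
+ "Get-VBRJob | Where-Object { $_.JobType -eq 'Replica' }";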

In the Veeam community there is a good PowerShell forum, so if you have trouble with your Veeam PowerShell commands, have a look there:

http://forums.veeam.com/viewforum.php?f=26

Have fun with the Power of vCO 😉