About Author: Christian Strijbos

Posts by Christian Strijbos


Automating Veeam with vCO and the Restful API

Do some of you remember my blog post about automating Veeam with vCO and PowerShell (http://www.vcoportal.de/2013/04/vco-and-veeam-backupreplication-a-powerful-combination/)? The problem at that time was that Veeam only offered a PowerShell API. Some time has passed since then, and Veeam has released version 7 of Backup&Replication. In this version, Veeam ships an Enterprise Manager which provides a RESTful API for automation. Please be aware that the RESTful API is only available in the Enterprise Plus edition of Veeam B&R. The Veeam RESTful API can be accessed over these URLs and ports:

Veeam RESTful API HTTP URL: http://<Enterprise-Manager>:9399/api/

Veeam RESTful API HTTPS URL: https://<Enterprise-Manager>:9398/api/

A comparison of the editions can be found here: http://www.veeam.com/backup-version-standard-enterprise-editions-comparison.html So let's start with the integration. The first thing we have to do is to connect vCO and the Veeam B&R server. For that we need the vCO REST plugin, which must be installed via the vCO plugin page. Details on how to install a plugin can be found in the vCO documentation for the plugin (http://pubs.vmware.com/orchestrator-plugins/index.jsp?topic=/com.vmware.using.http_rest.plugin.doc_10/GUID-D303E545-0BFE-4EEB-BAC3-7776EFCDB516.html)

After we have installed the plugin, we can connect vCO and the Veeam server. For this there is a built-in workflow "Add a REST host".

In the workflow we have to give our REST host a name and define the URL. The settings for the connection and operation timeout don't have to be changed.

The authentication for the B&R server has to be set to Basic.

The next thing is to choose a session mode. You can choose between a "Shared Session" and a "Per User Session". Which one you choose depends on your security requirements. I will use a shared session in the screenshots. For the shared session you have to define a user and a password.

If you use an HTTPS connection, you have to accept the SSL certificate of the B&R server.

At the end you start the workflow, and if everything goes well, it finishes without any problems.
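By the way, the same registration can also be done with the REST plugin's scripting API. The following is only a rough sketch of that idea (host name, URL and credentials are made-up example values; the built-in workflow above is the comfortable way):

// Sketch only: register the Enterprise Manager as a REST host from a scriptable task
var host = RESTHostManager.createHost("Veeam Enterprise Manager");
host.url = "https://enterprise-manager.example.local:9398";

// Basic authentication with a shared session, just like in the workflow above
host.authentication = RESTAuthenticationManager.createAuthentication("Basic", ["Shared Session", "DOMAIN\\backupadmin", "secretPassword"]);

// Persist the host in the plugin inventory
var newHost = RESTHostManager.addHost(host);
System.log("Registered REST host: " + newHost.name);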

 

The next thing you can do is to import the Veeam REST schema. The schema is located under "C:\Program Files\Veeam\Backup and Replication\Enterprise Manager\schemas\RestAPI.xsd" on your B&R server. To import the file, you have to copy it to a web server so that the vCO server can fetch the schema. To start the import we run the workflow "Add schema to REST Host".

In the workflow, we have to choose the REST host to which we want to import the schema.

On the next screen we have to define the web URL where the schema file is located.

If everything goes well, the workflow finishes without any error and the schema is available under the REST host.

During my tests I had some problems with the schema, so I decided to implement my tasks manually… To implement the Veeam RESTful API functions manually, the Veeam documentation is required. The documentation can be found here: http://helpcenter.veeam.com/backup/70/rest/overview.html

The first thing we have to do is to open a RESTful API connection from the vCO server to the B&R server. To connect to the API, two commands are used.

The first command is:

GET http://EnterpriseManager:9399/api/

and the second is

POST http://EnterpriseManager:9399/api/sessionMngr/

To send the commands to the Enterprise Manager you need authentication. Because we configured our vCO REST host with credentials, vCO handles this authentication for us. Now let's put these requests into a vCO workflow. We create a new workflow which needs one input parameter of type REST:RESTHost.

In the Workflow itself we place a “Scriptable task” to insert our Code.

In this “Scriptable Task” we include this code:


// Open a session to the Enterprise Manager; vCO adds the Basic authentication
// credentials that were stored with the REST host
var PostResponse = BRHost.createRequest("POST", "/api/sessionMngr/", null).execute();

System.log("Connection Successful: " + PostResponse.contentAsString);

So, what does this code do? The first line opens the connection to the Enterprise Manager; the authentication is handled by the vCO server (with the credentials provided when the REST host was added). The second line just logs the response of the connection and the resources that are available to us. Now let's expand this workflow and see which backup jobs exist.


// Authenticate first, then query the list of backup jobs
var PostResponse = BRHost.createRequest("POST", "/api/sessionMngr/", null).execute();
var BackupResponse = BRHost.createRequest("GET", "/api/jobs?type=job", null).execute();

System.log("Backup Jobs " + BackupResponse.contentAsString);

The interesting part here is the second line, which gets the backup jobs from the server:

Backup Jobs <?xml version="1.0" encoding="utf-8"?><EntityReferences xmlns="http://www.veeam.com/ent/v1.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><Ref UID="urn:veeam:Job:a8f5b929-c5c8-4901-b749-d8c7ea9b8462" Name="Job1" Href="http://192.168.157.207:9399/api/jobs/a8f5b929-c5c8-4901-b749-d8c7ea9b8462" Type="JobReference"><Links><Link Href="http://192.168.157.207:9399/api/backupServers/2b98f7dd-a2e5-4a53-9704-dfec9b30a320" Name="192.168.157.207" Type="BackupServerReference" Rel="Up"/><Link Href="http://192.168.157.207:9399/api/jobs/a8f5b929-c5c8-4901-b749-d8c7ea9b8462?format=Entity" Name="Job1" Type="Job" Rel="Alternate"/><Link Href="http://192.168.157.207:9399/api/jobs/a8f5b929-c5c8-4901-b749-d8c7ea9b8462/backupSessions" Type="BackupJobSessionReferenceList" Rel="Down"/></Links></Ref></EntityReferences>

As we can see, we have a backup job with the name "Job1". Now let's have a look at which VMs are included in the job. First of all, we need an input parameter for the job name (we will use this to filter the results…).

Once that is in place, we extend our code:

// Authenticate and fetch the list of backup jobs
var PostResponse = BRHost.createRequest("POST", "/api/sessionMngr/", null).execute();
var BackupResponse = BRHost.createRequest("GET", "/api/jobs?type=job", null).execute();

// Parse the XML response and walk through all <Ref> elements (one per job)
var XMLFile = XMLManager.fromString(BackupResponse.contentAsString);
var XMLelement = XMLFile.documentElement.getElementsByTagName("Ref");

for (var i = 0; i < XMLelement.getLength(); i++) {
    var BackupJob = XMLelement.item(i).getAttribute("Name");
    if (BackupJob == BackupJobName) {
        // UID looks like "urn:veeam:Job:<GUID>" - cut off the prefix to get the job GUID
        var UIDBackup = XMLelement.item(i).getAttribute("UID");
        var UIDBackupJob = UIDBackup.substring(14, 50);
        // Ask the job for its included objects (the VMs)
        var VMResponse = BRHost.createRequest("GET", "/api/jobs/" + UIDBackupJob + "/includes", null).execute();
        System.log("VMs included: " + VMResponse.contentAsString);
    }
}

When we make a call to the RESTful API, we get an XML response back. We call the XMLManager and place the content into the variable XMLFile. In the next line we filter the XML response for the "Ref" elements. Within the "Ref" elements we search in a loop for the attribute "Name" and place its value in a variable. Then we check this name against the backup job name and extract the unique identifier of the job. With this unique identifier we make a request against the job and get the list of the included VMs.

The answer from the call is this:

[2014-02-09 23:15:06.281] [I] VMs included: <?xml version="1.0" encoding="utf-8"?><ObjectsInJob xmlns="http://www.veeam.com/ent/v1.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><ObjectInJob Href="http://192.168.157.207:9399/api/jobs/a8f5b929-c5c8-4901-b749-d8c7ea9b8462/includes/b39bdbf3-8333-4525-b393-110ef3ea9332" Type="ObjectInJob"><Links><Link Href="http://192.168.157.207:9399/api/jobs/a8f5b929-c5c8-4901-b749-d8c7ea9b8462/includes/b39bdbf3-8333-4525-b393-110ef3ea9332" Name="WinXP2" Type="ObjectInJob" Rel="Delete"/><Link Href="http://192.168.157.207:9399/api/jobs/a8f5b929-c5c8-4901-b749-d8c7ea9b8462?format=Entity" Name="Job1" Type="Job" Rel="Up"/></Links><ObjectInJobId>b39bdbf3-8333-4525-b393-110ef3ea9332</ObjectInJobId><HierarchyObjRef>urn:VMware:Vm:ea294886-2e12-419b-934f-70b371e0f746.vm-15</HierarchyObjRef><Name>WinXP2</Name><DisplayName>WinXP2</DisplayName><Order>0</Order><GuestProcessingOptions><AppAwareProcessingMode>RequireSuccess</AppAwareProcessingMode><FileSystemIndexingMode>ExceptSpecifiedFolders</FileSystemIndexingMode><IncludedIndexingFolders/><ExcludedIndexingFolders><Path>%windir%</Path><Path>%ProgramFiles%</Path><Path>%TEMP%</Path></ExcludedIndexingFolders><CredentialsId/></GuestProcessingOptions></ObjectInJob></ObjectsInJob>
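If you need more than the raw XML, you can parse the includes response with the same XMLManager calls as above. Here is just a small sketch (not part of the original workflow) that pulls the VM names out of the VMResponse variable from the scriptable task:

// Sketch: list the VM names contained in the job instead of dumping the raw XML
var VMXml = XMLManager.fromString(VMResponse.contentAsString);
var links = VMXml.documentElement.getElementsByTagName("Link");

for (var j = 0; j < links.getLength(); j++) {
    // the <Link> entries of type "ObjectInJob" carry the VM name in their Name attribute
    if (links.item(j).getAttribute("Type") == "ObjectInJob") {
        System.log("VM in job: " + links.item(j).getAttribute("Name"));
    }
}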

So with the techniques and methods shown in this post, you are able to do a lot more interesting stuff with vCO and the Veeam Enterprise Manager. A complete list of all Enterprise Manager RESTful API commands can be found here: http://helpcenter.veeam.com/backup/70/rest/em_web_api_reference.html Have fun with your automation and orchestrate the world 😉


VMware vCenter Server Heartbeat – Restore on a second node….a journey….

Warning: The following post has absolutely nothing to do with automation or Orchestration! Be aware of this while reading the post 😉

In the last days I had a job for a customer who wanted to implement vCenter Server Heartbeat. For those of you who are not familiar with vCSHB, here is the description from the VMware website:

“VMware® vCenter™ Server Heartbeat™ protects VMware® vCenter Server™ virtual infrastructure from problems related to applications, configurations, operating systems, networks and hardware.”

If I had to describe it in my own words: vCSHB is a cluster service for different VMware products like the vCenter Server (including the SQL Server if it is installed on the same server), VMware Composer and a lot more…..

As a picture, vCSHB looks like this:

vCSHB is based on Neverfail with tuning for VMware…… With vCSHB you can build different "models" to protect your vCenter Server. This could be a physical-to-virtual model or also a physical-to-physical model. There are many more options to build your application HA with vCSHB….. Most implementations that I have done so far were physical-to-virtual; this model is relatively simple to implement and you get vCSHB up and running easily…..

Now let's come back to the customer project…. The customer wanted a physical-to-physical implementation. The servers were located at a remote site, which meant for us that we didn't have a chance to get our hands on the hardware. The only possibility for us to work with the server was to log in to the Windows system installed on it via RDP, or to use the remote management board.

For the customer we started with the primary server. The first thing you have to do, besides the installation of Windows, is the installation of the VMware vCenter Server, a SQL Server and the other VMware products you want to protect via vCSHB. After you are finished with that, you can start with the installation of vCSHB. We did it that way and everything went fine. During the vCSHB installation, a backup of the primary system (the system where you installed vCSHB first) is taken. After the first node is installed, it is necessary to install vCSHB on the second node. The only "prerequisite" on the second node is an installed Windows with the Windows Backup feature. The word installation is probably wrong in this context, because you start the installation and then restore the backup of the first node.

Here began the Journey with the installation…….

On our first try, the installation failed with an error. The log provided this information:

Log of files for which recovery failed:

C:\Windows\Logs\WindowsServerBackup\FileRestore_Error-08-11-2013_13-16-29.log

wbadmin 1.0 – Backup command-line tool

(C) Copyright 2012 Microsoft Corporation. All rights reserved.

Starting a system state recovery operation [08.11.2013 14:17].

Processing files for recovery. This might take a few minutes…

Processed (176) files.

Processed (1657) files.

Processed (19373) files.

Processed (27841) files.

Processed (47849) files.

Processed (53873) files.

Processed (74813) files.

Processed (97831) files.

Processed (120041) files.

Processed (120041) files.

Processed (120041) files.

Summary of the recovery operation:

——————–

The recovery of the system state failed [29.11.2013 14:19].

Log of files successfully recovered:

C:\Windows\Logs\WindowsServerBackup\SystemStateRestore-29-11-2013_13-17-40.log

Log of files for which recovery failed: C:\Windows\Logs\WindowsServerBackup\SystemStateRestore_Error-29-11-2013_13-17-40.log

Access is denied.

An exception occurred:

Message: Execution failed with return code -3. Restore was aborted.

See vCSHB-Ref-2273 for resolution procedure.

At:

NTBackupRestoreThread::Run()

wbadmin start systemstaterecovery -backupTarget:”\\Server\share” -version:29/08/2013-11:51 -quiet

The installation cannot continue.

The most interesting part of the log was the pointer to the vCSHB reference 2273 for the resolution procedure. It links to a VMware KB article which can be found here:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004359

Based on the information provided in the KB article and the batch file, we tried a second restore. This second restore ended up with the same error message……after a lot of searching we didn't find any suitable answer for our problem……so we still had to solve it ourselves….

The next thing we tried was to reinstall the second node and run the batch file before attempting the next restore…..again this didn't end in a success story…..

So we decided to open a VMware SR. We explained our problem and the things we had already tried…..sadly the VMware support couldn't help us much, because his only possible solution was the KB article. This was the answer from the support…..

"If the steps in the KB do not work then there is no other workaround that we have. The problem here is Heartbeat using the underlying MS technology to do the backup and clone, so if this is failing there is nothing I can do from the VMware side." Although he couldn't help us directly, we could use the hint "underlying MS technology"……

So again we started to search, but this time we focused on the Microsoft topics for backup and restore. In the Microsoft forums we found some information from other users who had problems restoring a backup, made with the MS backup technology, on a physical node. They could restore their nodes if they:

– Deactivated the public (primary) network connection

– Didn't use RDP to connect to the server

– Used the private (second) network connection to access the backup file

We gave these tips a chance and could restore the backup on the second node……yes 🙂 ….. After the restore was successful, the rest was easy going and we could implement, configure and test the Heartbeat installation.

From a discussion with another vExpert, Mike Schubert, I know that it is also possible to restore vCSHB with a "RAID" copy (take a disk from the first node and place it in the second node….) plus some manual work…. I hope you can use this information if you get into trouble during your Heartbeat installation….so have fun and orchestrate the world 😉

———————————————————————-

Update 09.02.2014

For the German-speaking readers there is another way, provided by Mike Schubert, to restore the Heartbeat installation on the second node. Mike's article can be found here: http://www.die-schubis.de/doku.php/vmware:heartbeat#physical_to_physical_-_ein_anderer_weg

Mike's way is to "copy" the installation via a RAID copy with some additional manual tasks.


Generate VMs based on load with vCO and vCAC

Do you remember my video on the topic "Generate VMs based on actual load in a Resource Pool" (http://www.vcoportal.de/2012/11/generate-vms-based-on-actual-load-in-a-resource-pool/)?

In that video I showed a vCO workflow which provisioned new VMs when the load on a specific resource pool reached a defined value. When the load decreased, the VMs were thrown away again. In that video, I worked against a plain vCenter Server. A colleague of mine, Carsten Schäfer, took my workflow and modified it to work with vCAC. He changed the workflow so that virtual machines are not provisioned as "single" VMs; instead he used the add-components function to deploy them to an existing Multi-Machine Service.

Oh yeah, one might call it a first PoC for auto-scaling in vCAC (yep, that's just for the robots :mrgreen: )

Here is the video:

As I found great stuff 😉


Application Onboarding with vCO and WaveMaker

Last month I had a customer who wanted an onboarding process for his virtual machines. The onboarding should work as follows:
– There is a standard Linux template for all virtual machines
– The users can choose on a web site which machine type is required
– From the website a vCO workflow is started which clones the template and installs the required packages into the VM
– At the end the application configuration is done
To be honest, the customer requirements were quite simple, so the usage of Puppet or Chef was not an option.
After everything the customer wanted was clear, I thought about the options and decided to use vCO and WaveMaker for the solution.
I don't like passwords in workflows, so the first thing I did was to create a vCO SSH key. This can be done with a predefined workflow in vCO.

After I had my SSH public key, I created the Linux template with the public key in the file /root/.ssh/authorized_keys
You can do this manually or also use a predefined vCO Workflow

This allows me to administer the Linux VM without the need for a password.
The next thing I did was to create a folder on the vCO appliance. I created the folder under the root directory with the name ConfigFiles. The vCO has a secure default configuration, so to allow the vCO server to access this folder you must permit this explicitly. For that, you have to edit the file js-io-rights.conf on the vCO appliance; the file is located under the path:

/opt/vmo/app-server/server/vmo/conf/js-io-rights.conf

I added my path, and after that the file looked like this:


-rwx /
+rwx /var/run/orchestrator
+rx ../../configuration/jetty/logs/
+rx ../server/vmo/log/
+rx ../bin/
+rx ./boot.properties
+rx ../server/vmo/conf/
+rx ../server/vmo/conf/plugins/
+rx ../server/vmo/deploy/vmo-server/vmo-ds.xml
+rx ../../apps/
+r ../../version.txt
+rw /ConfigFiles

After you have done this editing, you must restart the vCO service for the changes to take effect.
For later use, I placed a file with the name "named.conf" into the folder. This file contains the configuration for the bind installation which I will use later in this blog post.
For our onboarding we need the SCP workflow. With the predefined workflow we have a little problem…..it doesn't work with SSH keys. So we have to modify it, and for that we copy the existing workflow.

I called my workflow “SCP Put command with SSH Key”. After we have copied the workflow, we have to modify it.

We have to change the content of the "SCP put file" scriptable task.

try {
    var session = new SSHSession(hostName, username);

    if (passwordAuthentication) {
        System.log("Connecting with password");
    } else {
        // fall back to the vCO key pair if no explicit path is given
        if (path == null || path == "") {
            System.log("using default");
            path = defaultKeyPairPath;
        }
        System.log("Connecting with key pair (" + path + ")");
        password = passphrase;
    }

    session.connectWithPasswordOrIdentity(passwordAuthentication, password, path);
    System.log("Connected!");

    session.putFile(localFile, remoteFile);
    output = session.getOutput();
    error = session.getError();
    exitCode = session.exitCode;

    System.log("Output: '" + output + "'");
    System.log("Error: '" + error + "'");
    System.log("Exit code: '" + exitCode + "'");

    session.disconnect();

} catch (e) {
    throw "Unable to execute command " + e;
}

After we have made the scripting changes, we need to add an attribute. I chose defaultKeyPairPath as the name; it is of type string and gets the value ../server/vmo/conf/vco_key (the path to the vCO SSH key).

On the input side of the workflow we have to add three inputs. The first one is passwordAuthentication of type boolean; as default value we choose no. The second input variable is path of type Path. The last one we need is passphrase of type SecureString; it is only required if we protect our SSH key with a passphrase.

After we have created the "SCP Put command with SSH Key" workflow, it is time to build our onboarding workflow.
In this blog we create a simple onboarding workflow which configures a name server with bind. The template I use is a CentOS minimal installation with the VMware Tools and the SSH public key of the vCO server installed.
I created a new Workflow with the name “Blog_Configure_DNS”.

In this workflow we go to the schema and add the following Workflows.
The first one is the workflow "Clone, Linux with single NIC" to clone the template. The second workflow we use is "Run SSH command". The next one is our "SCP put command with SSH Key", and as the last one we use "Run SSH command" again.

Now we rename the elements: the first "Run SSH…" becomes "Install named", the "SCP put command…" becomes "Put configuration named.conf", and the last workflow becomes "Restart named".
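Just as a rough sketch (these values are not from the original setup, only plausible defaults for a CentOS template), the parameters bound to the renamed elements could look like this:

// Hypothetical bindings for the renamed elements - adjust them to your template
var installCommand = "yum -y install bind bind-utils";    // command for "Install named"
var restartCommand = "service named restart";             // command for "Restart named"

// "Put configuration named.conf": copy the prepared file from the vCO appliance into the VM
var localFile = "/ConfigFiles/named.conf";                // the file placed on the appliance earlier
var remoteFile = "/etc/named.conf";                       // target path inside the cloned VM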

After the workflow design, you have to create the needed attributes, inputs and outputs for every element. I showed some examples in my "LittleCMDB" series, so if you are not familiar with creating the needed parameters, take a look here:

http://www.vcoportal.de/2012/07/introducing-the-littlecmdb-a-vcenter-orchestrator-wavemaker-demo-project/
and following.

When you're finished with this, we have to build the WaveMaker interface.
There are also a lot of good examples available on how to do this. You will find a lot of information here:
Using WaveMaker as Web-Frontend for vCO

http://www.vcoportal.de/2011/11/using-wavemaker-as-web-frontend-for-vco/

Off-topic(?): Lessons learned with WaveMaker
http://www.vcoportal.de/2012/02/lessons-learned-with-wavemaker/

Howto setup LDAP-Authentication for Wavemaker (Part 1 & Part2)
http://www.vcoportal.de/2012/05/howto-setup-ldap-authentication-for-wavemaker-part-1/

http://www.vcoportal.de/2012/07/introducing-the-littlecmdb-a-vcenter-orchestrator-wavemaker-demo-project/

To choose the right configuration, you can create a drop-down field with different options. Here is a screenshot for the DNS server.

And here is a screenshot for the DHCP server with the definition of the scope.

Besides the option to use WaveMaker, you could also provision automatically depending on the actual load of a resource pool. I made an example video showing how this could be done with vCO. You can find the video here:

http://www.vcoportal.de/2012/11/generate-vms-based-on-actual-load-in-a-resource-pool/
So have fun and orchestrate your virtual environment 😉


vCO and Veeam Backup&Replication a powerful combination

Last week I did a webinar for Veeam in Germany. My topic in this webinar was automation and orchestration. Because the webinar was in German, I decided to write this post to share the information with the rest of the non-German speaking world.

For those who understand German the Webinar was recorded and can be found here:

http://www.veeam.com/de/videos.html?ad=de-topmenu

Before you start some really important notes on the combination of vCO and Veeam B&R.

Some Veeam B&R commands need a connection to the vCenter Server. When you invoke your commands in a PowerShell window on your backup host, PowerShell uses CredSSP to pass the vCenter Server login information from Veeam B&R to the vCenter Server. If you do the same in a vCO workflow, this does not work! The reason is that the vCO PowerShell plugin only supports Basic and Kerberos authentication. In every environment I have worked with so far, the servers were members of a Windows AD. This allowed me to use Kerberos authentication in the vCO PowerShell plugin. Lately I did many tests with Basic authentication and had a lot of problems and errors with that type of authentication. So my recommendation is: use Kerberos authentication to avoid a lot of trouble and problems!

Prepare the Backup Server

At the moment Veeam Backup&Replication has no SOAP or REST API interface; the only available interface is PowerShell. To use PowerShell from vCO, some preparations have to be made.

First of all, Veeam Backup&Replication must be installed with the PowerShell extension. This is done during the installation, or, if you already installed it without PowerShell, by simply starting the installation again and adding the PowerShell feature.

After you have installed the PowerShell Extension, you can start it from the Management Console.

This button starts a PowerShell shell with the Veeam extension already loaded. The files for this Veeam PowerShell extension reside in "C:\Program Files\Veeam\Backup and Replication". In this path, the file "Install-VeeamToolkit.ps1" is important to load the extension automatically. We will use this file later in our vCO workflows.

The next thing we have to do is to check that the Veeam backup server has PowerShell version 3 installed.

For the first workflows and tests I recommend changing the host execution policy to Unrestricted. When everything works fine, you can change the execution policy to RemoteSigned:

Set-ExecutionPolicy Unrestricted    (later: Set-ExecutionPolicy RemoteSigned)

After that, we need a command window on the backup server. There we have to enter the following commands:

Run the following command to set the default WinRM configuration values.


c:\> winrm quickconfig

(Optional) Run the following command on the WinRM service to check whether a listener is running, and verify the default ports.

c:\> winrm e winrm/config/listener

The default ports are 5985 for HTTP and 5986 for HTTPS.

Enable basic authentication on the WinRM service.

Run the following command to check whether basic authentication is allowed.

c:\> winrm get winrm/config

Run the following command to enable basic authentication.

c:\> winrm set winrm/config/service/auth @{Basic="true"}

Run the following command to allow transfer of unencrypted data on the WinRM service.

c:\> winrm set winrm/config/service @{AllowUnencrypted="true"}

Enable basic authentication on the WinRM client.

Run the following command to check whether basic authentication is allowed.

c:\> winrm get winrm/config

Run the following command to enable basic authentication.

c:\> winrm set winrm/config/client/auth @{Basic="true"}

Run the following command to allow transfer of unencrypted data on the WinRM client.

c:\> winrm set winrm/config/client @{AllowUnencrypted="true"}

Run the following command to enable winrm connections from vCO host.

c:\> winrm set winrm/config/client @{TrustedHosts="vco_host"}

After we have executed the commands, we are done with the backup server. Let's now switch to the vCO server.

Prepare the vCO

From the vCO point of view, the first and most important thing is that the PowerShell plugin is installed and activated on the vCO server. If you are not familiar with this, the documentation can be found here:

http://pubs.vmware.com/orchestrator-plugins/index.jsp?topic=/com.vmware.using.powershell.plugin.doc_10/GUID-8AE1CFF2-F6F0-4233-BDD9-F318E461AB2F.html

When the PowerShell plugin is ready, we can add the backup server to our inventory. This can be done by starting the PowerShell plugin workflow for adding a new server. The needed information is self-explanatory.

On the second page we have to choose "WinRM" as the PowerShell remote host type. As protocol we use "HTTP" or "HTTPS". The last point is the authentication; here we choose "Kerberos".

On the last page we have to choose whether we use a "Shared session" or a "User session". When you choose the shared session, you have to insert user credentials. When you decide to use "User session", you have to provide the authentication details with every PowerShell call.

After we are finished with the prerequisites, we can start with our first workflow. Let's use a simple one…..

Develop the vCO Workflows

If we want to figure out which Veeam Jobs exist on our Backup Server we need the command Get-VBRJob.

The easiest way to start is to copy the Workflow “Invoke a PowerShell Script” into a folder of your choice.

There you have to insert a second scripting element and move the host and script inputs to attributes.

In this scripting element we put our script which includes the PowerShell code.

To use a Veeam PowerShell command in a vCO workflow, we need somewhat more input than just the command. We have to load the Veeam extension into the PowerShell session which we invoke from the vCO server. Here is the complete code for the call:

script = "# Load Veeam Powershell Extension into the actual session \n"
    + "'C:\\Program Files\\Veeam\\Backup and Replication\\Install-VeeamToolkit.ps1' \n"
    + "add-pssnapin VeeamPSSnapin \n"
    + "# Veeam is loaded \n"
    + "Get-VBRJob";

For us the full example looks like this.

Now you can use this command in your own workflows. But the command alone isn't really useful yet. Let's insert a virtual machine into a backup job after its creation. For that we have to use the Veeam command Add-VBRJobObject. For this command we need some information which we can collect during the session. A full command to insert a VM into a job looks like this:

"Add-VBRJobObject -Job $(Get-VBRJob -Name " + JOBNAME + ") -Server $(Get-VBRServer | Where {$_.Type -eq 'VC'}) -Objects " + VMNAME

The Values JOBNAME and VMNAME must be specified as vCO Attributes or Inputs.
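If you simply wrap this command in the same loader script as above, it looks like this (the combined snippet is just a sketch for clarity; it is not shown in the original post):

script = "# Load Veeam Powershell Extension into the actual session \n"
    + "add-pssnapin VeeamPSSnapin \n"
    + "Add-VBRJobObject -Job $(Get-VBRJob -Name " + JOBNAME + ") -Server $(Get-VBRServer | Where {$_.Type -eq 'VC'}) -Objects " + VMNAME;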

When you now try to execute this like the command before:

You will get an error like this one:

Failed to login to "vcenter.example.com" by SOAP, port 443, user "root", proxy srv: port:0
+ CategoryInfo : InvalidOperation: (Veeam.Backup.Po…FindVBRViEntity:FindVBRViEntity) [Find-VBRViEntity], Exception
+ FullyQualifiedErrorId : Backup,Veeam.Backup.PowerShell.Command.FindVBRViEntity

Why does this happen? Here we run into trouble with the authentication against the vCenter Server. If everything was fine before and you can execute the command from a PowerShell shell, the problem is in your workflow. As described before, we have to authenticate against the vCenter Server from our workflow, and vCO has no option to do this automatically. We have to change our workflow to this:

 

script = "invoke-command -session $(New-PSSession BACKUPSERVER -Authentication Kerberos -Credential $(new-object -typename System.Management.Automation.PSCredential -argumentlist USER@DOMAIN, $(convertto-securestring -string 'PASSWORD' -asplaintext -force))) -scriptblock{ set-item wsman:localhost\\Shell\\MaxMemoryPerShellMB 1024"
    + "\n Add-PSSnapin -Name VeeamPSSnapIn -ErrorAction SilentlyContinue"
    + "\n Add-VBRJobObject -Job $(get-VBRjob -Name " + JOBNAME + ") -Server $(get-VBRServer| Where {$_.Type -eq 'VC'}) -Objects " + VMNAME + " }";

This script looks really different from the script before. What do we do here? We generate a new PowerShell session on the backup server (New-PSSession). For this session, we define a username (USER@DOMAIN) and a password (PASSWORD). For the username it is very important that it is written as user@domain; otherwise the Kerberos authentication will not work and the workflow will fail! Then we raise the memory for the new shell to 1024 MB (set-item wsman:localhost\Shell\MaxMemoryPerShellMB 1024); if we don't increase the memory, the workflow will also fail! Finally we load the Veeam snap-in and execute the script job…..

That's easy, isn't it?

With this background knowledge you can start to implement your own automation workflows with integrated backup of your virtual machines with Veeam. It is also possible to integrate the replication….you just have to implement the replication command and start your automation….
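For example (only a hedged sketch, not part of the webinar content), the same invoke-command wrapper could start an existing replication job via the Start-VBRJob cmdlet; REPLICAJOBNAME would be a workflow input just like JOBNAME above:

script = "invoke-command -session $(New-PSSession BACKUPSERVER -Authentication Kerberos -Credential $(new-object -typename System.Management.Automation.PSCredential -argumentlist USER@DOMAIN, $(convertto-securestring -string 'PASSWORD' -asplaintext -force))) -scriptblock{ set-item wsman:localhost\\Shell\\MaxMemoryPerShellMB 1024"
    + "\n Add-PSSnapin -Name VeeamPSSnapIn -ErrorAction SilentlyContinue"
    + "\n Start-VBRJob -Job $(Get-VBRJob -Name " + REPLICAJOBNAME + ") }";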

In the Veeam community there is a good PowerShell forum. So if you have trouble with your Veeam PowerShell commands, take a look there:

http://forums.veeam.com/viewforum.php?f=26

Have fun with the Power of vCO 😉