Call Workflows from outside


Honeypot as a Service (HaaS) Part 3

This is the third post of the HaaS series.

Now that we have everything up and running, we should take a deeper look at vRealize Log Insight. That is where our notifications arrive, in Interactive Analytics. In the normal case nobody will monitor Interactive Analytics all the time, therefore we create a dashboard. The created dashboard is only for your custom view; the Log Insight documentation states it like this: “You can add, modify, and delete dashboards in your Custom Dashboards space.”

In the first step we create a dashboard for our personal view. Creating it is quite easy: if you have an alert in Interactive Analytics, you can create the dashboard from there.


Just click the New Dashboard icon on the right side. In the wizard you create a new dashboard: provide a name and, if you want, share the dashboard with other users in the environment.

I use the chart graph for my dashboard. After you are finished you will find your dashboard in the dashboard view under Shared Dashboards.

Quite easy, right? But wouldn't it be even better to get an alert notification from Log Insight?

This is also very easy. Go back to Interactive Analytics and click the Alerts button.

In the window we choose “Create Alert from Query”.

There you provide the required details for the alert. If you use e-mail as the alert notification, be sure that you have configured your SMTP server beforehand.

Pretty cool so far, right?

In the last months I did a couple of NSX implementations. NSX provides some really cool features, like the possibility to move VMs into quarantine and isolate them from communication with other VMs. So why not use this feature to migrate the VM from which the access violation was made into a quarantine location, where we can research what is happening on that VM? When the access came from a physical machine, we can instead create a firewall rule which denies its access to the virtual environment. From my point of view this is security in an automated way.

In this blog post I will not show how to install and configure the NSX part of this series; there is a lot of useful information available elsewhere.

To achieve this goal, we can use a REST API call to the vRO server that starts a workflow which we develop to migrate the VMs or create the firewall rule. One of the first things we need is the ability to interact with NSX from VMware Orchestrator. Therefore, we install the VMware Orchestrator NSX plugin. The plugin can be found on the VMware page:

https://my.vmware.com/web/vmware/details?downloadGroup=NSXV_VROPLUGIN_120&productId=417

We also need two additional plugins. The first plugin is needed to decode the Base64-encoded string coming from the shim in vRO. It can be found here: https://communities.vmware.com/docs/DOC-24991

The next plugin is not strictly necessary, but from my point of view the JSON handling in vRO is not the best in the world. Therefore, I use the jsonPath for Orchestrator plugin from Soeldner Consulting. The documentation and the link to the VMware Solution Exchange can be found here:

jsonPath for Orchestrator

After we have downloaded all the plugins, we can install them via the vRO Control Center.

After we are finished with that, we have to create the workflow that must start when an alert is triggered. The trigger can be delivered via e-mail or a REST API call. I prefer the REST API call because it is more flexible, but wait… here we have a problem: vRealize Log Insight doesn't offer any option to make a REST API call. In the web GUI only webhook notifications are available. Therefore, we need a translation between the webhook and the REST API call that the vRealize Orchestrator expects. As always, there is a solution available: Steve Flanders created a shim which can act as a proxy between the webhook notification and the VMware vRealize Orchestrator. Steve has a Git repository with good documentation about the different available versions and the installation. I will not go further into the installation, as everything is already explained here: https://github.com/vmw-loginsight/webhook-shims

Another useful post can be found here:

https://blogs.vmware.com/management/2017/03/webhook-shims-now-available-on-docker-hub.html

After we have installed and configured the shim, we are ready for the next steps.
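Once the shim is in place it simply forwards the webhook to the vRO REST API. For testing, you can also trigger the workflow manually with a direct REST call. This is a minimal sketch, assuming a vRO 7.x appliance on its default port; the workflow ID, parameter name, host, and credentials are placeholders:

curl -k -u 'vro-user:password' -X POST \
  -H 'Content-Type: application/json' \
  'https://vro.example.com:8281/vco/api/workflows/<workflow-id>/executions' \
  -d '{"parameters":[{"name":"vmName","type":"string","value":{"string":{"value":"attacker-vm"}}}]}'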

I created a vRO workflow which takes the name of the attacker VM as input parameter. The workflow checks the attacker VM against the configured datacenter. When it is a virtual machine, an NSX security profile is attached to the virtual machine and all communication is denied. When it is not a virtual machine, an IPSet is created and added to a security group.

For the workflow it is necessary that the NSX Manager is configured within vRO, and that a security policy exists which we can use in the workflow. Also, the datacenter which hosts the virtual machines in our environment must be provided.

Now let’s have a look at the Workflow:

The workflow consists of different areas. In the first part of the workflow (the yellow box) we have to decode the Base64 string and parse the VM name from the Log Insight message.

When we are finished with that, we check whether the attacker machine is a virtual machine (the red box). When the machine is a VM, we apply the security group to the VM (the light brown box), which denies all communication via the security policy. A sketch of this check follows below.
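In vRO JavaScript the red-box check can be sketched roughly like this. This is a minimal sketch, not the exact code of my workflow: attackerName stands for the decoded workflow input, and the actual NSX security group assignment depends on the NSX plugin, so it is only indicated as a comment.

// Check whether the reported attacker name belongs to a VM in the vCenter inventory
var vms = VcPlugin.getAllVirtualMachines();
var attackerVm = null;
for (var i = 0; i < vms.length; i++) {
    if (vms[i].name == attackerName) {
        attackerVm = vms[i];
        break;
    }
}
if (attackerVm != null) {
    System.log("Attacker is a VM: " + attackerVm.name);
    // ...apply the NSX security group here (NSX plugin specific)...
} else {
    System.log("Attacker is not a VM, continuing with the DNS check");
}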

When the attacker is not a virtual machine, the machine is checked against DNS (the green box) to gather its IP address. The DNS check is done via a vRO DNS query, so the Orchestrator appliance must be configured correctly with a DNS server. The check is done by name and by FQDN. When the name can be resolved and we have the IP address, an IPSet is created and the attacker machine is applied to the security policy. Note that this only protects the virtual environment!
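The green-box lookup can be sketched with the vRO scripting API; System.resolveHostName() is part of it. attackerName and DNSDomain mirror the workflow attributes listed below, and the IPSet creation is again NSX plugin specific:

// Minimal sketch of the DNS check by short name and FQDN
var ip = null;
try {
    ip = System.resolveHostName(attackerName);
} catch (e) {
    // short name not resolvable, fall through to the FQDN check
}
if (ip == null) {
    try {
        ip = System.resolveHostName(attackerName + "." + DNSDomain);
    } catch (e) {
        // FQDN not resolvable either
    }
}
if (ip != null) {
    System.log("Resolved attacker IP: " + ip);
    // ...create the IPSet and add it to the security group here...
} else {
    System.log("Attacker could not be resolved via DNS");
}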

In all cases an e-mail is created (the blue box) and sent to an administrator.

The workflow itself has some attributes which must be filled:

  1. Datacenter –> The datacenter against which to check whether the attacker machine is a VM
  2. SecurityGroupID –> The security group ID. This is not the name; it should look like this: securitygroup-98765. If you don't know the SecurityGroupID, you can browse the NSX Manager connection within vRO to find the correct value
  3. NSXManager –> Your corresponding NSX Manager (which must be configured in vRO beforehand)
  4. smtpHost –> Your mail server
  5. fromName –> E-mail sender name
  6. fromAddress –> E-mail sender address
  7. toAddress –> Where the mail should arrive
  8. DNSDomain –> Your DNS domain to check against the FQDN
  9. Optional: SMTPUsername –> The mail server username, if required
  10. Optional: SMTPPassword –> Password for the mail server user

There is still room to optimize the workflow: a loop to check against different datacenters, or a loop to check against different domain names. For me and this demo, the workflow does its job.

The Workflow can be found here: http://www.vcoportal.de/download/workflow/de.vcoportal.HaaS_.package

 

Honeypot as a Service (HaaS) Part 1 Link: http://wp.me/p7tsEp-C1
Honeypot as a Service (HaaS) Part 2 Link: http://wp.me/p7tsEp-Cb


Honeypot as a Service (HaaS) Part 2

In the second post of this series, we create the log file reporting which we will send to vRealize Log Insight.

 

After our Samba server is configured (in Part 1), we have to deal with our log files and report them to our vRealize Log Insight.

On our Samba server we already configured the logging, which goes to the path /var/log/samba. There, a file is created for every client which tries to access our server. As long as no client has tried to access a share, the directory should look like this:
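On a fresh installation you should only see the daemon log files, something like this (the host name is a placeholder, and nmbd may or may not be running in your setup):

[root@honeypot ~]# ls /var/log/samba
log.smbd  log.nmbd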

When you have already accessed your server, you will have a log file like log.192.168.157.10 or similar.

I want to create an event every time a client accesses the share. For this I will use the inotify-tools. Unfortunately, this program is not included in the standard CentOS repository, so we have to add an additional repo. I use the EPEL repository from the Fedora project; we download the repo configuration and install it.


wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm

rpm -ivh epel-release-7-10.noarch.rpm

After that we can install the inotify-tools with yum.


yum -y install inotify-tools-devel.x86_64

After that we build the command to monitor the Samba log directory for client access.


inotifywait -mrq -e create -e modify -e delete /var/log/samba

In this monitoring we also want to ignore changes to the log.smbd file, because it is a standard file from the Samba service (the --exclude option in the init script later takes care of that). The output of this monitoring looks like this:


/var/log/samba/ MODIFY log log.client1-vcoportal

As we want the information on a remote server, we have to extend our command with logger to send the information there.


inotifywait -mrq -e create -e modify -e delete /var/log/samba | while read file; do logger -n loginsight.vcoportal.de -t securitybreak "$file"; done

I send my events to a vRealize Log Insight server and tag the log entry with the word “securitybreak”. In Log Insight we can filter for that word.

So, if you use Interactive Analytics you can search for your string.

For me there is still room for improvement: the only relevant information is the client name. So let's modify the monitoring command.


inotifywait -mrq -e create -e modify -e delete /var/log/samba | while read file; do hostname=$(echo "$file" | cut -d '.' -f2,3,4,5); logger -n loginsight.vcoportal.de -t securitybreak "$hostname"; done

 

We strip the string we send to our Log Insight server down to just the client name. I guess there is a better way to build the string, but for me this one works fine.
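One possible cleaner variant (untested in this exact setup) uses inotifywait's --format option to print just the file name, so only the log. prefix has to be stripped:

inotifywait -mrq -e create -e modify -e delete --format '%f' /var/log/samba | while read file; do hostname=$(echo "$file" | sed 's/^log\.//'); logger -n loginsight.vcoportal.de -t securitybreak "$hostname"; done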

Now only the client name is reported to our Log Insight.

How cool is that?

The last thing we have to do is to create a startup script for our monitoring command.

In our case, we will want to run inotifywait as a service and create an init script. First we will create our configuration file:


# specify log file

LOGFILE=/var/log/inotify.log

# specify target directory for monitoring

MONITOR=/var/log/samba

# specify target events for monitoring ( comma separated )

# refer to man inotifywait for the kinds of events

EVENT=create,delete,modify

# Log Insight or SNMP Server

SNMPSERVER=loginsight.vcoportal.de

# Security tag

SECURITYTAG=securitybreak
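Save this file as /etc/inotifywait.conf; the init script below sources it from that path. A quick sanity check that the variables load:

. /etc/inotifywait.conf
echo "watching $MONITOR for $EVENT, reporting to $SNMPSERVER with tag $SECURITYTAG"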

 

Next, we will create the init script where we use the config file we created before as input for the variables.


#!/bin/bash
# inotifywait: Start/Stop inotifywait
#
# chkconfig: 2345 10 90
# description: inotifywait waits for changes to files using inotify.
#
# processname: inotifywait

. /etc/rc.d/init.d/functions
. /etc/sysconfig/network
. /etc/inotifywait.conf

LOCK=/var/lock/subsys/inotifywait
RETVAL=0

start() {
    echo -n "Starting inotifywait: "
    # Clear the log file
    if [ -e $LOGFILE ]; then /usr/bin/truncate -s0 $LOGFILE; fi
    # Start the monitoring as a daemon; changes to log.smbd are excluded
    /usr/bin/inotifywait -e $EVENT -dmrq $MONITOR --exclude 'log\.smbd' -o $LOGFILE
    RETVAL=$?
    # Follow the log file, cut the entries down to the client name
    # and forward them to the log server
    tail -Fn0 $LOGFILE | cut -d '.' -f2,3,4,5 | while read file; do
        /usr/bin/logger -n $SNMPSERVER -t $SECURITYTAG $file
    done &
    echo
    [ $RETVAL -eq 0 ] && touch $LOCK
    return $RETVAL
}

stop() {
    echo -n "Stopping inotifywait: "
    killproc inotifywait
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f $LOCK
    return $RETVAL
}

case "$1" in
    start)
        start;;
    stop)
        stop;;
    status)
        status inotifywait;;
    restart)
        stop
        start;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        exit 1
esac

exit $?

 

The permissions need to be 755:


chmod 755 /etc/rc.d/init.d/inotifywait

 

We can start the service now:


/etc/rc.d/init.d/inotifywait start

 

The result will be like this:


[root@backup ~]# /etc/rc.d/init.d/inotifywait start

Starting inotifywait (via systemctl):                      [  OK  ]

 

We need to ensure that the service starts when the server boots, so we add it to chkconfig:


chkconfig --add inotifywait

chkconfig inotifywait on
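You can verify the runlevel configuration afterwards. With the chkconfig line from the script header (2345 10 90), the output should look roughly like this:

chkconfig --list inotifywait
inotifywait     0:off   1:off   2:on    3:on    4:on    5:on    6:off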

 

The log entries are written to the /var/log/inotify.log file, and from there they are cut down and posted to the Log Insight server. We have to use this loop because in our start script inotifywait runs in daemon mode, where a log file is mandatory.

That’s it for Part 2

 

Honeypot as a Service (HaaS) Part 1 Link: http://wp.me/p7tsEp-C1
Honeypot as a Service (HaaS) Part 3 Link: http://wp.me/p7tsEp-Cl


Honeypot as a Service (HaaS) Part 1

Disclaimer:

This series shows some ideas which include “homemade” security functions. For professional environments, I recommend talking to the security vendor of your choice to build up a secure environment. This series shows examples which include different VMware products and their interaction, and teaches things which can be done with them.

Now, let’s start.

Today I will start a series which I call Honeypot as a Service (HaaS). What is the idea behind HaaS?

In most environments hackers are present for a long time before they get detected. Once a hacker has access to one of the servers, he will start to search for other systems he can reach to grab sensitive information (credit card data, intellectual property). So, which systems and functions is a hacker looking for?

Most hackers look for routing information and start a “slow and silent” network scan for systems they can attack and take over. If we now place virtual machines in our network which have no function at all, nobody should access them in the normal case. From the standpoint of an attacker, though, he doesn't know whether a system is relevant or not. That is our chance: we provide the hacker a system with an old, vulnerable SMB server which he can attack. If somebody or something tries to access such a “no function” VM, we generate a log entry which is forwarded to a log system. There we can parse the logs and generate “automated” rules to isolate VMs or VM communication.

In this example, I will use a lot of different VMware products to achieve this HaaS environment and the actions we will use.

This series will include:

  • VMware vRealize Log Insight as log host
  • VMware vRealize Orchestrator as “workhorse for a lot of task”
  • VMware NSX for VM isolation and Firewall Rules
  • A small Linux system as honeypot with a Samba service

So, let's start building a Linux system which we can use in our vRA blueprint as honeypot. I prefer a CentOS distribution and choose a 7.x release. The ISO can be downloaded here: http://ftp.hosteurope.de/mirror/centos.org/7/isos/x86_64/

I will use the minimal Installation ISO and Install Samba and SSH afterwards.

We will change the hostname and IP settings later via vRA, so during installation we keep the hostname and network on default settings. There is one important thing to take care of during installation: by default, the NICs are not automatically connected, so that should be changed.

After our Installation has finished we will install the following packages:

  • Perl
  • Samba
  • DNS tools (just for Troubleshooting)
  • Network Manager (Text Version if you need to reconfigure your Config)

This can be done via this command:

yum -y install perl samba system-config-network-tui.noarch bind-utils

If you don't have a DHCP server in place, you need to configure your network settings on the command line: $ sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0

Also, you need to set your default route:

$ sudo route add default gw 192.168.1.1 eth0

As a last step you need a DNS server entry:

$ sudo vi /etc/resolv.conf

Modify or enter nameserver as follows:

nameserver 8.8.8.8
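Note that ifconfig and route changes are lost on reboot. To make the settings persistent on CentOS 7 you can edit the interface configuration file instead; a minimal sketch, assuming the device is called eth0 (on CentOS 7 it is often ens192 or similar):

# /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
ONBOOT=yes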

After we have installed the needed packages, we can start with the installation of the VMware Tools. This is well documented in the VMware documentation, so I will not write it down here.

The next step is to configure our Samba server with an SMB share which is available to the attacker.

Now, let us create a fully accessible anonymous share. Everybody can read and write in this share.

We create a directory called /samba/backup and set full permissions. The reason (for me) to name the share backup is to mark it as an important and useful target with potentially interesting information in it. I also create some additional folders to make the target more interesting. I use a Linux bash script to create the folders and set the required permissions.

#!/bin/bash
echo "This script creates a bunch of folders which are used in a samba share example on vcoportal.de"

dirname=/samba/backup
echo "$dirname"

if [ ! -d "$dirname" ]
then
    echo "Folder doesn't exist. Creating now"
    mkdir -p "$dirname"
    echo "Create some other folders ....."
    mkdir -p "$dirname/HR"
    mkdir -p "$dirname/IT"
    mkdir -p "$dirname/Dev"
    mkdir -p "$dirname/Public"
    mkdir -p "$dirname/Files"
    mkdir -p "$dirname/Userhome"
    echo "Set some permissions for the parent folder"
    chmod -R 0775 /samba/backup
    echo "Set nobody as owner for the folder so that everybody can see and read it"
    chown nobody:nobody /samba/backup
    echo "Change SELinux permissions on the folder"
    chcon -t samba_share_t /samba/backup
else
    echo "Parent directory exists"
fi

Just create a file with the script content in it and make it executable. You can change the directory names to whatever you want or need.

After we have created the directories, we must edit the Samba configuration file.

vi /etc/samba/smb.conf
# See smb.conf.example for a more detailed config file or
# read the smb.conf manpage.
# Run 'testparm' to verify the config is correct after
# you modified it.
 
[global]
        workgroup = vcoportal
        security = user
 
        passdb backend = tdbsam
 
        unix charset = UTF-8
        dos charset = CP932
 
        hosts allow = 127. 192. 172. 10.
 
        max protocol = SMB2
 
 
        map to guest = Bad User
 
        # log files split per-machine (the host logs are stored on a non-default path):
        log file = /var/log/samba/log.%m
        log level = 2
        # maximum size of 50KB per log file, then rotate:
        max log size = 50
 
 
[homes]
        comment = Home Directories
        valid users = %S, %D%w%S
        browseable = No
        read only = No
        inherit acls = Yes
 
 
[Backup]
        comment = Backup Share
        path = /samba/backup
        browsable = yes
        writable = yes
        guest ok = yes
        read only = no
        force user = nobody

Now we can test the Samba server configuration for syntax errors using the command “testparm”.

testparm
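The output should look roughly like this (details vary by Samba version):

Load smb config files from /etc/samba/smb.conf
Processing section "[homes]"
Processing section "[Backup]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions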

When everything looks good you can browse to your Samba share.
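From any Linux client you can also test the anonymous access with smbclient (the IP is a placeholder for your honeypot):

# list the shares anonymously
smbclient -L //192.168.1.50 -N
# connect to the Backup share as guest
smbclient //192.168.1.50/Backup -N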


That’s it for Part 1

 

Honeypot as a Service (HaaS) Part 2 Link: http://wp.me/p7tsEp-Cb
Honeypot as a Service (HaaS) Part 3 Link: http://wp.me/p7tsEp-Cl


vCO, SNMP Traps and vCOPS

Today I got the request to build a vCOPS demo with a vCO integration. For that, I first had a look at Jörg's post about his “Self-Healing Datacenter”, which can be found here: http://www.vcoportal.de/2012/05/integrate-vcops-and-vco/

For me, that was a good starting point, but I wanted to integrate the SNMP trap with an existing workflow. Creating a workflow that deals with SNMP traps is not a big problem. The bigger problem is the “automatic” processing via policies, because there is no documentation (or I didn't find it). So I started to talk with Jörg and did some research on the web. There I found this excellent post from William Lam about “Automatically Securing Virtual Machines Using vCenter Orchestrator”: http://blogs.vmware.com/vsphere/2012/07/automatically-securing-virtual-machines-using-vcenter-orchestrator.html

This post from William (and his package) contains everything we need.

First we have to create the preconditions, like the SNMP plugin installation in vCO and the configuration in vCOPS and vCO. Jörg has already documented this here: http://www.vcoportal.de/2012/05/integrate-vcops-and-vco/ so I will not repeat these steps.

Let's start with the workflow development. First, we have to create a new workflow. I call it “Execute Host Maintenance after trap” and insert the following description: “Waits to receive an SNMP trap from a vCenter Server instance, then sets the host in maintenance mode or exits maintenance mode”.

After we have created the workflow, we go to the “Schema” tab.

Here we insert some Elements. We need:

  • One Scriptable Task
  • One Decision
  • The Workflow Enter Maintenance Mode
  • The Workflow Exit Maintenance Mode
  • And a Workflow End.

When you are done, your schema should look like mine.

Now, let's start with the scriptable task. I name it “Retrieve Host”. We want to extract the information out of the SNMP message; for that we need some variables and SNMP trap information. Let's start with the SNMP trap information. The SNMP messages come with OID numbers. Each OID number has a different meaning and stands for a different component or message. The values of these OID numbers can be found in the SNMP MIB files, which can be downloaded from the VMware website. Every trap contains more than one OID:

Element 1:

=============

oid: 1.3.6.1.2.1.1.3.0

type: Number

snmp type: Timeticks

value: 1743301911

Element 2:

=============

oid: 1.3.6.1.6.3.1.1.4.1.0

type: String

snmp type: OID

value: 1.3.6.1.4.1.19004.0.25

Element 3:

=============

oid: 1.3.6.1.4.1.19004.2.1

type: String

snmp type: Octet String

value: localhost

Element 4:

=============

oid: 1.3.6.1.4.1.19004.2.2

type: String

snmp type: Octet String

value: 10.10.120.183

Element 5:

=============

oid: 1.3.6.1.4.1.19004.2.3

type: String

snmp type: Octet String

value: Resource

Element 6:

=============

oid: 1.3.6.1.4.1.19004.2.4

type: String

snmp type: Octet String

value: 1346068003943

Element 7:

=============

oid: 1.3.6.1.4.1.19004.2.5

type: String

snmp type: Octet String

value: Critical

Element 8:

=============

oid: 1.3.6.1.4.1.19004.2.6

type: String

snmp type: Octet String

value: New alert by id 36 is generated at Mon Aug 27 11:46:43 UTC 2012; Root Cause : (2 SYMPTOMS)

1. MESSAGE EVENT    (1 OF 3)

33% - FAULT -

33% - CHANGE EVENT -

Element 9:

=============

oid: 1.3.6.1.4.1.19004.2.7

type: String

snmp type: Octet String

value: https://10.10.120.166/vcops-vsphere/?alert=36

Element 10:

=============

oid: 1.3.6.1.4.1.19004.2.8

type: Number

snmp type: Integer

value: 36

Element 11:

=============

oid: 1.3.6.1.4.1.19004.2.9

type: String

snmp type: Octet String

value: Verbindungsstatus der physischen Netzwerkkarte vmnic1 ist nicht bereit. (German for: “Connection status of physical network adapter vmnic1 is not ready.”)

Element 12:

=============

oid: 1.3.6.1.4.1.19004.2.10

type: String

snmp type: Octet String

value: Health

Element 13:

=============

oid: 1.3.6.1.4.1.19004.2.11

type: String

snmp type: Octet String

value: Faults

We want to search for the OID numbers of the hostname and the error message.

With this background information, we can create the needed inputs and outputs for our workflow.

Local Parameter | Variable Name | Module        | Direction | Type             | Value
trapData        | trapData      | Retrieve Host | in        | Array/Properties |
HostOID         | HostOID       | Retrieve Host | in        | String           | 1.3.6.1.4.1.19004.2.2
MessageOID      | MessageOID    | Retrieve Host | in        | String           | 1.3.6.1.4.1.19004.2.6
Host            | Host          | Retrieve Host | out       | VC:HostSystem    |
SNMPMessage     | SNMPMessage   | Retrieve Host | out       | String           |
NewAlert        | NewAlert      | Retrieve Host | out       | Boolean          |

One important thing here: The trapData must be defined as Input Parameter not as attribute!

After we have created our variables, we can start with the scripting. I put a lot of comments in the script, so everybody should understand what happens there.


// We extract the host name out of the SNMP trap data
var HostName;
for (var x = 0; x < trapData.length; x++) {
    var prop = trapData[x];
    if (prop.get("oid") == HostOID) {
        HostName = prop.get("value");
        break;
    }
}

// We compare the extracted host name with the hosts registered in the vCenter Server.
// When we have a match, we have the managed object of the host.
var hosts = VcPlugin.getAllHostSystems();
for (var i = 0; i < hosts.length; i++) {
    var tempHost = hosts[i];
    if (tempHost.name == HostName) {
        Host = tempHost;
        break;
    }
}

// We search for the message OID to get the field with the received error message
// and store it in the output parameter SNMPMessage.
SNMPMessage = null;
for (var y = 0; y < trapData.length; y++) {
    var prop = trapData[y];
    if (prop.get("oid") == MessageOID) {
        SNMPMessage = prop.get("value");
        break;
    }
}

// We search for these strings in the value field of the message.
// When the field contains the search string we get its position, otherwise -1.
var SNMPAlert = SNMPMessage.indexOf("New alert");
var SNMPCancel = SNMPMessage.indexOf("is cancelled");

// We check if the SNMP trap has "New alert" in the value field. If so, we have a new alert.
if (SNMPAlert != -1) {
    NewAlert = true;
} else {
    NewAlert = false;
}

Then we go to the decision element. I name it “New Alert”. As input we only need “NewAlert”. In the decision field itself we set “NewAlert is true”.

After that, we go to the Enter Maintenance Mode workflow. Here we need only two input variables:

Local Parameter | Variable Name | Module            | Direction | Type          | Value
Host            | Host          | Enter Maintenance | in        | VC:HostSystem |
timeout         | timeout       | Enter Maintenance | in        | number        | 0

Our Visual Binding has to look like this:

The same variables and the same visual binding are required for the Exit Maintenance Mode workflow.

Local Parameter | Variable Name | Module           | Direction | Type          | Value
Host            | Host          | Exit Maintenance | in        | VC:HostSystem |
timeout         | timeout       | Exit Maintenance | in        | number        | 0

At last, we have to connect our elements: from the start element to “Retrieve Host”, and from there to “New Alert”. From “New Alert” we follow the green line to “Enter Maintenance Mode” and the red line to “Exit Maintenance Mode”.

Both elements are connected to the end element.

At last, we validate our Workflow and save our work.

After we have finished our workflow, we create a policy template. For that, go to “Policy Templates” and create a folder with a name of your choice. I have a folder with the name “vcoportal.de”.

Right-click the folder to open the context menu and choose “Add policy template…”.

First we have to insert a name and a description. I choose “SNMP Trap with data” as name.

Then we go to the Scripting tab

There we have to insert the device from which we want to catch our data. You can insert the device by clicking on the first button.

In the Dialog we choose SNMP:SnmpDevice

Then we check the SNMP Device and then click on the Second button.

As trigger we choose “onTrap”

At last, we have to insert this script in the “Script” field of the “OnTrap” trigger.


// Author Christian Strijbos ([email protected])
// based on Examples and a Blog Post of  William Lam
// SNMP Policy to Check for Host Maintenance in case of an host error

// Execute Host Maintenance after trap ID
// To get the workflow ID, select the workflow, press CTRL-C, open Notepad and press CTRL-V.
// You will find the ID of the workflow in the id field.
var wfId = "83808080808080808080808080808080AA8A808001345464207298aebf2a6a5a5";

// process SNMP trap
var key = event.getValue("key");
var snmpResult = SnmpService.retrievePolicyData(key);
var trapData = System.getModule("com.vmware.library.snmp").processSnmpResult(snmpResult);
runWF(wfId,trapData);

// function to launch WF
function runWF(wfId,trapData) {
var workflowToLaunch = Server.getWorkflowWithId(wfId);
if (workflowToLaunch == null) {
throw "Workflow not found";
}
var workflowParameters = new Properties();
workflowParameters.put("trapData",trapData);
System.log("Launching Execute Host Maintenace after trap WF: " + wfId);
var wfToken = workflowToLaunch.execute(workflowParameters);
}

Some special notes here: in the policy scripting tab, variables are not highlighted, so if you write scripts on your own, make sure you spell your variables correctly. Also, there is no validation available, so there are a lot of possibilities for things to go wrong 🙁

Save and close the template.

Next we want to apply our policy. Right-click the policy template and choose “Apply Policy…” in the context menu.

In the dialog you have to choose our SNMP device.

You can find your policy under the “Policy” tab.

Right-click it and choose “Edit”.

There you can choose our startup policy. I choose “On Server Startup, start the Policy”. Save and close the policy.

At last, right-click the policy and start it.

After that, your policy is active and waits for SNMP traps from vCOPS.

My setup follows the same principle as the SNMP plugin integration with vCenter Server described at this link:

http://blogs.vmware.com/orchestrator/2011/09/snmp-plug-in-integration-with-vcenter.html

Download: de.vcoportal.snmp.vcops.package (35.8 KiB)

LittleCMDB (An Orchestrator and WaveMaker project) – Part 7

Table of Content

In Part1 we start with the SQL DB Plugin and create the required database for our need.

In Part2 we start with the development of our Workflow. We will start with a few elements.

In Part3 we finish the collection of the VM information.

In Part4 we insert our data into the database and test our created workflow

In Part5 we create our webview to get a look on our Data in the SQL Database

In Part6 we will make our Workflow smarter to update the DB with actual VM information

In Part7 problems with vAPP located virtual machines are fixed

Part7

Today we will make some more error corrections for our LittleCMDB. Has anyone built the LittleCMDB yet? Do you use vApps in your environment? Then you have a problem with the LittleCMDB: the workflow will stop at the “Extract virtual machine information” workflow.

Why is that? To answer this question we must take a look at the workflow and the API. Let's start with the workflow. The workflow consists of different elements. The first element, “Get Folder”, grabs the folder for the virtual machine.

For a “normal” virtual machine this works perfectly. The element uses the VM name and the “parent” object in the API to get the needed information.

Now let’s have a look on a virtual machine which is located under a vAPP.

Did you see the difference? For a virtual machine the parameter “parent” is used and the parameter “parentVApp” is unset. For a virtual machine under a vApp this is reversed. So, when the workflow hits a virtual machine located under a vApp, the workflow will fail.

How can we fix this problem? Don't worry, it's easy 😉

First, we have to copy the workflow “Extract virtual machine information”. We do so by going to:

Library –> vCenter –> Virtual Machine management –> Others. There we can find our workflow.

Just right-click the workflow and choose “Duplicate Workflow” in the menu.

Give the workflow a name, I take “Extract virtual machine information_WithChanges”, and choose a location for it. I recommend keeping all your customized workflows together. I insert the workflow into my vcoportal.de folder, in a subfolder “Helper” (for me that makes it easier to export my project 😉). Copy the version history as well.

After we have duplicated the Workflow, we go to the chosen Folder and edit it.

First I change the Version history and insert a comment.

Then we go to the Schema Tab and there to the “Get Folder” Element. Here we change the Scripting to:


if (vm.parent != null) {
    folder = vm.parent;
} else {
    folder = vm.parentVApp;
}

folderName = folder.name;
folderId = folder.sdkId;

The scripting here is simple: if the parent value is unset, we use parentVApp instead. After we are finished, validate and save the workflow.

At last, we go to our “GetVMConfig” workflow. There we delete the “Extract virtual machine information” workflow element and insert our “Extract virtual machine information_WithChanges” workflow.

After that, insert the connections and the vm parameter (VMtoGet); normally it is inserted automatically. Validate the workflow and enjoy.

That’s all for Part7.

Download: de.vcoportal.package_part7.package (126.3 KiB)