Examples


Generate VMs based on load with vCO and vCAC

Do you remember my video "Generate VMs based on actual load in a Resource Pool" (http://www.vcoportal.de/2012/11/generate-vms-based-on-actual-load-in-a-resource-pool/)?

In that video I showed a vCO workflow which provisioned new VMs when the load on a specific Resource Pool reached a defined value. When the load decreased, the VMs were thrown away. In that video, I worked against a plain vCenter Server. A colleague of mine, Carsten Schäfer, took my workflow and modified it to work with vCAC. He changed the workflow so that virtual machines are no longer provisioned as "single" VMs; instead, he used the add-components function to deploy them into an existing Multi-Machine Service.

Oh yeah, one might call it a first PoC for auto-scaling in vCAC (yep, that's just for the robots :mrgreen: )

Here is the video:

Great stuff, as I found 😉


Application Onboarding with vCO and WaveMaker

Last month I had a customer who wanted an onboarding process for his virtual machines. The onboarding should work as follows:
– There is a standard Linux template for all virtual machines
– The users can choose on a website which machine type is required
– From the website, a vCO workflow is started which clones the template and installs the required packages into the VM
– At the end, the application configuration is done
To be honest, the customer's requirements were quite simple, so the use of Puppet or Chef was not an option.
After everything the customer wanted was clear, I thought about the options and decided to use vCO and WaveMaker for the solution.
I don't like passwords in workflows, so the first thing I did was create a vCO SSH key. This can be done with a predefined workflow in vCO.

After I had my SSH public key, I created the Linux template with the public key in the file /root/.ssh/authorized_keys.
You can do this manually or use a predefined vCO workflow.

This allows me to administer the Linux VM without needing a password.
The next thing I did was create a folder on the vCO appliance. I created the folder under the root directory with the name ConfigFiles. vCO ships with a secure configuration, so to allow the vCO server to access this folder you must allow it explicitly. For that, you have to edit the file js-io-rights.conf on the vCO appliance; it is located at:

/opt/vmo/app-server/server/vmo/conf/js-io-rights.conf

I added the path I created, and afterwards the file looked like this:


-rwx /
+rwx /var/run/orchestrator
+rx ../../configuration/jetty/logs/
+rx ../server/vmo/log/
+rx ../bin/
+rx ./boot.properties
+rx ../server/vmo/conf/
+rx ../server/vmo/conf/plugins/
+rx ../server/vmo/deploy/vmo-server/vmo-ds.xml
+rx ../../apps/
+r ../../version.txt
+rw /ConfigFiles

After this edit, you must restart the vCO service for the changes to take effect.
For later use, I placed a file named "named.conf" into the folder. This file contains the configuration for the bind installation which I will use later in this blog post.
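Just to illustrate why the entry in js-io-rights.conf matters, here is a minimal sketch (vCO JavaScript, assuming the folder and file created above) of reading the prepared configuration file from a scriptable task:

// Read the prepared bind configuration from the whitelisted folder.
// Without the "+rw /ConfigFiles" entry in js-io-rights.conf, this call would be denied.
var reader = new FileReader("/ConfigFiles/named.conf");
reader.open();
var namedConf = reader.readAll();
reader.close();
System.log("named.conf contains " + namedConf.length + " characters");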
For our onboarding, we need the SCP workflow. The predefined workflow has a little problem: it doesn't work with SSH keys. So we have to modify it, and for that we copy the existing workflow.

I called my copy "SCP put command with SSH Key". After we have copied the workflow, we have to modify it.

We have to change the content of the "SCP put file" scriptable task.

try {
    var session = new SSHSession(hostName, username);

    if (passwordAuthentication) {
        System.log("Connecting with password");
    } else {
        // fall back to the default vCO key pair if no explicit path is given
        if (path == null || path == "") {
            System.log("using default");
            path = defaultKeyPairPath;
        }
        System.log("Connecting with key pair (" + path + ")");
        // when a key pair is used, the passphrase is passed in the password argument
        password = passphrase;
    }

    session.connectWithPasswordOrIdentity(passwordAuthentication, password, path);
    System.log("Connected!");

    session.putFile(localFile, remoteFile);
    output = session.getOutput();
    error = session.getError();
    exitCode = session.exitCode;

    System.log("Output: '" + output + "'");
    System.log("Error: '" + error + "'");
    System.log("Exit code: '" + exitCode + "'");

    session.disconnect();

} catch (e) {
    throw "Unable to execute command " + e;
}

After we have made the scripting changes, we need to add an attribute. I chose defaultKeyPairPath as the name. It is of type string, and its value is ../server/vmo/conf/vco_key (the path to the vCO SSH key).

On the input side of the workflow we have to add three inputs. The first one is passwordAuthentication of type boolean; as the default value we choose no. The second input variable is path of type Path. The last one is passphrase of type SecureString; it is required if we protect our SSH key with a passphrase.

After we have created the "SCP put command with SSH Key" workflow, it is time to build our onboarding workflow.
In this blog post, we create a simple onboarding workflow that configures a name server with bind. The template I use is a CentOS minimal installation with VMware Tools installed and the SSH public key from the vCO server.
I created a new Workflow with the name “Blog_Configure_DNS”.

In this workflow we go to the schema and add the following workflows.
The first one is "Clone, Linux with single NIC" to clone the template. The second workflow we use is "Run SSH command". The next one is our "SCP put command with SSH Key", and as the last one we use "Run SSH command" again.

Now we rename the elements. I rename the first "Run SSH command" to "Install named", the "SCP put command with SSH Key" to "Put configuration named.conf", and the last workflow to "Restart named".
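Just as an illustration of what these steps run on the VM, the SSH commands could look like the following sketch. The exact package names and the service command are assumptions and depend on your CentOS template:

// Hypothetical values for the "cmd" input of the two "Run SSH command" elements
var installNamedCmd = "yum install -y bind bind-utils";   // "Install named" step
var restartNamedCmd = "service named restart";            // "Restart named" step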

After the workflow design, you have to create the needed attributes, inputs and outputs for every element. I showed some examples in my "LittleCMDB" series, so if you are not familiar with creating the needed parameters, take a look here:

http://www.vcoportal.de/2012/07/introducing-the-littlecmdb-a-vcenter-orchestrator-wavemaker-demo-project/
and following.

When you're finished with this, we have to build the WaveMaker interface.
There are also a lot of good examples available for this. You will find a lot of information here:
Using WaveMaker as Web-Frontend for vCO

http://www.vcoportal.de/2011/11/using-wavemaker-as-web-frontend-for-vco/

Off-topic(?): Lessons learned with WaveMaker
http://www.vcoportal.de/2012/02/lessons-learned-with-wavemaker/

Howto setup LDAP-Authentication for Wavemaker (Part 1 & Part 2)
http://www.vcoportal.de/2012/05/howto-setup-ldap-authentication-for-wavemaker-part-1/

http://www.vcoportal.de/2012/07/introducing-the-littlecmdb-a-vcenter-orchestrator-wavemaker-demo-project/

To choose the right configuration, you can create a drop-down field with different options. Here is a screenshot for the DNS server.

And here is a screenshot for the DHCP server with the definition of the scope.

Besides the option to use WaveMaker, you could also use automatic provisioning depending on the actual load of a resource pool. I made an example video to show how this can be done with vCO. You can find the video here:

http://www.vcoportal.de/2012/11/generate-vms-based-on-actual-load-in-a-resource-pool/
So have fun and orchestrate your virtual environment 😉


NetApp OnCommand Workflow Automation package for vCO

Jeremy Goodrum (make sure to follow him on www.virtpirate.com and Twitter @virtpirate)  from NetApp has published a powerful package to integrate vCO and NetApp Workflow Automation:

https://communities.netapp.com/docs/DOC-25899

I “interviewed” Jeremy via email about his solution:

Can you introduce Netapp WFA in 2 sentences?
Jeremy –  OnCommand Workflow Automation (WFA) is NetApp’s automation framework that enables architects to create storage automation workflows and to integrate with third party SDK toolkits like VMware’s PowerCLI.  WFA gives NetApp storage administrators the ability to turn traditional scripts into full blown repeatable workflows that can intelligently provision and manage storage.

What drove you to build an integration of WFA and vCO?
Jeremy – I have had a lot of customers tell me that they want to leverage vCenter Orchestrator as the orchestration portal, but needed storage automation as well.  People that have seen WFA’s intelligent storage placement and integration desired to add this to vCO’s already robust features.  We had this grand vision of vCO and WFA working together to build more than just traditional VMware Datastores.  We wanted to give vCO the ability to not only integrate with VMware environments, but also manage and provision NetApp storage.  Now vCO admins can build workflows that provision applications to use NetApp storage, manage CIFS home directories and even manage NetApp Snapshot backups.

When building the solution, what were your biggest challenges?
Jeremy – We wanted this to be simple to use and easy to drop in.  The big challenge was in trying to modularize the entire structure to allow end users to drop the package into their environment and just go.  It was important to me that the WFA package be a complete self-contained plug-in type of solution that required very little customization.

When you now (proudly 😀 ) review the solution, what do you like most about it?
Jeremy – Ok, I might be bragging a little here but honestly, I am proud of how simple it is to use.  A NetApp Storage Architect can now create fully intelligent and structured WFA workflows while enforcing NetApp and the customer’s best practices.  The architect can have the workflow build entire tenancies or application stacks and then share the workflow with the VMware team.  Within minutes, the VMware team can literally pull the workflow into vCO and be off the ground.  It really is that simple.

So, watch the videos, download the package, and enjoy the power of automation!


vCO and Veeam Backup&Replication: a powerful combination

Last week I did a webinar for Veeam in Germany. My topic in this webinar was automation and orchestration. Since the webinar was in German, I decided to write this post to share the information with the rest of the non-German-speaking world.

For those who understand German, the webinar was recorded and can be found here:

http://www.veeam.com/de/videos.html?ad=de-topmenu

Before you start, here are some really important notes on the combination of vCO and Veeam B&R.

Some Veeam B&R commands need a connection to the vCenter Server. When you invoke these commands in a PowerShell window on your backup host, PowerShell uses CredSSP to pass the vCenter Server login information from Veeam B&R to the vCenter Server. If you do the same in a vCO workflow, this does not work! The reason is that the vCO PowerShell plugin only supports Basic and Kerberos authentication. In every environment I have worked in so far, the servers were members of a Windows AD domain, which allowed me to use Kerberos authentication in the vCO PowerShell plugin. Lately I did many tests with Basic authentication and had a lot of problems and errors with that type of authentication. So my recommendation is: use Kerberos authentication to avoid a lot of trouble and problems!

Prepare the Backup Server

At the moment, Veeam Backup&Replication has no SOAP or REST API. The only available interface is PowerShell. To use PowerShell from vCO, some preparations have to be done.

First of all, Veeam Backup&Replication must be installed with the PowerShell extension. This is done during installation; if you already installed it without PowerShell, just start the installation again and add the PowerShell feature.

After you have installed the PowerShell Extension, you can start it from the Management Console.

This button starts a PowerShell console with the Veeam extension already loaded. The files for this Veeam PowerShell extension reside in "C:\Program Files\Veeam\Backup and Replication"; in this path, the file "Install-VeeamToolkit.ps1" is important for loading the extension automatically. We will use this file later in our vCO workflows.

The next thing we have to do is check whether the Veeam backup server has PowerShell version 3 (you can check with $PSVersionTable.PSVersion).

For the first workflows and tests I recommend changing the host execution policy to Unrestricted. When everything works fine, you can change the execution policy to RemoteSigned:

Set-ExecutionPolicy RemoteSigned (or Set-ExecutionPolicy Unrestricted)

After that, we need a command window on the backup server. Here we have to enter the following commands:

Run the following command to set the default WinRM configuration values.


c:\> winrm quickconfig

(Optional) Run the following command on the WinRM service to check whether a listener is running, and verify the default ports.

c:\> winrm e winrm/config/listener

The default ports are 5985 for HTTP and 5986 for HTTPS.

Enable basic authentication on the WinRM service.

Run the following command to check whether basic authentication is allowed.

c:\> winrm get winrm/config

Run the following command to enable basic authentication.

c:\> winrm set winrm/config/service/auth @{Basic="true"}

Run the following command to allow transfer of unencrypted data on the WinRM service.

c:\> winrm set winrm/config/service @{AllowUnencrypted="true"}

Enable basic authentication on the WinRM client.

Run the following command to check whether basic authentication is allowed.

c:\> winrm get winrm/config

Run the following command to enable basic authentication.

c:\> winrm set winrm/config/service/auth @{Basic="true"}

Run the following command to allow transfer of unencrypted data on the WinRM client.

c:\> winrm set winrm/config/client @{AllowUnencrypted="true"}

Run the following command to allow WinRM connections from the vCO host.

c:\> winrm set winrm/config/client @{TrustedHosts="vco_host"}

After we have executed these commands, we are done with the backup server. Let's now switch to the vCO server.

Prepare the vCO

From the vCO point of view, the first and most important thing is that the PowerShell plugin is installed and activated on the vCO server. If you are not familiar with this, the documentation can be found here:

http://pubs.vmware.com/orchestrator-plugins/index.jsp?topic=/com.vmware.using.powershell.plugin.doc_10/GUID-8AE1CFF2-F6F0-4233-BDD9-F318E461AB2F.html

When the PowerShell plugin is ready, we can start to add the backup server to our repository. This can be done by starting the PowerShell workflow to "Add a new Server". The needed information is self-explanatory.

On the second page we have to choose "WinRM" as the PowerShell remote host type. As the protocol we use "HTTP" or "HTTPS". The last point is authentication; here we choose "Kerberos".

On the last page we have to choose whether we use a "Shared session" or a "User session". When you choose the shared session, you have to enter user credentials. When you decide to use "User session", you have to provide the authentication details in every PowerShell call.

After we are finished with the prerequisites, we can start with our first workflow. Let's use a simple one…

Develop the vCO Workflows

If we want to figure out which Veeam Jobs exist on our Backup Server we need the command Get-VBRJob.

The easiest way to start is to copy the workflow "Invoke a PowerShell Script" into a folder of your choice.

There you have to insert a second scripting element and move the host and script inputs to attributes.

In this scripting element we put the script which builds the PowerShell code.

To use a Veeam PowerShell command in a vCO workflow, we need somewhat more than just the command: we have to load the Veeam extension into the PowerShell session which we invoke from the vCO server. Here is the complete code for the call:

script = "# Load the Veeam PowerShell extension into the current session \n"
       + "& 'C:\\Program Files\\Veeam\\Backup and Replication\\Install-VeeamToolkit.ps1' \n"
       + "Add-PSSnapin VeeamPSSnapin \n"
       + "# Veeam is loaded \n"
       + "Get-VBRJob";

For us the full example looks like this.

Now you can use this command in your own workflows. But that command alone isn't really useful yet. Let's insert a virtual machine into a backup job after it has been created. For that we have to use the Veeam command Add-VBRJobObject. For this command we need some information which we can collect during the session. A full command to insert a VM into a job looks like this:

Add-VBRJobObject -Job $(Get-VBRJob -Name JOBNAME) -Server $(Get-VBRServer | Where {$_.Type -eq 'VC'}) -Objects VMNAME

The values JOBNAME and VMNAME must be specified as vCO attributes or inputs.
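Put into the same scripting pattern as the Get-VBRJob example above, a first naive attempt would look roughly like this. This is a sketch only; JOBNAME and VMNAME stand for the workflow attributes/inputs, and as the next paragraph shows, this version does not yet work:

// Naive sketch: load the Veeam snap-in and add the VM to the job.
// This version still lacks the explicit vCenter authentication and therefore fails (see below).
script = "Add-PSSnapin VeeamPSSnapin \n"
       + "Add-VBRJobObject -Job $(Get-VBRJob -Name " + JOBNAME + ") -Server $(Get-VBRServer | Where {$_.Type -eq 'VC'}) -Objects " + VMNAME;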

When you now try to execute this in the same way as the command before, you will get an error like this one:

Failed to login to "vcenter.example.com" by SOAP, port 443, user "root", proxy srv: port:0 + CategoryInfo : InvalidOperation: (Veeam.Backup.Po…FindVBRViEntity:FindVBRViEntity) [Find-VBRViEntity], Exception + FullyQualifiedErrorId : Backup,Veeam.Backup.PowerShell.Command.FindVBRViEntity

Why does this happen? Here we run into trouble with the authentication against the vCenter Server. If everything was fine before and you can execute the command from a PowerShell console, the problem is in your workflow. As described before, we have to authenticate against the vCenter Server from our workflow. vCO has no option to do this automatically. We have to change our workflow to this:

 

script = "invoke-command -session $(New-PSSession BACKUPSERVER -Authentication Kerberos -Credential $(new-object -typename System.Management.Automation.PSCredential -argumentlist USER@DOMAIN, $(convertto-securestring -string 'PASSWORD' -asplaintext -force))) -scriptblock{ set-item wsman:localhost\\Shell\\MaxMemoryPerShellMB 1024"
      + "\n Add-PSSnapin -Name VeeamPSSnapIn -ErrorAction SilentlyContinue"
      + "\n Add-VBRJobObject -Job $(get-VBRjob -Name " + JOBNAME + ") -Server $(get-VBRServer | Where {$_.Type -eq 'VC'}) -Objects " + VMNAME + " }";

This script looks really different from the one before. What do we do here? We create a new PowerShell session on the backup server (New-PSSession). For this session, we define a username (USER@DOMAIN) and a password (PASSWORD). For the username it is very important that it is written as user@domain; otherwise the Kerberos authentication will not work and the workflow will fail! Then we raise the memory for the new shell to 1024 MB (set-item wsman:localhost\Shell\MaxMemoryPerShellMB 1024); if we don't increase the memory, the workflow will also fail! Finally we load the Veeam snap-in and execute the job…

That's easy, isn't it?

With this background knowledge you can start to implement your own automation workflows that include backup of your virtual machines with Veeam. It is also possible to integrate replication: you just have to implement the replication command and start your automation…

In the Veeam community there is a good PowerShell forum. So if you have trouble with your Veeam PowerShell commands, take a look there:

http://forums.veeam.com/viewforum.php?f=26

Have fun with the Power of vCO 😉


How to integrate Selenium into WaveMaker

What is Selenium?

Short version: Selenium can be used to automate browsers, e.g. to perform 1000 Google searches, or to comment on a friend's picture on Facebook 10000 times (I guess so, I didn't try it yet ;)).

Why integrate Selenium into WaveMaker?

The reason for me was performance and load testing of an existing WaveMaker application, under almost the same conditions as the future customer would have.

That meant for me: Creating hundreds of new database entries manually by using the graphical user interface…

And that meant for me: FIND ANOTHER WAY!!!

And luckily I did. After reading some tutorials and writing some simple programs I decided to automate a WaveMaker application… and I failed miserably!

I'll describe the problem later… now it's your turn 🙂

Preparations

1. Download the Selenium java driver

2. Install the Selenium IDE plugin

3. Install Firebug

4. Create a new project in WaveMaker without using any template. Save and close it.

5. Now open the zip file you have downloaded in step 1 and copy all the 38 jar files to your WaveMaker project’s lib folder.

6. Open your WaveMaker project again.

Part 1 – GUI

Part 2 – Source


dojo.declare("Main", wm.Page, {
    "preferredDevice": "desktop",

    start: function() {
        this.varTheEntries.addItem({item: "demo1", amount: "one"});
        this.varTheEntries.addItem({item: "demo2", amount: "two"});
        this.varTheEntries.addItem({item: "demo3", amount: "three"});
    },

    buttonAddItemClick: function(inSender) {
        try {
            // Add new item to varTheEntries
            this.varTheEntries.addItem({item: this.inputItem.dataValue, amount: this.inputAmount.dataValue});
            // clear input fields
            this.inputItem.clear();
            this.inputAmount.clear();
        } catch(e) {
            console.error('ERROR IN buttonAddItemClick: ' + e);
        }
    },

    buttonRemoveItemClick: function(inSender) {
        try {
            // Remove selected item from varTheEntries
            this.varTheEntries.removeItem(this.dojoGridEntries.selectedItem);
            // disable button
            this.buttonRemoveItem.disable();
        } catch(e) {
            console.error('ERROR IN buttonRemoveItemClick: ' + e);
        }
    },

    _end: 0
});

Part 3 – Services

1. Type Definition

To keep it simple, the shopping list will contain entries with just two attributes: item and amount.

To save this data in a single Variable, you need to create a new Type Definition first.

Insert a new Type Definition

Close that confusing window

Type "myEntry" for name

Right-click the new definition to add a new field

Type fieldName "item" and leave fieldType to "String". Then click addField

Type in “amount” for the second field’s name and leave the fieldType to “String”.

A little bit confusing: After typing “amount” and hitting Enter, the field is created but it doesn’t appear in the navigator view on the upper left hand side.

But if you close and open your project again, it will be displayed correctly.

2. Variable

Now you can create a Variable which uses the new Type Definition. Be sure to check “isList”.

Insert a new Variable

Enter "varTheEntries" for name, choose your Type Definition as type and check the "isList" checkbox

Now you can bind the dojo grid’s dataSet to varTheEntries

At this point, the application should be functional, but also quite boring…

Part 4 – Selenium

Create a new JavaService

Replace the created code with the one shown below.


package com.selenium;

import com.wavemaker.runtime.javaservice.JavaServiceSuperClass;
import com.wavemaker.runtime.service.annotations.ExposeToClient;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.openqa.selenium.support.ui.ExpectedConditions;

@ExposeToClient
public class SeleniumTest extends JavaServiceSuperClass {

    public SeleniumTest() {
        super(INFO);
    }

    public void createEntries(int numberToCreate) {

        WebDriver driver = new FirefoxDriver();
        WebDriverWait wait = new WebDriverWait(driver, 20);

        driver.get("http://localhost:8094/Selenium_Integration_01/");

        for (int i = 0; i < numberToCreate; i++) {
            // click the textBox and enter text for item
            WebElement textBoxItem = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//input")));
            // alternative
            //WebElement textBoxItem = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("/html/body/div/div/div/div[2]/div[2]/div[3]/div/div/div[2]/div/input")));
            textBoxItem.click();
            textBoxItem.sendKeys("awesomeItem " + i);

            // click the textBox and enter text for amount
            WebElement textBoxAmount = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("/html/body/div/div/div/div[2]/div[2]/div[3]/div/div[2]/div[2]/div/input")));
            textBoxAmount.click();
            textBoxAmount.sendKeys("some");

            // click button "Add Item"
            WebElement buttonAddItem = wait.until(ExpectedConditions.elementToBeClickable(By.id("main_buttonAddItem")));
            buttonAddItem.click();
        }
    }

    public void deleteEntries(int numberToDelete) {

        WebDriver driver = new FirefoxDriver();
        WebDriverWait wait = new WebDriverWait(driver, 20);

        driver.get("http://localhost:8094/Selenium_Integration_01/");

        try {
            for (int j = 0; j < numberToDelete; j++) {

                // Select first dojoGrid column
                WebElement dojoColumnFirst = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//td")));
                dojoColumnFirst.click();

                // Click button Remove Item
                WebElement buttonRemove = wait.until(ExpectedConditions.elementToBeClickable(By.id("main_buttonRemoveItem")));
                buttonRemove.click();
            }
        } catch (Exception e) {
            log(ERROR, "Error in deleteEntries: " + e);
        }
    }
}

The JavaService now has two methods: createEntries and deleteEntries.
Create two serviceVariables and bind them to those methods.
Don’t forget to bind the inputs. svCreateEntries expects an Integer for numberToCreate, svDeleteEntries expects an Integer for numberToDelete.
So you have to bind those inputs to the corresponding text boxes on the canvas.


Finally, you have to tell the two “Let’s go” buttons what to do when they are clicked.
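A minimal sketch of what the two click handlers in the page's JavaScript could look like (the button and handler names here are assumptions; svCreateEntries and svDeleteEntries are the service variables created above, invoked via update()):

buttonCreateEntriesClick: function(inSender) {
    // runs the createEntries Java service method via the bound service variable
    this.svCreateEntries.update();
},

buttonDeleteEntriesClick: function(inSender) {
    // runs the deleteEntries Java service method
    this.svDeleteEntries.update();
},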

     

And that's it, the application should now do the following:

You: “Now I’m clicking Let’s go”

WebDriver: “Hey Firefox, give me a new window and open “http://localhost:8094/Selenium_Integration_01/”

Firefox: “Ok!”

WebDriver: “Wow, nice website! There should be an element at position “//input”. Let’s check if I can click it. If not, let’s wait 20 seconds for it. Ah, there it is! Now I click it and send some text.”

WebDriver: “Another element at position “/html/body/div/div/div/div[2]/div[2]/div[3]/div/div[2]/div[2]/div/input”. Waiting… got it! Click, send text.”

WebDriver: “Me again? A button with id “main_buttonAddItem”? There it is. Click it!”

…and a second time…

Quite the same procedure as for createEntries…

But now, we simply tell the WebDriverWait to select the first column of the dojo grid (“//td”) and click the Remove Item button – in this case both actions three times.

If you would like to delete column x, you can do the following:

While recording with the Selenium IDE, click the second column of the grid, then the third column. When selecting “xpath:position” as target, you will get:

As you can see, the difference is the number of the first div. To delete column x, you would write something like this:


if (position == 1) {
    WebElement dojoColumnFirst = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//td")));
    dojoColumnFirst.click();
} else {
    WebElement dojoColumnX = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//div[" + position + "]/table/tbody/tr/td")));
    dojoColumnX.click();
}

Problem: The IDs

The text box for the item's name is called "inputItem" within the project. But if you run the application and analyze the element with Firebug, you will realize that it's now called "dijit_form_TextBox_0".

That wouldn't be a problem if it kept this id all the time, but it doesn't! Every element whose id contains a number can change at runtime.

So the best way to identify those elements is using XPath. (I didn't have problems with button ids.)

1. Using the Selenium IDE to get xpath or id of an element

  • Run the application and hit CTRL + ALT + S to open the Selenium IDE

  • The Selenium IDE should now use the project’s URL and be in record mode
  • Click the Item text box, enter some text and hit Enter
  • The Selenium IDE should now show two commands

  • Click the type command to show its details.

  • You see, the type command uses the id “dijit_form_TextBox_0”, which we don’t want to use.
  • Click the Target select menu to show all available methods for identifying the text box element. You will see that, except for one, they all contain "…TextBox_0…". So let's trust the single one that doesn't 😉
  • Code line in Java Service: WebElement textBoxItem = wait.until(ExpectedConditions.elementToBeClickable(By.xpath(“//input“)));

  • Close Selenium IDE

2. Using Firebug to get xpath of an element

  • On the running application’s window, right-click the Item text box and “Inspect Element with Firebug”

  • Now the console will open and highlight the part of code which was generated for the text box’s input field

  • Now just hover your cursor over the text in that highlighted area and you will see the complete xpath expression

  • Unfortunately, I didn’t find a way to copy & paste that expression, so you have to write it down
  • Code line in Java Service: WebElement textBoxItem = wait.until(ExpectedConditions.elementToBeClickable(By.xpath(“/html/body/div/div/div/div[2]/div[2]/div[3]/div/div/div[2]/div/input“)));

Ok, that's it so far. I will keep trying some other cool stuff and hopefully I'll soon have something more to share 🙂

Have fun, kindest regards

Tobi

Download the sample project: Selenium_Integration_01.1.Alpha1.zip (22.0 MiB)