
Selenium Advanced Concepts

  • Writer: mohdmyyusuf
  • Mar 4, 2021
  • 11 min read

Updated: Jul 9, 2021

There are situations where you get stuck and find it very difficult to proceed with automating the AUT (application under test). These situations include:

  1. The web application is built with a Flex UI.

  2. The application is built using AngularJS.

  3. The elements are hard to locate because they sit inside a shadow DOM (shadow root).

  4. The AUT has synchronization or timeout issues.

  5. The test has to work across multiple browser tabs.





While automating web applications we work with different WebElements, and to work with them we first need to find them using locators. Selenium provides two main methods to locate elements: findElement and findElements. Let's look at the differences between the two methods from several aspects.


1. Usage: If you want to access only one element on a webpage, use the findElement method. If multiple elements may share the same locator, use the findElements method; it captures all the elements that match that locator.

2. Return type: The findElement method returns a single web element matching the locator. If several elements share the same locator, findElement returns only the first matching element, not all of them. It returns an object of type WebElement:

WebElement elementName = driver.findElement(By.LocatorStrategy("LocatorValue"));

The findElements method searches the whole web page for the given locator and returns a list of all the WebElements that match it:

List<WebElement> elementName = driver.findElements(By.LocatorStrategy("LocatorValue"));

3. Exception: The findElement method throws a NoSuchElementException if no element matching the locator is found on the page. The findElements method, on the other hand, returns an empty list when nothing matches.
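The difference between the two contracts can be modeled in plain Java with no browser involved; the helper names firstMatch and allMatches below are made up for illustration and are not Selenium APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NoSuchElementException;

public class LocatorContractDemo {

    // Models findElement: returns the first match or throws NoSuchElementException.
    static String firstMatch(List<String> page, String locator) {
        for (String el : page) {
            if (el.startsWith(locator)) {
                return el;
            }
        }
        throw new NoSuchElementException("No element for locator: " + locator);
    }

    // Models findElements: returns all matches, or an empty list when nothing matches.
    static List<String> allMatches(List<String> page, String locator) {
        List<String> result = new ArrayList<>();
        for (String el : page) {
            if (el.startsWith(locator)) {
                result.add(el);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> page = List.of("link:home", "link:about", "button:submit");

        System.out.println(firstMatch(page, "link"));        // prints link:home (first match only)
        System.out.println(allMatches(page, "link").size()); // prints 2 (all matches)
        System.out.println(allMatches(page, "input"));       // prints [] (empty list, no exception)
    }
}
```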



Handling authentication pop-up in automation

Using AutoIt: Follow the steps below:

1. In an AutoIt script, pause until the window titled "Authentication Required" is active using the WinWaitActive() function.

2. Then send the username and password.

The AutoIt script (saved as a .au3 file and compiled to an executable) looks like this:

WinWaitActive("Authentication Required","")

Send("myusuf{TAB}mypass1707{ENTER}")

From the Selenium test, launch the compiled executable around the navigation call; since driver.get() can block while the authentication dialog is open, the script is typically started just before or in parallel with it:

Runtime.getRuntime().exec("C:\\Download\\AutoItFiles\\ExecutableFiles\\FirefoxBrowser.exe");

driver.get(URL);


By launching the URL with user credentials:

A particular syntax is required to send the username and password with the URL: http://username:password@applicationURL

For example:

String URL = "http://myusuf:mypass1707@www.testurl.com";

driver.get(URL);
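As a quick sketch, the URL can be assembled with String.format; the credentials and host here are the hypothetical ones from the example above. Note that recent browser versions may restrict or strip credentials embedded in URLs, so verify this approach against the browser under test:

```java
public class AuthUrlDemo {

    // Builds a basic-auth URL of the form http://username:password@host.
    static String authUrl(String user, String pass, String host) {
        return String.format("http://%s:%s@%s", user, pass, host);
    }

    public static void main(String[] args) {
        // Hypothetical credentials and host, matching the example above.
        System.out.println(authUrl("myusuf", "mypass1707", "www.testurl.com"));
        // prints http://myusuf:mypass1707@www.testurl.com
    }
}
```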


By using Alert interface:

Alert is an interface that provides multiple methods used with Selenium WebDriver to handle alert-related operations.

For example:

driver.switchTo().alert();
// Selenium WebDriver code using Java to enter username & password
driver.findElement(By.id("username")).sendKeys("myusuf");
driver.findElement(By.id("password")).sendKeys("mypass1707");
driver.switchTo().alert().accept();
driver.switchTo().defaultContent();

How to ignore some test cases

There might be situations when we do not need to run all the test cases, so some must be ignored. We are going to see how to do this in different testing tools and frameworks. First, let's see how to do it in Cucumber. One way to ignore test cases is simply not to include their tag; another is to prefix the tag with a tilde (~) in the "tags" attribute of @CucumberOptions. This works for both scenarios and features, and it can be combined with AND or OR conditions. See the code below:

@RunWith(Cucumber.class)
@CucumberOptions(
    features = "path/of/feature/file/featureFile.feature",
    glue = {"path.of.the.step.definitions"},
    tags = {"@SanityTest", "~@RegressionTest"})

When the test cases tagged "RegressionTest" are marked ignored with the tilde (~), the rest of the scenarios are executed. We can do the same thing in TestNG; below is how:


In TestNG we can ignore a test case using the annotation @Test(enabled = false). A test case annotated this way is skipped and not executed. The code is as follows:

@Test(enabled = false)
public void testShowMessage() {
    System.out.println("Inside testShowMessage()");
    message = "Yusuf";
    Assert.assertEquals(message, messageUtil.printMessage());
}

Grouping test cases:

We create feature files based on the functionality or features of the application. We generally keep all related scenarios in the same feature file, which is good practice and makes test case management convenient. But when the test cases you want to run together are not in the same file, we need to group them. Both Cucumber and TestNG provide grouping functionality.


1. Grouping the test cases in Cucumber:

In Cucumber we have the ability to tag test cases. Test cases that share the same tag form a group and are executed together when that tag is added to the runner file in @CucumberOptions. To tag test cases we give them a name starting with '@' in the feature file; this can be any valid name like "@SmokeTest" or "@RegressionTest". Running grouped test cases needs two configurations: one in the feature file and one in the runner file. We tag the test cases in the feature file and include them in the runner file using the "tags" attribute of @CucumberOptions. Tagging can be done at the feature level and at the scenario level. If a tag is applied to a particular scenario, only that scenario joins the group. If the tag is applied to the feature itself, all scenarios under that feature join the group, so all of them will be executed. Tag at feature level:

@RegressionTest
Feature: site login feature

Scenario: login with valid credentials
Given Open Application and Enter url
Then user is reached on login page
When enter username and password and submits 
Then user is logged on
Then user name is there on home page

Here the tag @RegressionTest has been used at the feature level, so it applies to all scenarios under it. We need to make the following changes in the runner file to run these scenarios:

@RunWith(Cucumber.class)
@CucumberOptions(
features="D:\\Eclipse_Workspace\\Cucumber.bdd.test\\src\\main\\java\\org\\cucumber\\features\\login.feature",
 glue={"org.cucumber.stepdefinitions"},
 format=
 {"pretty",
 "html:target/cucumber-reports/cucumber-pretty",
 "json:target/cucumber-reports/CucumberTestReport.json",
 "rerun:target/cucumber-reports/re-run.txt"},
 monochrome = true,
 strict = true,
 dryRun = false,
 tags = {"@RegressionTest"}
 )
public class SCRunner {

}

2. Grouping the test cases in TestNG:

To group test cases in TestNG we parameterize the @Test annotation with the attribute "groups", which assigns a test case to one or more groups. Suppose you want to add a test case to the smoke test group or the regression group; give the attribute "groups" the value "Smoke" or "Regression". See the code below:


@Test(groups = {"Smoke", "Regression"})
public void a() {
    // code to be executed
}

The method a() is added to the Smoke and Regression groups. Another setting is required: in the testng.xml file we need to use the <groups> tag as follows:

<groups>
    <run>
        <include name="Smoke"/>
    </run>
</groups>

The exact value given to the attribute "groups" in the @Test annotation is given in the name attribute of the <include> tag. The above setting in testng.xml will execute all the test cases grouped under "Smoke".

3. Dependency between methods in TestNG: Sometimes we come across situations where some methods depend on other methods. In such a situation we need to set the execution flow accordingly: if method A() depends on method B(), then A() should be executed after B(), and if B() fails, A() should be skipped. TestNG supports this with the "dependsOnMethods" attribute of the @Test annotation. We need to write the code as follows:


@Test(dependsOnMethods = {"B"})
public void A() {
    // code to execute
}

@Test
public void B() {
    // code to execute
}

In Cucumber, Hooks are used to write methods that run as prerequisites for other steps.


Running a single test case multiple times with different data using a data provider.

Suppose we have to execute a single test case for multiple sets of data; there are two approaches. The first is to create the same method multiple times and pass different sets of data, but this is not the right approach. The second and correct approach is to pass the data to the method dynamically using a TestNG data provider. We tell TestNG that one method will provide the data to the test method, which is then executed once per data set. To do this we give the name of the provider method as the value of the "dataProvider" attribute within the @Test annotation; see the code below:

@Test(dataProvider = "getData") // Here we have referred the dataProvider method.
	public void testOne(String inp, String outp) {
		
		System.out.println(inp);
		System.out.println(outp);
		System.out.println("Test");
	}
@DataProvider // this is the data provider method; its name defaults to the method name
	public Object[][] getData() {
		
		Object obj[][] = new Object[2][2];
		obj[0][0] = "Test value 1";
		obj[0][1] = "Test value 2";
		obj[1][0] = "Test value 3";
		obj[1][1] = "Test value 4";
		
		return obj;	
	}
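TestNG invokes the test method once per row of the returned Object[][]. The loop below is a plain-Java stand-in for that behavior (not TestNG itself), showing how the two rows above turn into two invocations with two parameters each:

```java
public class DataProviderDemo {

    // Same shape of data as the getData() provider above.
    static Object[][] getData() {
        return new Object[][] {
            {"Test value 1", "Test value 2"},
            {"Test value 3", "Test value 4"},
        };
    }

    // Stand-in for the @Test method testOne(String inp, String outp).
    static String testOne(Object inp, Object outp) {
        return inp + " / " + outp;
    }

    public static void main(String[] args) {
        // TestNG performs this iteration itself: one invocation per row.
        for (Object[] row : getData()) {
            System.out.println(testOne(row[0], row[1]));
        }
        // prints:
        // Test value 1 / Test value 2
        // Test value 3 / Test value 4
    }
}
```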

A web application often needs to be tested on multiple browser and OS combinations. To run tests on multiple browsers we can use TestNG. TestNG lets us pass parameters using the <parameter> tag in the testng.xml file. The parameter tag has the attributes "name" and "value", which form a key-value pair. Create a <parameter> tag with name="browser" and value set to the browser name within a <test> tag in testng.xml. We can also define parameters at the <suite> level, i.e. within <suite> but outside the <test> tags. If parameters are defined at both the <suite> and <test> levels, regular scoping rules apply: any class inside a <test> tag sees the value of the parameter defined in that <test>, while classes in the rest of the testng.xml file see the value defined in <suite>.


So let's configure the testng.xml file first. Add the <parameter> tag at the test level, i.e. within the <test> tag; see the testng.xml below:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
	<test thread-count="5" name="Test">
	<parameter name="browser" value="chrome"></parameter>
		<classes>
			<class name="TestNGTest.TestNGOne"></class>
		</classes>
	</test> 
	<test thread-count="5" name="TestOne">
	<parameter name="browser" value="Firefox"></parameter>
		<classes>
			<class name="TestNGTest.TestNGOne"></class>
		</classes>
	</test>
</suite>
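The scoping rule can be modeled as a simple lookup in which a <test>-level value, when present, shadows the <suite>-level one. This is a plain-Java sketch of the rule, not TestNG code:

```java
import java.util.HashMap;
import java.util.Map;

public class ParameterScopeDemo {

    // Resolves a parameter the way TestNG scoping works: <test> value shadows <suite> value.
    static String resolve(String name, Map<String, String> suiteParams, Map<String, String> testParams) {
        return testParams.getOrDefault(name, suiteParams.get(name));
    }

    public static void main(String[] args) {
        Map<String, String> suiteParams = new HashMap<>();
        suiteParams.put("browser", "chrome");        // defined at <suite> level

        Map<String, String> testOneParams = new HashMap<>();
        testOneParams.put("browser", "Firefox");     // overridden at <test> level

        System.out.println(resolve("browser", suiteParams, new HashMap<>())); // prints chrome
        System.out.println(resolve("browser", suiteParams, testOneParams));   // prints Firefox
    }
}
```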


I have added the <parameter> tag twice to run the same method with two different values of "browser", i.e. chrome and Firefox. The notable thing here is that the parameter name "browser" must be the same in testng.xml and in the method that uses its value. The method should be written as follows:

	@BeforeMethod
	@Parameters("browser")
	public void setUp(String browser) {
		
		System.out.println("Browser is " + browser);
	}

This method will get executed twice, once with "chrome" and once with "Firefox" as the browser value. So when we need to do cross-browser testing with TestNG, we can write the code to execute the same method on different browsers.

How to run the failed test cases:

There are situations when some test cases fail due to external issues, like the system being down or improper configuration (an incorrect test bed). In that case we need to re-run those failed test cases to test the system properly. So let's see how to do it in TestNG.


The first way is to run the tests using the testng-failed.xml file, which is created under the "test-output" folder after a run. Open testng-failed.xml, right-click, and select the option Run As TestNG Suite; the tests will be executed and a fresh report will be generated. The file contains the failed tests (the <test> entries with "(failed)" after their name) along with any methods they depend on; if you want to run only the failed tests themselves, comment out the rest. Run the file again and check the report for the failed test cases only. The file testng-failed.xml looks as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite guice-stage="DEVELOPMENT" name="Failed suite [Suite]">
  <listeners>
    <listener class-name="Utils.ListenerClass"/>
  </listeners>
  <test thread-count="5" name="Test(failed)">
    <classes>
      <class name="TestCasesGetReqs.GetPosts">
        <methods>
          <include name="getPhotos"/>
        </methods>
      </class> <!-- TestCasesGetReqs.GetPosts -->
    </classes>
  </test> <!-- Test(failed) -->
</suite> <!-- Failed suite [Suite] -->

We can see name="Test(failed)" in the <test> tag of the file, which marks the tests that failed.


The second way is to automate this: create a class that tells TestNG to execute the tests that failed. Create a TestNG method in the class, create an object of the TestNG class, and call its setTestSuites() method, which accepts a list of strings. Add the path of the testng-failed.xml file to a list of strings, pass that list to setTestSuites(), and then call the run() method of TestNG. It will run the testng-failed.xml file and a fresh report will be generated. See the code below:

import java.util.ArrayList;
import java.util.List;
import org.testng.annotations.Test;
import org.testng.TestNG;

public class FailedTestsExecuter {
	
	@Test
	public void runFailedTests() {
		
		@SuppressWarnings("deprecation")
		TestNG runner  = new TestNG();
		List<String> li = new ArrayList<String>();
		li.add("D:\\codeRepository\\AssignmentCode\\AssignmentProject_RestAssured\\test-output\\testng-failed.xml");
		runner.setTestSuites(li);
		runner.run();
	}

}

The third way is by using the IRetryAnalyzer interface:

Suppose we have a requirement to re-run test cases as soon as they fail. There is an interface called IRetryAnalyzer; we need to implement it, and it has only one method, retry(), to override. In that method we write the logic that decides whether, and how many times, to retry. See the code below:

package com.assign.qa.base.reruntests;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyserFTC implements IRetryAnalyzer{

	int counter = 0;
	int retryLimit = 2;
	@Override
	public boolean retry(ITestResult result) {
		if(counter<retryLimit) {
			counter++;
			System.out.println("Running the "+counter + " time");
			return true;			
		}
		else {
			return false;
		}
	}

}
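TestNG calls retry() each time the test fails and re-runs it while the method returns true. The simulation below reproduces that loop in plain Java (without the TestNG types) to show that retryLimit = 2 means at most three executions of an always-failing test:

```java
public class RetrySimulation {

    // Same logic as RetryAnalyserFTC above, minus the TestNG types.
    static class Retry {
        int counter = 0;
        final int retryLimit = 2;

        boolean retry() {
            if (counter < retryLimit) {
                counter++;
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        Retry analyzer = new Retry();
        int executions = 0;
        boolean failed = true; // simulate a test that always fails

        do {
            executions++; // one run of the failing test
        } while (failed && analyzer.retry());

        System.out.println("Total executions: " + executions);
        // prints Total executions: 3 (the initial run plus 2 retries)
    }
}
```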

Now we need to set the IRetryAnalyzer for individual methods or for the whole suite, meaning we can set it at both the test and suite level. To set it at the test level, assign the complete path of the class implementing IRetryAnalyzer to the attribute "retryAnalyzer" in the @Test annotation. If this method fails, it will be re-executed the specified number of times. See the code below:

// Requires the static imports io.restassured.RestAssured.given and org.hamcrest.Matchers.equalTo
@Test(testName = "Test to fetch photos of album id", retryAnalyzer = com.assign.qa.base.reruntests.RetryAnalyserFTC.class)
public void getPhotos() {
	given().when().get("https://jsonplaceholder.typicode.com/photos/3")
		.then().statusCode(200).assertThat().body("id", equalTo(1))
		.body("albumId", equalTo(1)).body("url", equalTo("https://via.placeholder.com/600/24f355"))
		.header("Content-Type", "application/json; charset=utf-8");
}

Pay attention to retryAnalyzer = com.assign.qa.base.reruntests.RetryAnalyserFTC.class in the @Test annotation. Here I have given the path of the class which implements the IRetryAnalyzer interface. So as soon as the test getPhotos() fails, it will be executed two more times and then shown in the report.


This approach works when only a few particular methods need re-running, so it is feasible to set "retryAnalyzer" in each @Test annotation by hand. But when there are many methods, we should set the retry analyzer at the suite level instead. To do this we use the IAnnotationTransformer interface, so let me explain it first.


IAnnotationTransformer is a TestNG interface with only one method, transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod), which accepts four arguments. The method is invoked by TestNG to modify the behavior of @Test-annotated methods in the test class. Explanation of the parameters:


annotation: The @Test annotation being processed; it exposes methods like setRetryAnalyzer() and getRetryAnalyzer().

testClass: If this annotation is found on a class, this parameter would represent that class.

testConstructor: If this annotation is found on the constructor of a class, this parameter would represent that constructor.

testMethod: If this annotation is found on a method, this parameter would represent that method.

The notable thing is that exactly one of the last three parameters will be non-null. To set the retry analyzer at the suite level, create a class implementing the interface IAnnotationTransformer and implement its transform method. Inside it, call setRetryAnalyzer() on the "annotation" parameter, passing the class that implements IRetryAnalyzer. Please see the code below:

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class AnnoTranformer implements IAnnotationTransformer {

	@Override
	public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod) {
		annotation.setRetryAnalyzer(RetryAnalyserFTC.class);
	}
}

Now make some changes in the testng.xml file. Add a new <listener> tag and give it the name of the class implementing the IAnnotationTransformer interface. The sample xml file is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
<listeners>
	<listener class-name="Utils.ListenerClass"></listener>
	<listener class-name="com.assign.qa.base.reruntests.AnnoTranformer"></listener>
</listeners>
  <test thread-count="5" name="Test">
    <classes>
      <class name="TestCasesGetReqs.GetPosts"/>
      <class name="com.assign.qa.base.testcases.MainPageTests"/>
    </classes>
  </test> <!-- Test -->
</suite> <!-- Suite -->


I have set the name of the class in the <listener class-name="com.assign.qa.base.reruntests.AnnoTranformer"></listener> tag. With this set-up, any failing test will be re-run two times.


Creating a custom report:

It can be done in two ways: the first is by using listeners and the second is without listeners.



If we want Maven to execute the Cucumber test runner, the runner class should contain "Test" in its name.

If multiple methods are assigned priorities in TestNG, say one with priority = -1 and another with priority = 1, the value 0 is not missing: all test cases have priority = 0 by default. So the test case with priority = -1 is executed first, then all the test cases without an explicit priority (default 0), and then the method with priority = 1.
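The ordering is just an ascending sort on the priority values, with 0 as the default. A quick check with four hypothetical test methods, A (priority -1), B and C (default 0), and D (priority 1):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class PriorityOrderDemo {
    public static void main(String[] args) {
        // Hypothetical test methods and their TestNG priorities (0 is the default).
        List<Map.Entry<String, Integer>> tests = new ArrayList<>(List.of(
            Map.entry("D", 1),
            Map.entry("B", 0),
            Map.entry("A", -1),
            Map.entry("C", 0)
        ));

        // TestNG runs @Test methods in ascending priority order.
        tests.sort(Comparator.comparingInt(Map.Entry::getValue));

        tests.forEach(t -> System.out.println(t.getKey() + " (priority " + t.getValue() + ")"));
        // prints A, B, C, D in that order
    }
}
```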





