Mobile Automation Gestures Complete Guide (Basics to Advanced)
In the previous article of this Appium tutorial series, we saw How To Write First Appium Script. In this article, we will look at mobile automation gestures.
It is always interesting to know what more can be done with a tool like Appium.
Mobile application testing has its own challenges and advantages, but what is more insightful is the set of advanced automation gestures that can be achieved alongside conventional test execution and coverage.
Some of the advanced mobile automation tasks that can be accomplished using Appium are as follows:
In-app Authentication
With so many security breaches reported day by day, it has become imperative for app developers to make mobile applications highly secure.
This directly implies that we, as testers, have to test that security thoroughly.
But can we automate the in-app authentication as well?
Well, Appium recently added a feature that enables in-app authentication on iOS simulators using Face ID. This comes as an addition to older authentication tests such as Touch ID authentication on older iOS devices.
Even though Face ID support currently covers only iOS simulators and not real devices, and is limited to the operations listed below, it can still ease the process to some extent.
The operations that can be handled are as follows:
- Enrolling a face, similar to scanning a face during device setup.
- Detecting a matching face.
- Detecting a non-matching face.
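As an illustration, here is a minimal sketch of how these operations might be driven from a Java test, assuming the XCUITest driver's mobile: enrollBiometric and mobile: sendBiometricMatch extensions on an iOS simulator; the driver variable and helper method are placeholders:

import com.google.common.collect.ImmutableMap;
import io.appium.java_client.ios.IOSDriver;

// Hypothetical helper; "driver" must be connected to an iOS simulator.
public static void simulateFaceId(IOSDriver driver, boolean match) {
    // Enroll a face on the simulator, similar to scanning a face during device setup
    driver.executeScript("mobile: enrollBiometric", ImmutableMap.of("isEnabled", true));
    // Report a matching (match = true) or non-matching (match = false) face scan to the app
    driver.executeScript("mobile: sendBiometricMatch",
            ImmutableMap.of("type", "faceId", "match", match));
}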
Custom APIs and Event APIs
Imagine a scenario where you need to start a couple of simulators, but one simulator takes one second to come up and the other takes four seconds. With multiple simulators up and running, it becomes very difficult to keep track of the different time lags.
To solve this, the Appium Events API can be used, which provides built-in events called server events.
There are also custom events, which let you differentiate between Appium's own server events (such as simulator start time) and events specific to the application under test. This, in turn, gives us an idea of how much time a particular step takes during automation.
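As a rough sketch, assuming an AppiumDriver instance named driver and the Java client's logEvent/getEvents API (available from java-client 7.3 onwards), logging and reading events could look like this; the vendor and event names are placeholders:

import io.appium.java_client.serverevents.CustomEvent;
import io.appium.java_client.serverevents.ServerEvents;

// Record a custom event on the Appium server's event timeline
CustomEvent loginDone = new CustomEvent();
loginDone.setVendor("myapp");         // placeholder vendor prefix
loginDone.setEventName("loginDone");  // placeholder event name
driver.logEvent(loginDone);

// Retrieve the combined timeline of server events (e.g. simulator start time) and custom events
ServerEvents timeline = driver.getEvents();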
Streaming across Devices
How would it be if we could stream the application onto browsers?
That is now possible using Appium: you can actually watch the screen navigations from your device in the browser. By configuring a free port of your choice, a particular device screen is streamed to your browser.
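As a sketch, the UiAutomator2 driver exposes a mobile: startScreenStreaming extension for this on Android; the argument names and port below are illustrative, and the feature may need extra prerequisites installed on the Appium host:

import com.google.common.collect.ImmutableMap;

// Start streaming the device screen over an embedded HTTP server (argument names are illustrative)
driver.executeScript("mobile: startScreenStreaming",
        ImmutableMap.of("httpPort", 8093, "quality", 75));
// The live screen should now be viewable in a browser, e.g. at http://localhost:8093
driver.executeScript("mobile: stopScreenStreaming");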
Manage the App and Device
When we talk about managing the app and the device, we mean automating things such as working with the system (built-in) apps and installing, removing, or launching apps. This is frequently required, and Appium helps in doing so.
Similarly, there are several other actions that can be automated using Appium, such as working with files on the device, handling the orientation or size of the window screen, and simulating phone calls as well as SMS messages.
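A minimal sketch of such calls with the Java client, assuming an AndroidDriver instance named driver; the paths, package names, and phone number are placeholders:

import java.io.File;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.GsmCallActions;
import org.openqa.selenium.ScreenOrientation;

driver.installApp("/path/to/app.apk");                               // install an app
driver.activateApp("com.android.settings");                          // launch a built-in (system) app
driver.removeApp("com.example.myapp");                               // uninstall an app
driver.pushFile("/sdcard/Download/data.txt", new File("data.txt"));  // work with files on the device
driver.rotate(ScreenOrientation.LANDSCAPE);                          // change the screen orientation
driver.sendSMS("5551234567", "Hello from the test");                 // simulate an incoming SMS (emulators only)
driver.makeGsmCall("5551234567", GsmCallActions.CALL);               // simulate an incoming call (emulators only)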
Creating a Page Object Model
Just as the Page Object Model design pattern has vastly conquered the test automation field, we see a similar trend in mobile application automation. Creating and implementing the Page Object Model helps tremendously because it separates the business logic from our test cases, which leads to greater flexibility and provides clarity to the test automation suite.
With a dynamic and constantly changing mobile app market, we can change the business logic in the page object files while keeping our test cases untouched.
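As an illustration, a page object for a hypothetical login screen could look like the sketch below, using the Java client's AppiumFieldDecorator; the locator ids are placeholders:

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.pagefactory.AndroidFindBy;
import io.appium.java_client.pagefactory.AppiumFieldDecorator;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {
    @AndroidFindBy(id = "com.example:id/username")
    private WebElement usernameField;

    @AndroidFindBy(id = "com.example:id/login")
    private WebElement loginButton;

    public LoginPage(AppiumDriver driver) {
        PageFactory.initElements(new AppiumFieldDecorator(driver), this);
    }

    // Business logic lives here, so the test cases themselves stay untouched
    public void loginAs(String username) {
        usernameField.sendKeys(username);
        loginButton.click();
    }
}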
Generation of Extent Reports
Adding Extent Reports to an Appium Java project makes the entire test procedure even more valuable. After all, once test execution has finished, the one thing everyone is truly interested in is the final outcome of the test runs. This is where Extent Reports come into the picture and play an important role. Using Appium, we can easily integrate our tests with Extent Reports.
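A minimal sketch of wiring Extent Reports into an Appium Java test, assuming the com.aventstack:extentreports dependency (version 5+); the report path and log messages are placeholders:

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

// Create the report and attach an HTML reporter
ExtentReports extent = new ExtentReports();
extent.attachReporter(new ExtentSparkReporter("target/extent-report.html"));

// Log the outcome of an Appium test
ExtentTest test = extent.createTest("Login test on Android");
test.info("Device: Pixel emulator");
test.pass("Tapped the login button");

// Write the HTML report to disk
extent.flush();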
Combining with CI/CD practices
Appium also helps in setting up continuous integration for the tests: it helps us automate the deployment and configuration of the test application, create test executions that are triggered by the CI pipeline, and, going forward, analyze the test results to drive valuable feedback.
Mobile Automation Gestures
In addition to the above-mentioned abilities of Appium, it also supports the following advanced gestures:
- To tap on an element.
- To tap on x, y coordinates.
- To press an element for a particular duration.
- To press x, y coordinates for a particular duration.
- To swipe horizontally: Using start and end percentage of the screen height and width.
- To swipe vertically
- To swipe with two/multiple fingers
- To drag (swipe) one element onto another element
The above-mentioned gestures are widely used in day-to-day automation test scripts, and Appium supports them through the TouchAction class.
The pseudo-code to create a TouchAction instance is:
TouchAction touchAction = new TouchAction(driver);
The pseudo-code to tap on an element is:
// Requires the static imports io.appium.java_client.touch.TapOptions.tapOptions and io.appium.java_client.touch.offset.ElementOption.element
TouchAction touchAction = new TouchAction(driver);
touchAction.tap(tapOptions().withElement(element(androidElement))).perform();
Some of the supported methods are:
Method Name | Purpose |
---|---|
press(PointOption pressOptions) | Press action on the screen. |
longPress(LongPressOptions longPressOptions) | Press and hold at the center of an element until the context menu event has fired. |
tap(PointOption tapOptions) | Tap on a position. |
moveTo(PointOption moveToOptions) | Move the current touch to a new position. |
cancel() | Cancel this chain of actions. |
perform() | Send this chain of actions to the Appium server for execution. |
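For instance, a long press on an element (pressing for a particular duration) combines several of the methods above; a rough sketch, where menuItem is a placeholder element:

import static io.appium.java_client.touch.LongPressOptions.longPressOptions;
import static io.appium.java_client.touch.offset.ElementOption.element;
import static java.time.Duration.ofSeconds;

// Press and hold the element for two seconds, then release
new TouchAction(driver)
        .longPress(longPressOptions().withElement(element(menuItem)).withDuration(ofSeconds(2)))
        .release()
        .perform();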
Before you proceed with any of the following gestures (tap on an element, tap on x, y coordinates, press an element for a particular duration, press x, y coordinates for a particular duration), there are a few things you should be clear about.
1. Understand the significance of perform() method
We need to give special importance to the perform() method, because if it is not called, your test scripts might not run at all. Whenever perform() is called, it sends all the accumulated actions and instructions, converted to JSON, to the Appium server. Having said this, we can conclude that for any gesture code in mobile automation with Appium, the last method called should be perform().
2. How to get the x, y coordinates
Knowing how to get the x, y coordinates comes in very handy when selectors cannot be found. Appium supports native applications well, and if a mobile application is built natively you can usually find unique selectors for your test scripts. However, when the application is built with cross-platform technologies such as React Native, Ionic, or the Xamarin framework, it can be difficult to locate elements, and you may start wondering whether they are present at all. This is when you need the x, y coordinates.
In order to find the x,y coordinates there are different approaches based on the OS of your mobile application.
Getting pointer location in iOS | Getting pointer location in Android |
---|---|
Unfortunately, iOS does not support a pointer location. With no third-party apps or tools to serve the purpose, all you can do is calculate the x, y coordinates from the screen resolution and estimation. | Go to Settings > Developer options and enable Pointer location. To get the coordinates, tap on the desired spot and read the values at the top of the screen. |
The pseudo-code to tap on x, y coordinates is:
TouchAction touchAction = new TouchAction(driver);
touchAction.tap(PointOption.point(1280, 1013)).perform();
Now you would want to know the swiping actions on the mobile application.
It is interesting to know that swiping is the result of a combination of two different actions:
Swiping = tapping + moving actions
In simple steps:
- First press on a particular point ->
- Wait (the duration of the swipe) ->
- Move (moveTo()) to the target location ->
- Call the release() method
The pseudo-code to swipe is:
// Requires the static imports io.appium.java_client.touch.WaitOptions.waitOptions and java.time.Duration.ofMillis
TouchAction swipe = new TouchAction(driver)
        .press(PointOption.point(972, 500))
        .waitAction(waitOptions(ofMillis(800)))
        .moveTo(PointOption.point(108, 500))
        .release()
        .perform();
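The two-finger swipe from the gestures list can be sketched with MultiTouchAction, which runs several TouchAction chains together; the coordinates below are placeholders:

import io.appium.java_client.MultiTouchAction;
import io.appium.java_client.TouchAction;
import io.appium.java_client.touch.offset.PointOption;
import static io.appium.java_client.touch.WaitOptions.waitOptions;
import static java.time.Duration.ofMillis;

// Two parallel swipe-up chains, one per finger
TouchAction finger1 = new TouchAction(driver)
        .press(PointOption.point(300, 1200)).waitAction(waitOptions(ofMillis(800)))
        .moveTo(PointOption.point(300, 400)).release();
TouchAction finger2 = new TouchAction(driver)
        .press(PointOption.point(700, 1200)).waitAction(waitOptions(ofMillis(800)))
        .moveTo(PointOption.point(700, 400)).release();
new MultiTouchAction(driver).add(finger1).add(finger2).perform();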
In order to study each of the advanced automation gestures, refer to a special tutorial which will be up soon.