Posts Tagged ‘python’

Appium + Sauce Labs Bootcamp: Chapter 3, Working with Hybrid Apps and Mobile Web

July 6th, 2015 by Isaac Murchie

This is the third in a series of posts that discuss using Appium with Sauce Labs. Chapter 1 covered Language Bindings; Chapter 2 discusses Touch Actions; this chapter covers Testing Hybrid Apps & Mobile Web; and Chapter 4 is about Advanced Desired Capabilities.

Mobile applications can be purely native, or web applications running in mobile browsers, or a hybrid of the two, with a web application running in a particular view or set of views within a native application. Appium is capable of automating all three types of applications, by providing different “contexts” in which commands will be interpreted.


A context specifies how the server interprets commands, and which commands are available to the user. Appium currently supports two contexts: native and webview. Both of these are handled by different parts of the system, and may even proxy commands to another framework (such as webviews on Android, which are actually served by a managed ChromeDriver instance). It is important to know what context you are in, in order to know how you can automate an application.
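In the Appium Python client, the available contexts are exposed on the driver and switching is a single call. The snippet below is a minimal sketch: the driver setup is elided, and context names such as `WEBVIEW_1` vary by app and platform; only the small helper that picks out a webview context is plain, runnable Python.

```python
# Sketch: choosing and switching to a webview context with the
# Appium Python client. Driver setup is elided; context names like
# 'NATIVE_APP' and 'WEBVIEW_1' depend on the app under test.

def pick_webview(contexts):
    """Return the first webview context name, or None if the
    app is purely native."""
    for name in contexts:
        if name.startswith('WEBVIEW'):
            return name
    return None

# With a live Appium session, usage would look like:
#
#   contexts = driver.contexts            # e.g. ['NATIVE_APP', 'WEBVIEW_1']
#   webview = pick_webview(contexts)
#   if webview:
#       driver.switch_to.context(webview)       # commands now hit the webview
#       ...
#       driver.switch_to.context('NATIVE_APP')  # back to the native layer
```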

Native contexts

Native contexts refer to native applications, and to those parts of hybrid apps that are running native views. Commands sent to Appium in the native context execute against the device vendor’s automation API, giving access to views and elements through name, accessibility id, etc. This context also provides commands that interact directly with the device itself, for operations such as changing the wifi connection or setting the location. These very powerful operations are not available within the context of a webview.

In addition to native and hybrid applications, the native context can be accessed in a mobile web app, in order to use some of the methods that are only available there. In this case it is important to understand that the commands are not running against the web application in the browser, but rather are interacting with the device and the browser itself. (more…)

Appium + Sauce Labs Bootcamp: Chapter 2, Touch Actions

June 15th, 2015 by Isaac Murchie

This is the second in a series of posts that discuss using Appium with Sauce Labs. In the first chapter, we covered Language Bindings. This installment discusses Touch Actions; Chapter 3, Testing Hybrid Apps & Mobile Web; and Chapter 4 is about Advanced Desired Capabilities.

One aspect of mobile devices that needs to be automated in order to fully test applications, whether native, hybrid, or web, is utilizing gestures to interact with elements. In Appium this is done through the Touch Action and Multi Touch APIs. These two APIs come from an early draft of the WebDriver W3C Specification, and are an attempt to atomize the individual actions that make up complex actions. That is to say, it provides the building blocks for any particular gesture that might be of interest.

The specification has changed recently and the current implementation will be deprecated in favor of an implementation of the latest specification. That said, the following API will remain for some time within Appium, even as the new API is rapidly adopted in the server.

Touch Actions

The Touch Action API provides the basis of all gestures that can be automated in Appium. At its core is the ability to chain together individual actions ad hoc, which are then applied to an element in the application on the device. The basic actions that can be used are:

  • press
  • longPress
  • tap
  • moveTo
  • wait
  • release
  • cancel
  • perform

Of these, the last deserves special mention. The action perform actually sends the chain of actions to the server. Before calling perform, the client is simply recording the actions in a local data structure, but nothing is done to the application under test. Once perform is called, the actions are wrapped up in JSON and sent to the server where they are actually performed! (more…)
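To make the record-then-perform model concrete, here is a toy re-implementation of the pattern. This is not Appium’s actual client code, just an illustration: each method only appends to a local list, and nothing is serialized until perform is called.

```python
import json

class ToyTouchAction:
    """Toy illustration of the Touch Action pattern: methods only
    record steps locally; perform() serializes them, much as the
    real client does before sending them to the Appium server."""

    def __init__(self):
        self._actions = []

    def _add(self, action, **options):
        self._actions.append({'action': action, 'options': options})
        return self  # returning self is what makes chaining work

    def press(self, x, y):
        return self._add('press', x=x, y=y)

    def move_to(self, x, y):
        return self._add('moveTo', x=x, y=y)

    def release(self):
        return self._add('release')

    def perform(self):
        # The real client POSTs this JSON to the server, which
        # actually executes the gesture on the device.
        return json.dumps(self._actions)

# A drag gesture: nothing "happens" until perform() is called.
payload = ToyTouchAction().press(10, 10).move_to(10, 400).release().perform()
```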

Appium + Sauce Labs Bootcamp: Chapter 1, Language Bindings

June 1st, 2015 by Isaac Murchie

Welcome to the first in our new series, Appium + Sauce Labs Bootcamp. This first chapter will cover an overview of Appium and its commands, demonstrated with detailed examples of the Java and Python language bindings. Later we will follow up with examples in Ruby. This series goes from fundamental concepts to advanced techniques using Appium and Sauce Labs, with the difficulty ranging from beginner to advanced. In Chapter 2 we cover Touch Actions; Chapter 3, Testing Hybrid Apps & Mobile Web; and Chapter 4 is about Advanced Desired Capabilities. (more…)

Guest Post: Cross-Browser Selenium Testing with Robot Framework and Sauce Labs

April 3rd, 2014 by Bill McGee

Ever wondered how to keep your Selenium tests up-to-date with your ever-changing user interface?

Sauce Labs customer Asko Soukka set out to answer just that in his post, “Cross-Browser Selenium Testing with Robot Framework and Sauce Labs”.

See a snippet below:

Do you try to fix your existing tests, or do you just re-record them over and over again?

In the Plone Community, we have chosen the former approach (Plone is a popular open source CMS written in Python). We use a tool called Robot Framework to write our Selenium acceptance tests as maintainable BDD-style stories. Robot Framework’s extensible test language allows us to describe Plone’s features in natural language sentences, which can then be expanded into either our domain-specific or Selenium WebDriver API based testing language.

– Asko Soukka, March 20, 2014

Asko walks you through the process of installing Robot Framework, writing and running a Selenium test suite in Robot, to refactoring that suite to run cross-browser on Sauce Labs. Be sure to check out the rest of his excellent post and tutorial here.

Do you have a topic you’d like to share with our community? We’d love to hear from you! Submit topics here, feel free to leave a comment, or tweet at us any time.

Python Virtualenv

July 26th, 2012 by The Sauce Labs Team

A Python virtualenv is a Python interpreter plus a set of installed Python packages. Packages are installed separately from the main system, so you don’t need to use sudo/su or worry about installing things system-wide. Since the interpreter (the python executable you run) and packages in a virtualenv are separate from other virtualenvs, you can switch between different versions of Python and different sets of installed packages with a single command.

Using virtualenv lets you do things like:

  • Replicate your production Python environment in a dev setup so you can be sure you’re writing code and tests using the same package versions your deployed code will use.
  • Create environments with the same set of Python packages but using different versions of the Python interpreter (e.g., Python 2.5, Python 2.7, and PyPy).
  • Set up experimental environments for trying out new Python package versions or new Python software projects.

The easiest way to get going with virtualenv on Mac and Linux is to use the virtualenv-burrito installer. This one-line command installs virtualenv and virtualenvwrapper (a nice way to use virtualenvs):

curl -s | $SHELL

Once installed, you can make new virtualenvs with mkvirtualenv <name>, install packages with pip install <package>, and switch between virtualenvs using workon <name>.

Let’s make a virtualenv and run the example Sauce Labs test:

mkvirtualenv saucelabs
pip install selenium

Log in to (or create a free account on) Sauce Labs. Go to the Python getting started page, copy your private curl command, and run it like:

curl -s | bash

To install the same Python package versions in another virtualenv we can use pip freeze to get what’s in our current environment, save it to a file, and use that file as an install list:

workon saucelabs
pip freeze > requirements.txt
mkvirtualenv likesaucelabs
pip install -r requirements.txt
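To see why pinning versions this way matters, here is a quick sketch of spotting version drift between two environments by diffing their pip freeze output. The function is plain Python; the package names and versions below are made up for illustration.

```python
def freeze_diff(env_a, env_b):
    """Compare two `pip freeze` outputs (lists of 'name==version'
    lines) and return the packages whose versions differ."""
    def parse(lines):
        return dict(line.split('==', 1) for line in lines if '==' in line)
    a, b = parse(env_a), parse(env_b)
    return {name: (a[name], b[name])
            for name in a.keys() & b.keys() if a[name] != b[name]}

# Hypothetical freezes from two virtualenvs:
prod = ['selenium==2.25.0', 'requests==0.14.0']
dev  = ['selenium==2.25.0', 'requests==0.13.0']
# freeze_diff(prod, dev) → {'requests': ('0.14.0', '0.13.0')}
```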

By developing, testing, and running production Python code in virtualenvs created using the same requirements files, you greatly reduce the risk of writing bugs which only show up in one environment but not another. We highly recommend using this tool to help ship code faster.

Running your Selenium tests in parallel: Python

September 2nd, 2009 by Santiago Suarez Ordoñez

This is the first post in our series “Running your Selenium tests in parallel”, in which we’re going to explain how to set up a concurrent execution environment and considerably reduce your testing times.

The first client language we’re going to address, as the title says, is Python. To start, let’s get a set of Selenium Python tests to use:

The tests are stored in a public github project. You can see the code there or even download them in a zip file: (28KB)

This set of tests validates our site. It checks basic structure, our login form, our signup form, and our feedback tab (UserVoice powered). The tests are grouped into Python files based on the functionality they address.

Note: These tests are written to run against SauceRC, our own service, which takes care of all the concurrency work on the RC server side and launches multiple concurrent browsers.

If you run these tests in your local environment, you should not send concurrent jobs to a single Selenium RC server. Running more than two tests at a time against one server consistently degrades performance to the point that any reduction in total test time becomes insignificant.

Another alternative would be to use Selenium Grid and manage a group of test servers yourself. SauceRC offers a suite of useful extras and eliminates the maintenance headaches of the DIY approach.

First approach: One by one execution

So, the regular way to run these would be to run each file from the command line, like so:

$ python
$ python
$ python
$ python

As a first approach it’s not that bad; your tests will run, and you’ll get decent output with the results.
Execution time: 16.2 minutes

16.2 minutes isn’t that bad. The problem is that once your test base starts growing, the time and effort it will take you to run these tests will increase considerably. We’re talking about hours in some cases.

The next step is to write a small script that runs them automatically, so the only thing you have to do is run a single Python script that takes care of the rest:

$ python

The code in this script is:

import os
import glob

# Run each test file sequentially, one interpreter per file.
tests = glob.glob('test*.py')
for test in tests:
    os.system('python %s' % test)

Easier to run, but not much faster…
Execution time: 16 minutes

Second approach: A process per each test set

Let’s improve this using subprocess, one of Python’s multiple process libraries.

from subprocess import Popen
import glob

# Launch one process per test file, all running concurrently.
tests = glob.glob('test*.py')
processes = []
for test in tests:
    processes.append(Popen('python %s' % test, shell=True))

# Wait for every process to finish before exiting.
for process in processes:
    process.wait()
Now our tests run concurrently, using a separate process per Python file (4 processes total).
Execution time: 8.3 minutes

This is much faster! It reduces the whole execution time from the time it takes to run all the tests one after another to the time it takes the longest of the four sets of tests to end.

Note: One drawback to this method is that the output we receive isn’t in any particular order. You can change that by setting the stdout parameter in the Popen instantiation and then concatenating the output in order.
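For example, capturing each process’s output and printing it only after the processes finish keeps the reports grouped in launch order. The sketch below uses trivial child processes that just print a label; in practice you would swap in your real test commands.

```python
import sys
from subprocess import Popen, PIPE

# Stand-ins for 'python test_foo.py' invocations: each child just
# prints a label. Replace cmds with your real test commands.
cmds = [[sys.executable, '-c', f"print('suite {i} done')"]
        for i in range(3)]

# Launch all processes concurrently, capturing stdout so their
# output doesn't interleave on the terminal.
processes = [Popen(cmd, stdout=PIPE) for cmd in cmds]

# Gather each process's output in launch order once it exits.
reports = []
for p in processes:
    out, _ = p.communicate()
    reports.append(out.decode().strip())

print('\n'.join(reports))
```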

The definitive solution: A process per test

We can further reduce the execution time by running all 14 individual test methods in parallel. The easiest way we’ve found to do that in Python is to use nose.

First, install it:

$ easy_install nose==0.11 multiprocessing

Now, change to the directory containing your tests, run nose, and enjoy:

$ nosetests --processes=14

Much easier to run (no helper script for this), cleaner output…
Execution time?
3 minutes!

Do you have a better way to run your tests concurrently? Tell us about it in the comments.