This blog post was originally posted by Seth Urban on ProtoTest’s blog. ProtoTest is a mobile app test lab that blends UX and QA in a holistic way. Based in Denver, the company tests apps and sites for clients that range from startups to the Fortune 500.
As I’ve discussed in my previous blog posts, Appium from Sauce Labs is a great open source tool. One of the most compelling reasons to use Appium is that it lets test engineers quickly set up automated tests for mobile applications without modifying the tested app’s source code.
However, Appium can be difficult to use when testing iOS applications because it relies on a file called bootstrap.js that runs on the iOS device. This file inspects the running application and locates the on-screen elements for your test to interact with. Tests created with Appium work well using bootstrap.js to find elements. That is, until a new version of the application is delivered; then everything needs to be updated.
Difficulty with Locators
The problem arises when using Appium xpath locators to find elements on the screen. When something changes in the application being tested, for example a button is added or removed, all the xpath locators change too. The best solution is to ask the development team to use names or IDs for all the elements in the application. Elements with specific names or IDs are consistent across builds and can be found easily by Appium in subsequent versions. But that won’t solve the problem of finding every element, since elements such as tree views and lists are better suited to xpaths. And what happens if the developers simply say no?
In a perfect world, there would be some other way to identify elements; one that is just as stable and easy as locator IDs. Fortunately, we have something almost as good: locations.
The location of the element we want to interact with can be used to find it in Appium. Element locations rarely change, and should change even less frequently as the application nears completion. Appium doesn’t provide built-in support for finding elements by location, but it does allow us to extract this information from the application frame.
Engineers can get the ‘page’ source by using the following code:
Now we can get the screen source exactly as Appium sees it. Examining the output of that call, you will notice that this is what the xpath locators are built from. All we need to do is parse that string and dynamically build the proper xpath locator for the element we want to interact with.
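As a rough sketch of the idea in Ruby (the ProtoTest implementation itself is a Java class; the helper name and the `x`/`y` attribute names here are illustrative, since real Appium page source varies across versions):

```ruby
require 'rexml/document'

# Build an XPath for the element whose on-screen location matches (x, y),
# given a page-source XML string such as the one the driver returns.
def xpath_for_location(page_source, x, y)
  doc = REXML::Document.new(page_source)
  target = REXML::XPath.match(doc, '//*').find do |el|
    el.attributes['x'].to_i == x && el.attributes['y'].to_i == y
  end
  return nil unless target

  # Walk up to the document root, recording each node's name and its
  # 1-based position among same-named siblings.
  path = []
  node = target
  while node && node != doc
    siblings = node.parent.children.grep(REXML::Element).select { |c| c.name == node.name }
    path.unshift("#{node.name}[#{siblings.index(node) + 1}]")
    node = node.parent
  end
  '/' + path.join('/')
end
```

Given the screen source for the current view, this returns something like `/AppiumAUT[1]/UIAWindow[1]/UIAButton[2]`, which stays valid across builds as long as the element keeps its location.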
We’ve got the Solution
Fortunately, we have a Java class built already that does this for you:
appiumXpathBuilder - /jazzhands
This class, which you can download from the ProtoTest Github repository, will get the page source from whatever screen you want to test, and build xpath locators for any element with a valid location.
Once the class is instantiated in your test script, use the FindByLocation function to return the xpath locator for that element.
Your tests are now easier to maintain and don’t have to rely on developers providing an ID or a name for all the elements in their application — although that really is the best way to find elements.
Keep in mind also that location-based locators may change depending on the screen size of the device you’re testing on. As long as you’re testing on the same device, this shouldn’t be a problem.
This was originally written by Curtis Siemens for the Zillow Engineering blog; you can find the rest of the post there.
Mobile app UI test automation is still in its infancy. Whereas web testing has had around 20 years to bake good tools such as Selenium, Watir, and many others, mobile apps in their current incarnation and popularity have only existed for about six years, since iOS launched in mid-2007. Because mobile apps are so young, there hasn’t been much time for good test tools to be developed, and most that do exist are developer-centric.
Our team was doing a lot of manual app testing, which left us open to missing the occasional important regression. We started looking at and evaluating automated test tools about 2.5 years ago. Some of the most popular options we encountered along the way were Robotium for Android, Xcode’s Instruments for iOS, and KIF for iOS, all of which we tried with limited success before we landed on Appium as our framework of choice.
Our Robotium framework died within a month – before we had any test cases running on a regular basis. Additions to our KIF test cases plateaued because they require coding ObjectiveC tied directly to our app code. Our Instruments test cases were working but presented a serious maintenance burden.
Robotium Drawbacks:
Had to compile each test case separately. Yes: ant and pom file maintenance, Jenkins compile job creation, and getting Eclipse set up properly along with the Android SDK. It felt really heavy compared to most test case frameworks I’ve used. I could just feel my productivity slow to a crawl.
Had to compile a library version of our app so the test case compile could link to it. This worked for a while, but then one day, although everything compiled fine, the test run-time broke. We didn’t have the time to investigate why, and it just died – real life release schedules cause complicated (read “high maintenance”) test frameworks to die.
Had to code in Java and test code gets intertwined with the app class hierarchy. If you really want quick/agile test case development/maintenance you usually use a scripting language.
Only static test cases were possible. The test case workflow cannot interact with any other Server/DB validation during the test run. You wind up your little test case mouse, let it run, wait for it to finish, and then you can perform some Server/DB validation in the wrapper script. This does not support test cases where you need to validate the Server/DB between various app test actions, and possibly dynamically drive the next app test actions based on Server/DB state. Nope – all you have is very static test cases with post-test-run validation.
Xcode’s Instruments Drawbacks:
Had to record steps in Instruments and then copy out lines to your test script and customize. Your test code can use accessibility labels of the UI which helps, but it still gets its fingers into the view hierarchy. This presented a maintenance hassle with rapid design iterations when working with an agile team.
After struggling with Robotium and Instruments, the Appium experience felt like this:
It took us 2 weeks from first trying Appium (http://appium.io & https://github.com/appium/appium) to getting our first project running, complete with setup and wrapper scripts. We instantly felt the productivity improvements: existing test case maintenance was reduced, and new test case development was definitely faster. Additionally, all the testers enjoyed writing new test cases because it isn’t drudgery.
Appium is modeled after the well-established web-based test framework Selenium, and it is an exact fit for the latest Agile & CI technologies such as Jenkins, nose, and buildout.
This isn’t a “record and playback” UI-oriented test framework, but then again your test cases won’t be brittle the way those systems tend to be. We use the same test code to drive our app testing on iPhone as well as iPad, and in some cases we were able to use the same test cases for Android.
It uses the same language for coding iOS GUI automation as Android GUI automation. If your iOS and Android apps had the exact same element tagging and the exact same screens/layout/workflows you could use the same test code for both. This will probably never be a reality for most companies because iOS & Android support different features and it’s hard to keep both your iOS and Android development and releases in lock-step. But this ability has allowed us to share some common automation libraries.
It allows you to create dynamic test cases that validate any service/DB, and drive dynamic test case actions. This allowed us to add Google Analytics validation to our app tests very rapidly.
It allows you to use the Appium.app MacOS GUI to discover element names. There’s no need to have the app source code, have your testers dig into the source code, or repeatedly sync and recompile the app in Xcode to develop test cases.
It allows you to use xpath, name, and tag_name to find elements. Searching by element ‘name’ is ideal because it accesses the app’s accessibility labels and makes test code less brittle to app changes; when not everything in your app is tagged, xpath is super powerful. All the search methods are a powerful way to access elements without knowing the app class hierarchy.
It allows you to drive automation on iOS/Android devices and iOS/Android Simulators without any code changes (just need to code some slight setup differences).
It boasts low operational and infrastructure costs: we are using 3 Mac Minis to run our Appium automation.
It allows you to upload test cases into Appium cloud and use a provider like SauceLabs to run your tests on a large number of devices and simulators.
It is open source, which means you can add your own features, and get bugs fixed faster.
It can be used to automate your MobileWeb pages on the device’s browser. Again, test code is Selenium just like what you use for DesktopWeb testing. Appium reports that they do MobileWeb testing better because they’ve solved certain issues with using straight Selenium/Webdriver.
Webviews test exactly the same as native app code.
Appium sits on top of Instruments for iOS, and UIAutomator for Android so it can harness the full power of the primary automation platforms.
It runs on Windows/Linux/Mac hosting OS – although Mac supports both iOS/Android, whereas Linux/Windows only support Android.
It can only run 1 test case at a time per hosting machine. Limitations of Instruments (for iOS) force this. You can plug in multiple devices to the hosting machine, but can only harness one of them at a time.
It is hard to get Android automation working for Android OS versions below 4.1. Those OS versions are supported through an additional Selendroid library, but it takes extra work, and Selendroid does not support xpath search for elements, which is a blocker for us, so we haven’t tried to go there.
Test cases do take some tuning to get the timing correct so they don’t fail incorrectly. Sometimes the server your app is talking to can delay its response, and you have to write polling loops to detect when the app is ready for the next test step.
Appium documentation is a little weak – you are probably going to need an experienced coder to lay your Appium foundation.
Like most Mobile automation frameworks your test case runs within the app. This means that it is harder to test scenarios outside the app like device Notifications.
Sometimes the test code needs to scroll up or down to find elements that are currently not visible.
For iOS you can’t hook in an automatic alert handler (for dialog boxes that pop up) like you can in Instruments. Instead you have to know everywhere in the test automation code where you expect an alert to appear, and then overtly switch over to the handler to handle it.
Testing Webviews in Android is currently difficult to code, since discovering the elements is broken (Note: Webviews in iOS work just fine). It looks like this might be fixed in Android OS 4.4.
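The polling loops mentioned in the timing drawback above don’t need a framework; a generic retry helper is enough. A minimal Ruby sketch (the helper name and defaults are ours, not Appium’s):

```ruby
# Generic polling helper: re-run a block until it returns a truthy value
# or the timeout expires. In a real test the block would poll app or
# server state, e.g. retrying a find_element call until a screen renders.
def wait_until(timeout: 10, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    result = yield
    return result if result
    raise "condition not met within #{timeout}s" if Time.now >= deadline
    sleep interval
  end
end
```

A step definition can then wrap any slow transition, for example `wait_until { driver.find_element(:name, 'Done') }`, instead of sprinkling fixed sleeps through the tests.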
None of Appium’s drawbacks have been a showstopper for our testing, and features that we’ve wanted have usually been added to Appium within a few months. A side benefit also appeared once we converted to Appium. Before, only experienced coders had the skills to write and maintain our Robotium and Instruments frameworks and test cases; now, with our Appium wrapper code foundation, we’ve been able to get more novice coders to add test cases – and our test case count has started to explode. Our test code is being maintained and we get many 100% pass test runs.
After working on Appium for 6 months I have one additional thing to be thankful for this Thanksgiving.
Sauce Labs has recently announced Appium support, which makes it easier to test mobile apps in the cloud. Appium is a mobile test automation framework for hybrid and native mobile apps. Cucumber is a behavior-driven development (BDD) tool used with different programming languages. The combination of Cucumber and Appium can be used to automate iOS apps in the cloud using Sauce Labs. This is a repost of the original post. In this post, we will see how to set up test automation of our iOS app in the cloud using Sauce Labs.
In order to get started, we need to have the initial setup handy. This includes the following:
Mac OS X 10.7.4 or higher, with Xcode and its command line tools installed.
Your app source code or a prebuilt .app bundle for your app. You can browse a wide range of open-source iOS apps.
A Sauce Labs username and API key. Sign up for a free Sauce Labs account.
A Ruby web development environment on Mac OS X, including Xcode, RVM, Homebrew, Git, and Ruby. Follow Moncef’s blog for setup instructions.
Once uploaded to Sauce Labs temporary storage, the app is available at “sauce-storage:PlainNote.zip”. Now we are all set to write tests for the application with Cucumber.
Setup Cucumber project
Now that we have uploaded our app to Sauce Labs temporary storage, we can set up a Cucumber project to talk to the mobile app in the cloud. I assume that you are familiar with BDD and the code structure of a Cucumber project. Please refer to my older post if you are not.
We need a Gemfile to specify all our dependencies:
$ mkdir sauce-cucumber-appium
$ cd sauce-cucumber-appium
$ rvm use 1.9.3
$ vim Gemfile
Now insert the following dependencies into the Gemfile
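The exact dependency list from the original post is not reproduced here; a plausible minimal Gemfile, inferred from the tools used later in this walkthrough (gem choices and the unpinned versions are our assumptions):

```ruby
source 'https://rubygems.org'

gem 'rest-client'         # upload the app to Sauce temporary storage
gem 'selenium-webdriver'  # JSON Wire Protocol client that drives Appium
gem 'cucumber'            # BDD runner for the feature files
gem 'rspec-expectations'  # provides the .should matchers used in the steps
```

Run `bundle install` afterwards to fetch the gems.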
The PlainNote app feature will look something like this
As iOS automation specialist
I want to setup iOS app automation in the cloud using Saucelabs, Appium and cucumber
Scenario: Add new Note using PlainNote App
Given I have App running with appium on Sauce
When click + button using sauce driver
And I enter text "Data" and saved it on sauce
Then I should see "Data" note added on home page in the sauce cloud
This feature is self-explanatory: we are going to add a new note and make sure it is displayed on the Home page.
Setup Cucumber Environment
Let’s create ‘features/support/env.rb’, where we can put our support code. We need to add the sauce_capabilities mentioned in the Sauce Labs Appium tutorial.
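The support code itself is not shown in this repost; a minimal sketch of what features/support/env.rb might contain (the platform/device values and test name are placeholders, and the endpoint format is Sauce’s standard ondemand URL):

```ruby
# features/support/env.rb (sketch)

# Desired capabilities for the PlainNote app uploaded to Sauce storage.
# Platform and device values are placeholders; adjust to your account.
def sauce_capabilities
  {
    'platform' => 'OS X 10.8',
    'device'   => 'iPhone Simulator',
    'app'      => 'sauce-storage:PlainNote.zip',
    'name'     => 'Cucumber + Appium on Sauce'
  }
end

# Sauce's standard remote WebDriver endpoint, with credentials embedded.
def sauce_endpoint(user, key)
  "http://#{user}:#{key}@ondemand.saucelabs.com:80/wd/hub"
end

# In the real env.rb, a Before hook creates the remote driver exposed to
# the steps as 'sauce' (credentials come from the environment):
#
#   require 'selenium-webdriver'
#   Before do
#     @sauce = Selenium::WebDriver.for(:remote,
#       url: sauce_endpoint(ENV['SAUCE_USERNAME'], ENV['SAUCE_ACCESS_KEY']),
#       desired_capabilities: sauce_capabilities)
#   end
#   After { @sauce.quit if @sauce }
#
#   def sauce
#     @sauce
#   end
```

Keeping the credentials in environment variables means the same project runs unchanged on a developer machine and in CI.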
Now that we have created a ‘sauce’ driver with all the required desired capabilities, we will use the ‘sauce’ object in our step definitions.
Write Step definitions using Selenium-Webdriver JSON Wire Protocol
At this point, if you run the ‘bundle exec cucumber’ command, it will tell you which steps are not implemented yet. We need to implement these step definitions using the Selenium-Webdriver JSON Wire Protocol for Appium. Now we will create a step definition file and implement it:
$ vim features/step_definitions/plain_note.rb
Now add these step definitions to the file.
Given(/^I have App running with appium on Sauce$/) do
  # the 'sauce' driver created in env.rb has already launched the app
end

When(/^click \+ button using sauce driver$/) do
  sauce.find_element(:name, "+").click # locator assumed; verify with Appium Inspector
end

When(/^I enter text "(.*?)" and saved it on sauce$/) do |data|
  sauce.find_element(:xpath, "//window/scrollview/textview").send_keys data
end

Then(/^I should see "(.*?)" note added on home page in the sauce cloud$/) do |text|
  note = sauce.find_element(:xpath, "//window/tableview/cell/text")
  note.attribute("value").should match text
end
Appium Inspector is a feature of the Appium OS X app which allows you to inspect elements in your mobile app. You can also record tests in different languages. Writing the Ruby code is easy if you have used Appium Inspector locally to record tests. Watch this video to learn how to use Appium Inspector.
Now we are all set to run ‘bundle exec cucumber’ to execute the tests on Sauce Labs.
Now you will see the tests running on Sauce Labs, and in your terminal you will see something like this:
As iOS automation specialist
I want to setup iOS app automation in the cloud using Saucelabs, Appium and cucumber
Scenario: Add new Note using PlainNote App # features/plain_note_sauce.feature:5
Given I have App running with appium on Sauce # features/step_definitions/plain_note_sauce.rb:1
When click + button using sauce driver # features/step_definitions/plain_note_sauce.rb:4
And I enter text "Data" and saved it on sauce # features/step_definitions/plain_note_sauce.rb:8
Then I should see "Data" note added on home page in the sauce cloud # features/step_definitions/plain_note_sauce.rb:14
1 scenario (1 passed)
4 steps (4 passed)
Savings.com is the leading coupon and deal site on the web. We were recently ranked by Inc Magazine as one of the fastest growing companies in the US, and our US and UK businesses have seen phenomenal growth every year. In the past 6 months alone, we have localized into another 7 countries, and we are adding new services and features all the time.
The Savings.com development team needed to implement new features and ship them to the production site as soon as they were completed and checked in. One or more features or bug fixes can define a new release to production. To accomplish this, we would have to implement continuous integration and deployment to our production sites multiple times per day. And in an agile release cycle, there is only time to manually test the new features being rolled out in each release; all regression testing of existing features would have to be automated using Selenium.
We had already been running automated browser tests on our own servers and virtual machines using an in-house Selenium Grid implementation for a few months. This was working “somewhat ok” when we were on a less agile development release cycle (about every 3 weeks). But when we adopted continuous deployment, many pain points that already existed with our locally maintained Selenium implementation became all the more painful:
- We weren’t testing all the major browsers. Especially with IE, you need to test every version your site supports.
- Due to our limited hardware availability, maximum test concurrency could not be achieved. As a result, running all the Selenium test suites would take up to 1 hour; double that if the tests needed to be rerun due to a regression test failure.
- It was difficult to debug and fix flaky tests that needed to be re-written due to unidentified race conditions. Selenium WebDriver didn’t have a nice command logging interface out of the box. Also, screenshots were only being taken on a test failure, which sometimes missed a key step that would help identify a problem.
- An aborted or canceled test run on our own Selenium installation usually required some manual intervention to close running browsers and make sure the selenium grid pool was idle.
- The maintenance overhead to keep up to date with the latest OS/browser combinations as well as maintain the selenium grid was taking too much time.
I initially came across Sauce Labs via their blog while searching for Selenium tips and coding best practices. While looking at all the services and support they offered, I thought I would give them a try. I quickly realized Sauce Labs had already identified and resolved all of the pain points listed above:
- An exhaustive list of OS/browser combinations to test on that is updated when new releases are out.
- We could finally achieve maximum test concurrency and reduce the total test run time from 1 hour to 10 minutes. This scalability is crucial to a continuous deployment release cycle.
- Plenty of support for test debugging: Sauce Breakpoints, detailed command logging, video recording and screenshots at every step of your test.
- Sauce Labs deploys a clean virtual machine for every test.
- We no longer have flaky tests. A failed test run is either a bug or an unidentified change to an existing feature that requires a test to be updated.
- Also, excellent support for running in a CI environment (ours is Atlassian Bamboo, with Selenium/Java) using Sauce Connect.
Sauce Labs has really become an important component to our success in adopting a continuous integration and deployment release cycle. I am currently working on implementing additional features they have to offer such as the Sauce Rest API and the Bamboo OnDemand Sauce plugin.
This post comes from our friends at Gilt, who are using Appium to automate their mobile testing. Check out the original post on the Gilt Tech blog!
Just a few years ago, mobile purchases made barely a dent in Gilt’s revenues. Today, mobile represents more than 40 percent of our sales, and will soon reach 50 percent. With so many of our customers interacting with us through their mobile devices, it’s imperative that we offer them a stable and enjoyable shopping experience. One bad bug can drive away customers for good.
Given the increasing importance of mobile to our business, and therefore the need to expand the number of teams that can contribute to our mobile applications, the Gilt mobile team has been hard at work improving and streamlining our testing processes. This post will describe our automated testing efforts, the technologies we use, and what lies on the horizon—both for us, and for the automation tools we use.
Testing at Gilt: A Brief Overview
In the early days of Gilt Mobile, none of our testing was automated. This was workable at the time because only one team—the mobile team—worked on the applications. We followed a fairly simple development cycle, as follows:
Our crew of mobile engineers would implement a series of new features, as determined by product management.
Known issues would be prioritized by severity, and fixed accordingly.
We would internally release versions of the application for testing purposes on a regular basis.
An overseas team of testers performed high level feature testing, regression testing, and other testing not covered by engineering during development.
The QA team would be responsible for testing new features, stress-testing the app in an effort to find new issues, and performing a suite of sanity tests to make sure that basic app functionality remained intact.
Once QA gave the go-ahead, we’d sign the build appropriately and submit to the iOS App Store.
As mobile has become more critical to the business and more teams have started to contribute to our mobile applications, QA has become increasingly important—particularly in iOS, where we concentrate most of our development efforts. With an increasing install base, more contributors, and more features, comes increased complexity. Ensuring that the app is issue-free when we submit to the App Store for approval has become more important than ever.
Understandably, our performance on mobile has generated a lot of excitement within Gilt. This has led to an increased emphasis on mobile development within Gilt tech in general. We’re starting to see increased involvement from engineers on other teams, and having a loosely defined development process makes it difficult for newcomers to get up to speed and contribute to our efforts. How can we help them? With a well-defined process and a dash of automation!
Our Development Process Today
We’re gradually migrating to more of a test-driven workflow. We still depend very heavily on manual QA, but this is now supplemented by a suite of automated tests. Our development process is increasingly starting to look like this:
Developers are encouraged to write tests for their current features and fixes.
Instead of hand-building test releases at random, we now have Jenkins performing nightly builds.
Builds are followed by a run of functional tests.
Generated test reports are delivered to all team members, and can give detailed information on exactly where and how a failure occurred.
The intent is to free up QA from having to do repetitive and time-consuming sanity tests, which allows them to focus on testing new features and finding issues before our users do.
The automated test framework we started with was KIF: Keep It Functional. Maintained by Square, KIF is quite mature. Using KIF was something of a proof of concept for us—more of a first step toward getting our automated testing situation under control. As such, we didn’t go through the exercise of writing an entire sanity test suite, but instead produced a couple short tests of basic functionality.
What we like about KIF: Tests run in the same process as the app, and so have access to notifications. This is pretty handy when testing asynchronous parts of your app. What we don’t like: it’s heavy on private and undocumented accessibility APIs. There may be some disagreement on how big a deal this is, but Apple’s under no obligation to keep these APIs consistent, and can do away with KIF dependencies without notifying anyone—which would make things pretty difficult to fix. Setting up KIF with Continuous Integration—while not impossible—could be easier.
Lately we’ve been trying out Appium, a tool we started exploring after one of our engineers learned about it at this year’s Selenium conference in Boston. Appium is built on top of UIAutomation: a framework, provided by Apple as part of Instruments, that enables you to interact with apps programmatically. We’ve used UIAutomation quite a bit in test prototyping and debugging.
So, Appium doesn’t use any private APIs or resort to any cloak-and-dagger hackery to get the job done. Great! But that’s only the start of how Appium captured our attention. Appium is built on the idea that testing native apps shouldn’t require including SDKs or recompiling your app. You should be able to use your preferred test practices, frameworks, and tools.
Appium is able to achieve all this by implementing a large portion of the Selenium JSON wire protocol, and essentially translating these calls into sets of native framework commands—UIAutomation and uiautomator, for iOS and Android respectively. It’s really this aspect alone that has us hooked. Our Web team has been down this path before, and has already built out a testing infrastructure revolving around Selenium, Scala, and ScalaTest. Using Appium has allowed us to take advantage of large chunks of our preexisting work. No reinventing the wheel, and no learning the hard way. This also provides us with a nice entry point for other Gilt engineers interested in working on mobile.
While it didn’t take long for us to get up and running with Appium, I still can’t say that it suits the needs of everyone out there building apps. Smaller teams with no Selenium experience or existing infrastructure might feel a little more comfortable sticking with something like KIF or Calabash.
Can we do better? (Always)
Like everything else, Appium isn’t perfect. Areas where Appium could benefit from significant improvement:
For us, the Appium XPath engine is quite limited, and can only evaluate simple XPath expressions. It might be nice to see Appium use something like Cameron McCormack/Yaron Naveh’s XPath parsing package for node.
Another, more minor gripe is that tag names used on the web don’t correspond with their mobile equivalents in Appium. For example, a text field on the web has the tag name “input.” Appium calls these tags either “textfield” or “UIATextfield,” which brings us to another issue…
I say “either,” because doing something like driver.findElementByTagName(“textfield”).getTagName() returns “UITextField.” Nothing shocking here, but perhaps the tag name we search for and the tag name returned should be the same thing?
The good news is that development activity on Appium is really high, and its (notably friendly) community is rapidly addressing its shortcomings. You developers out there who are looking for projects can always get involved in fixing some of this stuff. Some of us on Gilt’s mobile team have recently put some fixes into Appium.
At Gilt, we’re trying to create a culture that promotes a proactive approach to testing. For now we’re focused on taking a load off of the QA team by automating UI testing. Gradually we’ll move on to integration testing. Long-term, we’d like to adopt a TDD-centric workflow, with developers creating tests from the outset, and taking responsibility for test maintenance.
While frameworks like OCUnit and UIAutomation are relatively well documented, it doesn’t seem like any heavy emphasis has been placed on testing as part of the development cycle. The tools are provided, but not evangelized. Fortunately, this is changing. Xcode 5 will feature some terrific test- and automation-centric enhancements such as XCTest and the Bots continuous integration system. KIF is revamping its API with KIF-Next to get in line with Xcode 5 and take advantage of its new features. And Selenium 3, which is in the spec stages, appears set to become a tool for user-focused automation of mobile and web apps. All round, the future for automating and testing native mobile apps is looking brighter.
Unmesh Gundecha, tester extraordinaire and author of the Selenium Testing Tools Cookbook, wrote an awesome post about Appium earlier this year, and he kindly agreed to write us an extremely detailed and in-depth series of posts about iOS testing with Appium and Sauce. We’ll be posting this 3-part series every Wednesday, so make sure to check back each week to read more!
Agile development projects are adopting practices such as automated acceptance testing, BDD, and continuous integration. A lot has been written about these approaches to developing software, their benefits, their limitations, and so on. However, one of the fundamental benefits these practices offer is enhanced communication between project stakeholders, including users, the product owner, and the development team. They require all the project participants to come together to discuss and elicit the behavior of the application, in an agreed-upon format of features, user stories, and acceptance criteria, with a shared definition of Done.
Cucumber-JVM is a pure Java implementation of the original Cucumber BDD/ATDD framework. It supports the most popular programming languages on the JVM, and it has already been used by various teams along with Selenium WebDriver for testing web applications. Cucumber supports creating feature files, which are written in a ubiquitous language understood by the whole team. These feature files describe the expected behavior of the application and are run as tests against the application. Cucumber can be used for API, integration, and functional testing of the application.
In this example we will see functional testing of a sample native iOS App using Cucumber-JVM and Appium.
The Sample App
This example is based on a sample BMI Calculator application which is used by Health & Nutrition specialists to calculate the Body Mass Index of patients by submitting Height and Weight values to the App.
Let’s work on the main feature of this App, as described below:
Feature: Calculating Body Mass Index
As a health specialist
I want a BMI Calculator
So that I can calculate patient’s Body Mass Index
Setting up the test project
Let’s set up a new project in IntelliJ IDEA using Maven & Cucumber-JVM, with the following steps:
Create a new project as a Maven module, providing appropriate values for GroupId and ArtifactId. In this example, GroupId is set to org.bmicalc.test and ArtifactId to bmicalculator.test.
Once IntelliJ IDEA creates the project with the appropriate folder structure, locate and modify the pom.xml file, adding the highlighted dependencies.
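The highlighted dependencies themselves are not reproduced in this repost; a plausible pom.xml fragment for the Cucumber-JVM of that era (groupId info.cukes) and Selenium, with illustrative version numbers:

```xml
<dependencies>
  <!-- Versions are illustrative; pin whatever your project requires -->
  <dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>1.1.5</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>1.1.5</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>2.35.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```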
This will add the Cucumber-JVM and Selenium dependencies to the project.
Writing Feature file
In Cucumber-JVM, specifications or requirements are expressed in a plain-text, Given/When/Then style of syntax known as the Gherkin language (https://github.com/cucumber/cucumber/wiki/Gherkin), which is understood by the whole team. So let’s create a feature file for the above feature in the project.
Add a new package bmicalculator.test under src/test/resources, and add a new file named bmi_calculator.feature.
Add the following feature to bmi_calculator.feature file
Feature: Calculating Body Mass Index
As a health specialist
I want a BMI Calculator
So that I can calculate patient’s Body Mass Index
Scenario Outline: Calculate Body Mass Index
Given I enter "<Height>" as height
And I enter "<Weight>" as weight
And I press the Calculate button
Then I should see "<BMI>" as bmi and "<Category>" as category

Examples:
|Height |Weight |BMI   |Category   |
|170    |50     |17.30 |Underweight|
|181    |80     |24.42 |Normal     |
|180    |90     |27.78 |Overweight |
|175    |100    |32.65 |Obese      |
Every feature file contains a single feature. A feature usually contains a list of scenarios. Every scenario consists of a list of steps, which must start with one of the keywords Given, When, Then, But or And. Scenarios express the expected behavior of the system under given conditions.
In addition to scenarios, a feature may contain a background, a scenario outline, and examples. Our example uses a scenario outline, with examples covering a BMI calculation for each category.
Scenario outlines allow us to express these examples more concisely through a template with placeholders. Here, Calculate Body Mass Index is run once for each row in the Examples section beneath it (not counting the header row). This is similar to data-driven testing.
Next week, we’ll see how to run features with Maven, and enter step definitions.
Today’s guest post comes from Matthew Edwards, who wrote a bit about how he uses Sauce and Appium to run his mobile tests in the cloud. Matthew leads the mobile automation team at Aquent. If you’re interested in automated testing, they’re hiring.
At GTAC 2013, Google presented how they’re “catching 99% of all bugs” using rooted x86 emulator images with hardware acceleration. The emulators run in a datacenter, and Google completed “82 million Android tests in March.” A person in the audience asked: how can I use this? The response was that if you’re at Google, you can use it. If you’re not, “well, that sucks.”
I wanted a solution I could use that didn’t involve working for Google. Almost all the Google testing tools are open source with the exception of Espresso. In addition, I test on iOS and I’d like to apply the same methodology of testing in data centers to that platform. Each test can run on both platforms by using Appium due to the standardised WebDriver protocol. Jonathan demonstrated this in his GTAC 2013 presentation using the app I work on (Woven).
There are three key areas of the automation strategy. First, having a properly configured emulator and test runner. This is important for capturing all logging information and measuring flakiness. Google was able to achieve 0.15% flaky tests. In my testing with Calabash Android, I encountered the same flakiness issue Google mentioned in the presentation about Robotium. Fortunately Appium allows using uiautomator which is Google’s newest testing technology and it’s worked well.
Second is running in the cloud at scale. I’ve maintained a small internal physical device cloud and emulator cloud. It’s not fun. I’d much rather pay a service provider to take care of scaling for me. Sauce Labs offers Android emulators and iOS simulators as a service. I have been running Woven iOS and Android tests on Sauce since it was announced. It’s the most effective service I’ve used by far. Most providers focus exclusively on physical devices because they can charge outrageous amounts of money. It’s also the worst way to test. As Google says “We’re not saying there is no place for device-specific testing at all. There is still a place for it. But, you know, you need to do it when you have done everything else.”
The third is testing technology. If Google releases Espresso as open source, then I’ll be adding support to Appium. The existing instrumentation based testing technologies have significant issues with flakiness as mentioned in the presentation. Standardising on the WebDriver protocol enables swapping out the underlying testing technology without having to rewrite tests. This enables Appium to use best of breed testing technology from Apple (UI Automation) and Google (UI Automator) while retaining the flexibility to add new backends in the future.
For testing Woven, I use Ruby on OS X with the gems appium_lib and appium_console. The appium_lib tests for iOS and Android are very similar to the production tests I write for Woven. First, I’ll open up a console using arc to interactively write a test. Then I’ll run the test locally using Rake. Next, I’ll commit the change to GitHub. Finally, Jenkins will build the latest version of the application from source, run the tests on Sauce Labs, and email me if there are any problems.
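The local Rake step above can be sketched roughly as one task per platform, so the same specs run for iOS or Android depending on which task is invoked. This is a standalone sketch under stated assumptions, not Woven’s actual Rakefile: the task names, spec file path, and the `APPIUM_PLATFORM` environment variable are all illustrative, and in a real Rakefile the require/extend lines are unnecessary.

```ruby
# Hypothetical Rakefile sketch: one task per platform, so the same
# spec file can be run locally against iOS or Android. Names here
# are illustrative, not Woven's real layout.
require 'rake'
extend Rake::DSL

%w[ios android].each do |platform|
  desc "Run the #{platform} suite locally"
  task platform do
    # The platform name tells the specs which Appium capabilities
    # to load before the suite starts.
    ENV['APPIUM_PLATFORM'] = platform
    ruby 'spec/app_spec.rb'
  end
end
```

With tasks like these, `rake ios` and `rake android` exercise the same spec file, and a CI job such as the Jenkins build described above can simply invoke the appropriate task.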
I hope you’ll watch the GTAC presentations, try out Appium, and start using emulators and simulators in your mobile testing strategy.
Every Web app developer feels the pressure to keep up with the constant sea of change that surrounds the Web platform today. Every six weeks, someone releases a new browser version. Every six months, someone releases a new Web-enabled phone, tablet, TV, or toaster. Without exceptional testing tools, properly supporting all the platforms and devices that people use today is impossible. As an open-source software company that works closely with a wide variety of companies, it was extremely important for us at SitePen to be able to provide consistently high-quality, well-tested code.
Sauce Labs was a natural fit for this project from the start for many reasons. Everyone at SitePen is extremely dedicated to the open Web, and Sauce Labs was the only cloud testing company we found that used the W3C’s WebDriver standard instead of relying on proprietary systems and protocols. By fully embracing open Web technology, Intern and Sauce Labs offer long-term interoperability and flexibility that other testing tools and cloud testing providers simply do not. Sauce is also, to our knowledge, the first and only cloud testing provider to publicly offer free accounts to open-source projects like Intern, dgrid, and the Dojo Toolkit (thanks!).
From a logistical perspective, Sauce Labs gives us access to a complete & reliable server farm for testing that we would otherwise need to buy, configure, and maintain ourselves. Bonus Sauce features like automatic video and WebDriver command log capturing have made it incredibly easy for us to identify and reproduce test failures after-the-fact, and the ability to watch video and break into live tests has saved time and reduced confusion on numerous occasions. The few times we’ve had an integration problem, the Sauce support team has been incredibly responsive and helpful.
Since its public release, the level of interest and feedback we’ve received about Intern has been amazingly positive. People have told us that getting up and running using Intern with Sauce is just as easy as we’d hoped it would be. Our developers’ level of happiness writing tests has gone way up, our clients are extremely satisfied with the quality and ease with which they can maintain the code we deliver to them, and we’ve saved lots of time and money that would have been spent on manual quality assurance testing and re-testing. Intern with Sauce is now an integral part of our successful development strategy, and we’re delighted to be able to share it openly so you can also “make the Intern do the testing”.
Our friends at The Able Few have been working on an exciting project, and they wanted to share how they’ve been using Sauce to test the product they are developing in partnership with Click with Me Now. Read on to hear more about how they integrated Sauce into their development process, and the open source GruntJS tool they built to help them use Sauce!
For some time now, we at The Able Few, a St. Louis and Indianapolis based product and software development company, have been developing an application called Click With Me Now. CWMN is a no-download co-browsing solution that allows users to share a browsing session with others in a single click.
In the early stages of development, we built a series of prototypes to serve as a proof of concept for the application when demoed in a controlled environment. We focused most of our initial efforts on Chrome/WebKit, obviously, which allowed us to cover an impressive amount of ground in a short time. When it came time to start the full build of the application, however, we had to start backfilling support for older browsers and make sure that this didn’t impair our existing codebase or slow testing to a complete halt.
After weeks devoted to countless browser compatibility issues, mostly in IE, and many profanity-laced insults hurled at the computer gods, we had a working prototype that functioned in at least the latest version of every browser. Of course, that wasn’t good enough: going forward, we would need to be able to test the app in every other browser known to man.
We started writing out some Selenium and Capybara tests. This allowed us to do things like disable Websockets and Flash, in order to make sure that the application didn’t crash and burn, which we could test in Chrome, Firefox, and Safari without a hitch — but not IE! Also, what about mobile? Oh, and what would happen if the Host was using an old version of Firefox and the Guest was using a Webkit nightly? As these questions began to pile up, our aspirations of adequately testing our application began to sour. We played around with VirtualBox VMs, but it quickly became apparent that the number of OS and browser variants we needed would become a nightmare to manage, not to mention the licensing costs. We also needed to think about mobile devices, older versions of OSX, Linux, and myriad other combinations that we had yet to consider.
It was a lot to deal with and we certainly felt the pressure of needing to accomplish this in a timely manner. Then by a stroke of luck we came across Sauce Labs.
In CWMN, a Host shares a browsing session and one or more Guests join it; the Guest, in general, has a lower set of requirements. The relationship between the Host and its Guests makes testing this application a real chore, with significantly more points of failure than a typical web application.
Since we already had a good amount of test coverage in Selenium using Ruby, the effort required to integrate our existing tests with Sauce Labs was pretty minimal. Obviously, it defeats the purpose if all the testing isn’t automated, so we made it possible to pass arguments to our tests and instead of just running `ruby spec/test.rb` we are running `ruby spec/test.rb windows8firefox17`, and so on.
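We haven’t seen The Able Few’s actual harness, but the argument-passing approach can be sketched roughly like this; the lookup table, key names, and helper are illustrative, not CWMN’s real configuration:

```ruby
# Hypothetical sketch of selecting Sauce Labs desired capabilities
# from a command-line key such as "windows8firefox17". The table
# and helper name are illustrative.
BROWSERS = {
  'windows8firefox17' => { 'platform' => 'Windows 8', 'browserName' => 'firefox', 'version' => '17' },
  'windows7ie9'       => { 'platform' => 'Windows 7', 'browserName' => 'internet explorer', 'version' => '9' },
}.freeze

def sauce_caps(key)
  BROWSERS.fetch(key) { raise ArgumentError, "unknown browser key: #{key}" }
end

# Invoked as: ruby spec/test.rb windows8firefox17
# so the spec might begin with:
#   caps = sauce_caps(ARGV.fetch(0))
# and hand those caps to the remote Selenium driver pointed at
# ondemand.saucelabs.com.
```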
To further automate this, we created a JSON file that contained all of the browser/OS combinations that we wanted to test, along with the data that the Sauce Labs API needed. Of course, a couple of weeks later, we found out that Sauce Labs already has an up-to-date list available — one day we may update our code to use their version, but for now, our method does what we need it to do.
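The JSON file itself might look something like the following sketch; the field names wrapping each entry are illustrative, though `platform`, `browserName`, and `version` are the standard Sauce Labs capability names:

```json
[
  { "name": "windows8firefox17",
    "capabilities": { "platform": "Windows 8", "browserName": "firefox", "version": "17" } },
  { "name": "windows7ie9",
    "capabilities": { "platform": "Windows 7", "browserName": "internet explorer", "version": "9" } }
]
```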
By updating `BrowserList.json` to point to your local JSON file from above, you can test this code out too, as long as your environment is set up and ready to run Sauce Labs tests with Selenium and Capybara. A couple of updates to your Gemfile and a quick read through Sauce Labs’ ‘Getting Started’ guide in their documentation should get you started.
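Assuming a standard Bundler setup, the Gemfile additions amount to something like this (the exact gem list depends on your project; the `sauce` gem is Sauce Labs’ Ruby integration):

```ruby
# Illustrative Gemfile additions for running Capybara/Selenium
# specs against Sauce Labs; pin versions as appropriate.
gem 'selenium-webdriver'
gem 'capybara'
gem 'sauce'      # the Sauce Labs Ruby gem
gem 'rspec'
```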
Currently, there is no official way to use Grunt with Sauce Labs, so we decided to build our own solution and open source it. Qettlhup, Klingon for “sauce”, is an automated tool built on GruntJS that facilitates testing browsers in Sauce Labs using a JSON object.
Once you’re familiar with GruntJS — which you should be, because it is awesome — you will quickly see how it works. You simply pass it the language of the tests that you want to run, the path to the file that contains the tests, and the JSON file that lists all of the browsers that you want to run the tests against. In the case of CWMN we have two sets of tests, one for the Host and the other for the Guest, which both have their own requirements, so we pass Qettlhup another task in the same object and we’re ready to go. You can see examples of setting up single and multiple tasks on the Qettlhup README.
Now, we just type a single command — `grunt qettlhup` — and away it goes, running all of our tests for us and giving feedback on each set of tests as they run. The process will stop as soon as it hits an error, which gives us the opportunity to fix the problem and to test that fix against a specific browser outside of Qettlhup. Once the error has been resolved, we can go back and run our battery of tests again.
All of that said, even with our huge library of tests, custom GruntJS automation tool, and knowledge of Selenium and Capybara, none of this would have been possible without Sauce Labs. It is a utility that has saved us countless hours of manual processing and testing and has provided us with fantastic reporting and feedback. Each test even includes archived image and video references. There is no telling where we would be if Sauce Labs wasn’t around! We couldn’t recommend their testing platform more, and we plan to use it on every build we do in the future.
The following is a post by Dylan Lacey. Kinda. He’s chosen to do it in ~~interpretive dance~~ easily digestible image format.
This is San Francisco.
I was recently there…
…because I am the new Ruby Developer Evangelist at Sauce Labs.
My job involves taking a fair few of these;
And sharing a lot of these. It’s a burden.
I’m here to help get Ruby developers up and running, helping them to Drink the Sauce.
It’s SO awesome.
I’m insanely thrilled to be working with such smart people on an amazing product!
My bailiwick is to make it better for Ruby developers to use Sauce Labs’ stuff, including improving the gems, writing better documentation and building the community. Plus, it’s a unique opportunity to use my personality to insult people all over the world! If you want to offend your manager, disrupt your office and get kicked out of your favorite bar for getting shouty about whether RSpec is better than Test::Unit, let me know. And if you want help with the Sauce Gem or tests from Ruby land, hit me up.