Announcing Support for Mac OS X Yosemite

November 7th, 2014 by lauren nguyen

We’ve just released support for Mac OS X Yosemite on the Sauce cloud. Browsers supported include the latest two versions of Firefox and Chrome, Safari 8.0, and iOS 7.0, 7.1, and 8.0. Yosemite sees Apple bringing OS X closer to iOS, with features like Handoff to switch between devices and the ability to initiate Instant Hotspots from your iPhone. As always, we’ll continue to add platform support so you can ensure your app works for all your users.

Open Source Stories: The Selenium Project & Sauce

March 21st, 2014 by lauren nguyen

We LOVE open source. So much so that we created Open Sauce, to give open source projects the ability to test for free on our cloud. And in just a year, we now have over 800 OSS projects testing on Sauce! To celebrate, we’ll be exploring some of the different projects that test on Sauce and what makes them awesome.

Naturally, the first project we wanted to cover was The Selenium Project. You know you can run your Selenium tests on the Sauce cloud, but beyond that, the Selenium Project and Sauce have a long history of working together, and they were one of the first OSS projects to ever test on Sauce. So where did it all begin?

The history

Years ago, the Selenium Project was using Bamboo for CI on one machine, with a VM for Windows and a VM for Mac. The project wanted to move toward releasing weekly updates to Selenium. For them, it meant that they could be more iterative and only needed to fix a week’s worth of problems at a time, rather than months’ worth of issues. But before being able to release, the project had to vet each release against different browser/OS versions, and maintaining all the supported browser/OS combinations for testing was a pain. A classic problem if there ever was one! The project also wanted to get a better CI process into place and integrate reliable tests into their process.

They began by switching from Bamboo to Jenkins with Google Compute Engine to see what they could get running. Then, they worked to get their tests running on Sauce. On the Sauce side, Santi made some modifications to the Sauce cloud that would allow the Selenium Project to test a custom version of Selenium, since they were testing unreleased versions. The project modified their tests to be able to run in parallel. And, with that, the Selenium Project had instant access to all the browser/OS versions they needed, whenever they needed them. Magic!

Hooked on Sauce

Fast forward to today, and the Selenium Project now tests about 30 different browser/OS configurations in parallel with every commit. According to the project, testing that took a week now takes an hour, and weekly releases have become easy. They also run JS unit tests on Sauce, and have much less manual testing to do now that much of the testing is automated. The ability to consistently run tests is the greatest benefit Sauce has given the project, as well as having access to log files and videos that are shareable to different contributors. As with most people, they deal with some flaky tests, but rarely have a Sauce-related issue.

Selenium wisdom

Even if you’re the Selenium Project, you can almost always improve the way you’re testing, so we asked the project to give us tips on automated testing. Here’s what they had to say:

  • Try to keep your testing hermetic – for example, have a bunch of test pages, and serve them individually for every test run that happens
  • Don’t rely on other services for your tests. The Facebook API or Twitter API may be down, which can cause unnecessary failures and cause you to lose the connection between the code you’re testing and what you see.
  • Be smart and think through the browsers you test on. Have the right conversations about what needs to be tested.
  • Don’t test on every browser, only the necessary ones. The Selenium Project only tests on 1 version of Chrome; running in 10 different versions is wasteful. Tests will always have some amount of flakiness, and if you test unneeded browsers you might get a lot more failures.
  • Do test functionally different browser versions. The project tests both IE 8 and 9 because there are major differences between them.
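The tip about not relying on outside services can be made concrete with a small dependency-injection sketch. Everything named here (renderGreeting, stubFetchProfile, the canned profile data) is a hypothetical illustration, not code from the Selenium Project’s suite:

```javascript
// Production code normally fetches a profile from a live service.
// In tests we inject a stub instead, so the suite never depends on
// that service being up, and a failure always points at our code.

function renderGreeting(fetchProfile, userId) {
  const profile = fetchProfile(userId); // injected dependency
  return `Hello, ${profile.name}!`;
}

// Stub used during test runs: always returns the same canned data.
function stubFetchProfile(userId) {
  return { id: userId, name: "Test User" };
}

console.log(renderGreeting(stubFetchProfile, 42)); // "Hello, Test User!"
```

Because the dependency is passed in rather than hard-coded, the same renderGreeting runs unchanged against the real service in production and against the stub in tests.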

Finally, try not to need Selenium tests. Sounds crazy coming from the Selenium Project, doesn’t it? But the key here is that the more you try to be efficient, economical, and thoughtful about your tests, the better off your remaining tests will be. You’ll probably never be able to not test at all, but it’s a good exercise in restraint. Make sure there’s good reasoning behind every test in your suite. Otherwise, it’s easy for your test suite to balloon until it’s unmanageable.

Want to get involved with the Selenium Project?

According to the project, the best way to get involved is to find something you want to fix. There are hundreds of open issues; find one that gets you fired up. Another great way to get involved is to look through the wiki and the how-to-build documentation, then write a small failing unit test for the thing you want to improve. That’ll motivate people to get involved. Finally, hang out in the IRC channel: #selenium.

What’s next for the project?

There are some great things for the Selenium Project on the horizon, including the release of Selenium 3, the RC APIs being ripped out, and a W3C spec. Learn more on Selenium HQ.

JavaScript Unit Testing API Revamped

February 19th, 2014 by Jonah Stiennon

Sauce provides a shortcut API for running JavaScript unit tests (like Jasmine, QUnit, Mocha, and YUI Test) on all browsers using our cloud.

The old way of doing things:

Before the unit test API was added, running frontend JavaScript tests using Selenium was pretty messy. One had to point the Selenium browser at the page which ran and reported the tests, then inspect DOM elements on the page looking for the test results.

DOM unit testing

See how we’re getting the number of passing tests from the element with id="mocha-stats"?

This was pretty dependent on the styling of the page, and can be an intensive amount of logic to put into a Selenium test, especially if you want to parse all the individual assertions.
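To make the old scraping approach concrete: after fetching the text of Mocha’s stats element, a test still had to parse counts out of a display string. A sketch of that parsing step (the statsText argument stands in for whatever the Selenium test scraped from the element; the function name is illustrative):

```javascript
// Parse pass/fail counts out of the text of Mocha's "#mocha-stats"
// element, the way a Selenium test had to before the unit-test API.
function parseMochaStats(statsText) {
  const passes = Number((statsText.match(/passes:\s*(\d+)/) || [])[1] || 0);
  const failures = Number((statsText.match(/failures:\s*(\d+)/) || [])[1] || 0);
  return { passes, failures };
}

console.log(parseMochaStats("passes: 12 failures: 3 duration: 0.4s"));
// { passes: 12, failures: 3 }
```

Note how brittle this is: any change to the reporter’s markup or text format silently breaks the parsing.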

The new way of doing things:

Now you can let Sauce Labs take care of all the tedium!

Instead of setting up a Webdriver and sending Selenium commands to our servers, just fire off a single HTTP request:

curl -X POST https://saucelabs.com/rest/v1/$SAUCE_USERNAME/js-tests \
     -u $SAUCE_USERNAME:$SAUCE_ACCESS_KEY -H 'Content-Type: application/json' \
     --data '{
        "platforms": [["Windows 7", "firefox", "20"],
                      ["Linux", "googlechrome", ""]],
        "url": "",
        "framework": "jasmine"}'

The Sauce servers point a browser at the test page and get the results. We parse the results depending on the framework you’re using and display them in a friendly manner on the Job Details page.

Failing Mocha and QUnit test reports on Sauce Labs

We now report the specific tests which fail; no more hunting through screenshots and videos.

The bad news:

Sauce doesn’t inspect DOM elements to get your test results; it’s much more robust. Buuuuuuut, it relies on you making the test results available in the global scope of the JavaScript on the page. Once you add the code appropriate for your framework, our servers gather the data, parse it, and display it.

An extra feature we get from this is support for an arbitrary “custom” unit test report. If you set `window.global_test_results` to an object that looks like this:

  {
    "passed": 4,
    "failed": 0,
    "total": 4,
    "duration": 4321,
    "tests": [
      {
        "name": "foo test",
        "result": true,
        "message": "so foo",
        "duration": 4000
      },
      {
        "name": "bar test",
        "result": true,
        "message": "passed the bar exam",
        "duration": 300
      },
      {
        "name": "baz test",
        "result": true,
        "message": "passed",
        "duration": 20
      },
      {
        "name": "qux test",
        "result": true,
        "message": "past",
        "duration": 1
      }
    ]
  }
We’ll display the results and report the test status automatically.
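A hand-rolled runner could aggregate its outcomes into that shape before exposing them globally. A sketch (only the report shape comes from the API above; buildReport, the outcome fields, and the sample data are hypothetical):

```javascript
// Aggregate outcomes from an in-page test runner into the custom
// report shape that Sauce reads from the page's global scope.
function buildReport(outcomes) {
  const tests = outcomes.map(o => ({
    name: o.name,
    result: o.passed,
    message: o.message,
    duration: o.ms
  }));
  return {
    passed: tests.filter(t => t.result).length,
    failed: tests.filter(t => !t.result).length,
    total: tests.length,
    duration: tests.reduce((sum, t) => sum + t.duration, 0),
    tests: tests
  };
}

// In the browser you would then expose it for Sauce to pick up:
//   window.global_test_results = buildReport(outcomes);
const report = buildReport([
  { name: "foo test", passed: true, message: "so foo", ms: 4000 },
  { name: "bar test", passed: false, message: "boom", ms: 300 }
]);
console.log(report.passed, report.failed, report.total); // 1 1 2
```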


Enjoy the new reporting! If this gets enough use, we can expand support to more frameworks and see if we can inject the reporting code into test pages when we test them, lessening the work for the developer.

Remote file uploads with Selenium & PHP

January 8th, 2014 by Isaac Murchie

When testing file uploading functionality we have to consider two scenarios: running browsers locally and running them remotely. Locally there is no problem: test browsers have access to the files the tests specify. When running the same tests remotely, with the browser running on a different machine than the test programs, we run into a significant problem: the files we specify to upload are not available to the remote browser!

So, simply sending the name of the file to upload will not work:

    public function testFileUpload() {
        $filebox = $this->byName('myfile');
        $this->sendKeys($filebox, "./kirk.jpg");

        $this->assertTextPresent("kirk.jpg (image/jpeg)");
    }

The remote browser, lacking the file, will not be able to do whatever the page does with it (in this case, printing out the file name and content type).

Selenium 2 solves this problem by providing a method to upload the file to the server and then giving the page the remote path to the file, and it does this transparently. For local tests you just use the sendKeys method to enter text into the file upload form element. For remote tests you do the same, but while setting up the tests you call the fileDetector method in order to let the WebDriver system know that a local file is being uploaded rather than just the name of the file.

While all the Selenium WebDriver language bindings developed by the Selenium project have this functionality exposed (for Java and Ruby, see this tutorial; for more information on Ruby, see this discussion), the PHP bindings do not. In order to facilitate remote testing in the Sauce Labs Selenium 2 Cloud, this functionality has been added to Sausage, allowing you to use PHPUnit to run your tests both locally and in the Sauce Cloud.

The crux of the solution is to give the system a way to know that a file might be uploaded, and how to discern this, by passing it a function that will be called when trying to send keys to form elements. This function takes a string and returns something truthy if the string should be interpreted as the name of a file, however that is determined on your system. The most basic form simply tests for the existence of the string as a file on the local file system. This function is then sent to the test library through a call to fileDetector:

    $this->fileDetector(function($filename) {
        if (file_exists($filename)) {
            return $filename;
        } else {
            return NULL;
        }
    });
If, when sending a string to a form element, the supplied function returns something truthy, the system will, before changing the value of the form element, transparently read the file, encode it in Base64, and send it to the Selenium server. It then puts the remote path, rather than the local one, into the file upload form element. This ensures the file contents are available to the remote browser just as if it were running locally, allowing tests to proceed without trouble!

A full example (also available on GitHub):

  require_once 'vendor/autoload.php';

  class WebDriverDemo extends Sauce\Sausage\WebDriverTestCase {

      public static $browsers = array(
          // run FF15 on Windows 8 on Sauce
          array(
              'browserName' => 'firefox',
              'desiredCapabilities' => array(
                  'version' => '15',
                  'platform' => 'Windows 2012',
              ),
          ),
      );

      public function setUpPage() {
          // set the method which knows if this is a file we're trying to upload
          $this->fileDetector(function($filename) {
              if (file_exists($filename)) {
                  return $filename;
              } else {
                  return NULL;
              }
          });
      }

      public function testFileUpload() {
          $filebox = $this->byName('myfile');
          $this->sendKeys($filebox, "./kirk.jpg");

          $this->assertTextPresent("kirk.jpg (image/jpeg)");
      }
  }

Online Workshop – Learn How Gilt Groupe Uses Appium to Automate Their Mobile Testing

December 19th, 2013 by Bill McGee

1/14/2014 Update: We’ll be posting a link to the recorded webinar in several days.

We’re teaming up with the Gilt Groupe for our next online workshop! Join us on Tuesday, January 14th, 2014 at 11:00am PST to learn about how Gilt uses Appium to automate their mobile testing. Today, mobile purchases make up more than 40% of Gilt’s sales, and the mobile e-commerce market continues to rage – in the U.S. alone, mobile commerce will grow an estimated 63% to around $34.2 billion, up from $21 billion in 2012.

Find out how Gilt has streamlined testing of this vital part of their business, and what they are doing today on the testing front.

In the workshop, Matt Isaacs, Engineer at Gilt Groupe, will cover how mobile testing at Gilt has changed since the early days of Gilt mobile. Then Mike Redman, Director of Sales Engineering at Sauce, will explain how to run your Appium tests on the Sauce cloud. This will be followed by a live Q&A. Register today!


Appium Meetup Slides

December 17th, 2013 by lauren nguyen

Missed last week’s Appium meetup? Check out Jonathan’s slides that highlight the history of Appium and its upcoming roadmap!

Keep an eye on the Appium meetup page to hear about upcoming Appium meetups.

The Most Unique Use Case for Sauce Ever!

December 16th, 2013 by lauren nguyen

We see people using Sauce to test all kinds of things. But the prize for most unique use case for Sauce ever goes to DocPop. When he decided to get married spontaneously in Vegas, he used Sauce to spin up a Windows machine running IE6 on his MacBook to fill out Nevada’s marriage license form. The form wasn’t compatible with anything but older versions of IE, and they wanted to get everything squared away in a hurry.

We love helping people test their apps, but we’re especially tickled to be able to help this lovely couple on their special day! Congrats to you both! Read all about their wedding on DocPop’s blog.

Robot Hackathon Tonight! Build Your Own Mobile Testing Robot and Other November Happenings

November 6th, 2013 by Bill McGee

Now that you’ve gained an hour of sleep how are you going to get the most out of that new-found time? We can think of several slick events happening this month and there’s still plenty of time to join in. Tonight is Sauce’s 2nd mobile testing robot hackathon. Come build your own Tapster robot and learn how to use it to test your iOS or Android mobile apps. The hackathon will be held at GitHub/SF from 5:30 to 11:30 and there are still tickets available for builders and spectators.

Then on Thursday, November 7th there’s the first-ever Appium San Francisco meetup at Lookout HQ in SF. Dan Cuellar, original author of Appium and Lead Software Engineer for the Test team at Zoosk, will give a quick intro, history, and glimpse into what’s coming next for Appium, and then present how to use it to automate Mac OS X applications. Felix Rodriguez and Dan Patey from Bleacher Report will then present how to integrate native app testing for iOS and Android devices into current test suites using CucumberSauceium. Did we mention pizza and drinks? The meetup will run from 6:30 to 8:00.

Switch into glide and take a jaunt down highway 101 next week to check out AnDevCon 2013, the “largest dedicated Android conference in the universe!” The conference runs November 12-15 and Sauce will be demo’ing Appium and Tapster in booth 305 on Thursday the 14th and Friday the 15th. Might be a bottle of hot sauce in it for you!

Later in the month is the San Francisco Selenium meetup at Facebook in Menlo Park. Held on Tuesday, November 19th, the topic will be “Scaling Selenium at FB”. Can’t put it any better than this: “Facebook’s billion users have no idea the site gets released twice a day with no user impact. Achieving this put high demands on automated testing, and Selenium is a key part of making it all possible.” Come listen and learn how Facebook scaled Selenium to their size, FB’s new open-source PHP Selenium API, and how the company is making it easier for their engineers to write and debug Selenium tests.

The Sauce Ecosystem and Integrations Approach

October 28th, 2013 by Jonathan Lipps

At Sauce Labs, we love open source software. We love the freedom, the efficiencies and the possibilities it provides. That’s why we built a platform leveraging the leading open source functional testing tools, Selenium and Appium: it’s important to us to earn our customers’ business every day rather than attempt to make them beholden to a proprietary solution. We’ve even open-sourced our tutorials, so that we have an easy way for others to notify us of issues or contribute a tweak to the tutorial code (see our PHP Tutorial, for example).

We don’t just love open source as a company, we love open source as developers too! We’re constantly trading tips and tricks for refining our open source dev setups, and we think of ourselves as building the kind of testing platform that we would want to use ourselves as open source aficionados. We’re active contributors to and participants in many different open source projects. Besides Selenium and Appium themselves, our devs have contributed to various Selenium/WebDriver client libraries (like Wd.js and Sausage), Selenium Builder, the Monocle async framework for Python (and Javascript), and more. The list goes on!

Sauce Development Teams

We recognize the importance of the open source ecosystem to our business. After all, running a test on Sauce involves at least one and usually a whole stack of open source technologies, from the Selenium/WebDriver client libraries to CI servers like Jenkins or Travis-CI. This of course massively impacts customers’ experience of Sauce. It doesn’t matter how blazing fast our cloud of browsers and mobile devices is if someone can’t run a test because some version of their language binding is broken! For this reason we are proud to announce that Sauce is building out a new development team: the Ecosystem and Integrations team. This team complements the existing Infrastructure and Web teams, focusing on opportunities to make our open source ecosystem better and opportunities to provide helpful integrations with other projects and services. We know that navigating an open source ecosystem is complex, but to provide the best possible experience to all our users, Sauce wants to be a Node company to the Node community, a PHP company to the PHP community, a Java company to the Java community, and so on.

It’ll be a challenge to find and solve the issues that exist in various projects, contributing our resources congruently with the structures already in place for each project. It’ll be a challenge to write idiomatic, intuitive code for a variety of communities, and prioritize well-organized documentation over a simple drive-by fix. It’ll be a challenge to communicate all the awesome integrations we’re building at Sauce to each community in the manner best received by that community. And that’s why we felt it was essential to have a team focused directly on these goals. So wish us luck, and make sure to let us know if you see an opportunity for a certain language or framework to work more seamlessly with Sauce!

Oh, yeah, Sauce is growing fast, and we are actively looking for developers to join this team and help build out its vision! So if working on open source projects in a variety of languages and frameworks sounds interesting to you, and if you’re excited about the idea of making it easier for developers to deliver higher quality faster than ever, head on over to our careers page and check it out.

Repost: Automating Mobile Testing at Gilt

September 23rd, 2013 by lauren nguyen

This post comes from our friends at Gilt, who are using Appium to automate their mobile testing. Check out the original post on the Gilt Tech blog!

Just a few years ago, mobile purchases made barely a dent in Gilt’s revenues. Today, mobile represents more than 40 percent of our sales, and will soon reach 50 percent. With so many of our customers interacting with us through their mobile devices, it’s imperative that we offer them a stable and enjoyable shopping experience. One bad bug can drive away customers for good.

Given the increasing importance of mobile to our business, and therefore the need to expand the number of teams that can contribute to our mobile applications, the Gilt mobile team has been hard at work improving and streamlining our testing processes. This post will describe our automated testing efforts, the technologies we use, and what lies on the horizon—both for us, and for the automation tools we use.

Testing at Gilt: A Brief Overview

In the early days of Gilt Mobile, none of our testing was automated. This was workable at the time because only one team—the mobile team—worked on the applications. We followed a fairly simple development cycle, as follows:

  • Our crew of mobile engineers would implement a series of new features, as determined by product management.
  • Known issues would be prioritized by severity, and fixed accordingly.
  • We would internally release versions of the application for testing purposes on a regular basis.
  • An overseas team of testers performed high level feature testing, regression testing, and other testing not covered by engineering during development.
  • The QA team would be responsible for testing new features, stress-testing the app in an effort to find new issues, and performing a suite of sanity tests to make sure that basic app functionality remained intact.
  • Once QA gave the go-ahead, we’d sign the build appropriately and submit to the iOS App Store.

As mobile has become more critical to the business and more teams have started to contribute to our mobile applications, QA has become increasingly important—particularly in iOS, where we concentrate most of our development efforts. With an increasing install base, more contributors, and more features, comes increased complexity. Ensuring that the app is issue-free when we submit to the App Store for approval has become more important than ever.

Understandably, the performance on mobile has generated a lot of excitement within Gilt. This has led to increased emphasis on mobile development within Gilt tech in general. We’re starting to see increased involvement from engineers on other teams, and having a loosely defined development process makes it difficult for newcomers to get up to speed and contribute to our efforts. How can we help them? With a well-defined process and a dash of automation!

Our Development Process Today

We’re gradually migrating to more of a test-driven workflow. We still depend very heavily on manual QA, but this is now supplemented by a suite of automated tests. Our development process is increasingly starting to look like this:

  • Developers are encouraged to write tests for their current features and fixes.
  • Instead of hand-building test releases at random, we now have Jenkins performing nightly builds.
  • Builds are followed by a run of functional tests.
  • Generated test reports are delivered to all team members, and can give detailed information on exactly where and how a failure occurred.

The intent is to free up QA from having to do repetitive and time-consuming sanity tests, which allows them to focus on testing new features and finding issues before our users do.

The automated test framework we started with was KIF: Keep It Functional. Maintained by Square, KIF is quite mature. Using KIF was something of a proof of concept for us—more of a first step toward getting our automated testing situation under control. As such, we didn’t go through the exercise of writing an entire sanity test suite, but instead produced a couple short tests of basic functionality.

What we like about KIF: Tests run in the same process as the app, and so have access to notifications. This is pretty handy when testing asynchronous parts of your app. What we don’t like: it’s heavy on private and undocumented accessibility APIs. There may be some disagreement on how big a deal this is, but Apple’s under no obligation to keep these APIs consistent, and can do away with KIF dependencies without notifying anyone—which would make things pretty difficult to fix. Setting up KIF with Continuous Integration—while not impossible—could be easier.

Lately we’ve been trying out Appium, a tool we started exploring after one of our engineers learned about it at this year’s Selenium conference in Boston. Appium is built on top of UIAutomation: a framework, provided by Apple as part of Instruments, that enables you to interact with apps programmatically. We’ve used UIAutomation quite a bit in test prototyping and debugging.

So, Appium doesn’t use any private APIs or resort to any cloak-and-dagger hackery to get the job done. Great! But that’s only the start of how Appium captured our attention. Appium is built on the idea that testing native apps shouldn’t require including SDKs or recompiling your app. You should be able to use your preferred test practices, frameworks, and tools.

Appium is able to achieve all this by implementing a large portion of the Selenium JSON wire protocol, and essentially translating these calls into sets of native framework commands—UIAutomation and uiautomator, for iOS and Android respectively. It’s really this aspect alone that has us hooked. Our Web team has been down this path before, and has already built out a testing infrastructure revolving around Selenium, Scala, and ScalaTest. Using Appium has allowed us to take advantage of large chunks of our preexisting work. No reinventing the wheel, and no learning the hard way. This also provides us with a nice entry point for other Gilt engineers interested in working on mobile.

While it didn’t take long for us to get up and running with Appium, I still can’t say that it suits the needs of everyone out there building apps. Smaller teams with no Selenium experience or existing infrastructure might feel a little more comfortable sticking with something like KIF or Calabash.

Can we do better? (Always)

Like everything else, Appium isn’t perfect. Areas where Appium could benefit from significant improvement:

  • For us, the Appium XPath engine is quite limited, and can only evaluate simple XPath expressions. It might be nice to see Appium use something like Cameron McCormack/Yaron Naveh’s XPath parsing package for node.
  • Another, more minor gripe is that tag names used on the web don’t correspond with their mobile equivalents in Appium. For example, a text field on the web has the tag name “input.” Appium calls these tags either “textfield” or “UIATextfield,” which brings us to another issue…
  • I say “either,” because doing something like driver.findElementsByTagName(“textfield”).getTagName() returns “UITextField.” Nothing shocking here, but perhaps the tag name we search for and the tag name returned should be the same thing?

The good news is that development activity on Appium is really high, and its (notably friendly) community is rapidly addressing its shortcomings. You developers out there who are looking for projects can always get involved in fixing some of this stuff. Some of us on Gilt’s mobile team have recently put some fixes into Appium.

Final Thoughts

At Gilt, we’re trying to create a culture that promotes a proactive approach to testing. For now we’re focused on taking a load off of the QA team by automating UI testing. Gradually we’ll move on to integration testing. Long-term, we’d like to adopt a TDD-centric workflow, with developers creating tests from the outset, and taking responsibility for test maintenance.

While frameworks like OCUnit and UIAutomation are relatively well documented, it doesn’t seem like any heavy emphasis has been placed on testing as a part of the development cycle. The tools are provided, but not evangelized. Fortunately, this is changing. Xcode 5 will feature some terrific test- and automation-centric enhancements such as XCTest and the Bots Continuous Integration system. KIF is revamping its API with KIF-Next to get in line with Xcode 5 and take advantage of its new features. And Selenium 3, which is in the spec stages, appears set to become a tool for user-focused automation of mobile and web apps. All-round, the future for automation and testing native mobile apps is looking brighter.