Campus Explorer Reduces Testing Time From 72 Hours to 72 Minutes Using Sauce Labs

July 18th, 2014 by Amber Kaplan

We sat down with Senior QA Manager Sage Rimal to hear how Campus Explorer uses Sauce Labs. Sage shared how they’ve automated their tests on Sauce and have since reduced their testing time from 72 hours to 72 minutes.

Watch the video below to get the latest!

Want to share your story? We want to hear from you! Submit a request here.

Appium Bootcamp – Chapter 1: Get Started with Appium Testing

July 16th, 2014 by Amber Kaplan

This is the first post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the first of eight posts; a new post will be released each week.

Before You Get Started

Appium is built to test mobile apps of all kinds (native, hybrid, and mobile web). It has client libraries written in every popular programming language, it’s open source, it works on every prominent operating system, and best of all, it works for both iOS and Android.

But before you jump in with both feet, know that there is a bit of setup in order to get things up and running on your local machine.

A brief Appium primer

Appium is architected similarly to Selenium — there is a server which receives commands and executes them on a desired node. But instead of a desktop browser, the desired node is running a mobile app on a mobile device (which can be either a simulator, an emulator, or a physical device).

So in order for Appium to work, we will need to install the dependent libraries for each device we care about.

Initial Setup

Testing an iOS app? Here’s what you’ll need:

+ Install Java (version 7 of the JDK or higher)
+ Install Xcode
+ Install Xcode Command-line Build Tools

For more info on supported Xcode versions, read this.

Testing an Android app? Here’s what you’ll need:

+ Install Java (version 7 of the JDK or higher)
+ Install the Android SDK (version 17 or higher)
+ Install the necessary packages for your Android platform version in the Android SDK Manager
+ Configure an Android Virtual Device (AVD) that mimics the device you want to test against

For more info on setting up the Android SDK and configuring an AVD, read this.

Next, you’ll need to install Appium. Luckily, there’s a handy binary for it: Appium.app for OS X and Appium.exe for Windows. This binary also happens to be a GUI wrapper for Appium.

Alternatively, if you want the absolute latest version of Appium and aren’t afraid to get your hands dirty, then you can install Appium from source and run it from the command line.

But if you’re new to mobile testing, then the one-click installer is a better place to start.

An Appium GUI primer

The Appium GUI is a one-click installer for the Appium server that also makes it easy to configure your app and Appium itself.

Aside from the easy install, it adds a key piece of functionality — an inspector. The inspector enables a host of functionality, most notably:

+ Shows you all of the elements in your app
+ Enables you to record and playback user actions

[Screenshot: the Appium inspector (Android)]

While the inspector works well for iOS, it has some problem areas on Android at the moment. To that end, the Appium team encourages the use of uiautomatorviewer (an inspector tool provided by Google that offers similar functionality to the Appium inspector). For more info on how to set that up, read this.

[Screenshot: uiautomatorviewer]

More on inspectors and how to use them in a later post.

It’s worth noting that while we can configure our app within the Appium GUI, it’s not necessary since we will be able to do it more flexibly in code. More on that in the next post.
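As a rough preview, here is a minimal sketch of what configuring an app in code can look like using the Ruby appium_lib bindings; the platform, device name, and app path are placeholder assumptions, not settings from this post:

require 'appium_lib'

# Hypothetical capabilities: swap in your own platform, device, and app path
caps = {
  caps: {
    platformName: 'iOS',
    deviceName:   'iPhone Simulator',
    app:          '/path/to/your.app'
  }
}

driver = Appium::Driver.new(caps)
driver.start_driver  # launches the simulator/emulator and installs the app
driver.driver_quit   # shuts everything down when you're finished

The same capabilities hash can be pointed at Android by swapping the platform, device, and app values.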

Making Sure Appium Is Set Up

After you have your Appium one-click installer up and running, you can verify your setup by using its Doctor functionality: the button to the left of the `Launch` button, the one that looks like a doctor’s stethoscope.

Click on that, and it should output information in the center console window of the Appium GUI.

[Screenshot: Appium Doctor output in the Appium GUI]

If you don’t see anything outputted, refer to this open issue.

What About A Programming Language?

Before you do a victory lap, you’ll also want to have chosen a programming language to write your tests in, installed said programming language, and installed its client bindings for Appium.

Currently, Appium has client bindings for Java, JavaScript, Objective-C, .NET, PHP, Python, and Ruby.

The examples in this series will be written in Ruby. You can use version 1.9.3 or later, but it’s advisable to use the latest stable version. For instructions on installing Ruby and the necessary client libraries (a.k.a. “gems”), read this.
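For reference, a minimal Gemfile for this kind of Ruby setup might look like the sketch below; the exact gems (and the choice of RSpec as the runner) are assumptions for illustration, not requirements from this post:

source 'https://rubygems.org'

gem 'appium_lib'  # the Appium client bindings for Ruby
gem 'rspec'       # an example test runner; any Ruby test harness will do

With that in place, `gem install bundler` followed by `bundle install` pulls everything down.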

Outro

Now that you have Appium set up with all of its requisite dependencies, along with a programming language and Appium client bindings, we’re ready to load up a test app and step through it.

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter – @tourdedave

AngularJS Data Models: $http VS $resource VS Restangular

July 15th, 2014 by Amber Kaplan

Sauce Labs software developer Alan Christopher Thomas and his team have been hard at work updating our stack. He shared with us some insight into their dev process, so we thought we’d show off what he’s done. Read his post below.

Over the past few months, the Sauce Labs web team has fixed its crosshairs on several bits of our stack that needed to be refreshed. One of those bits is the list of jobs all customers see when they first log into their account. It looks like this:

[Screenshot: the jobs list]

Our current app is built in Backbone.js. We vetted lots of options for frontend MVC frameworks and data binding that could replace and simplify the existing Backbone code: Ember, Angular, React, Rivets, Stapes, etc.

After lots of research, building some stuff, and a healthy dose of personal preference, our team decided we were most comfortable with Angular.

We had one more thing we wanted to verify, though, before settling.

How complicated will it be to model our data?

This was the first question on most of our minds, and it was the one question about Angular for which Google fell into a void of silence. Backbone has models and collections. Ember.js has Ember Data and ember-model. Stapes has extensible observable objects that can function as collections. But what about Angular? Most examples we found were extremely thin on the data layer, just returning simple JavaScript objects and assigning them directly to a $scope model.

So, we built a small proof of concept using three different AngularJS data modeling techniques. This is a dumbed down version of our Jobs page, which only displays a list of jobs and their results. Our only basic requirement was that we keep business logic out of our controllers so they wouldn’t become bloated.

We gave ourselves some flexibility with the API responses and allowed them either to be wrapped in an object or left unwrapped, to emphasize the strengths of each approach. However, all calls require limit and full parameters to be passed in the GET query string.

Here’s what we wanted the resulting DOM template to look like:

{{ job.getResult() }} {{ job.name }}

Note that each resulting job should be able to have a getResult() method that displays a human-readable outcome in a badge. The rendered page looks like this:

[Screenshot: the rendered jobs page]


The Code: $http vs $resource vs Restangular

So, here’s the resulting code for all three approaches, each implementing a getResult() method on every job.

$http

In this approach, we created a service that made the API calls and wrapped each result as a Job() object with a getResult() method defined on the prototype.

API Response Format:

{
  "meta": {}, 
  "objects": [
    {
      "breakpointed": null, 
      "browser": "android", 
      "browser_short_version": "4.3",
      ...
    },
    {
      ...
    },
    ...
  ]
}

models.js:

angular.module('job.models', [])
    .service('JobManager', ['$q', '$http', 'Job', function($q, $http, Job) {
        return {
            getAll: function(limit) {
                var deferred = $q.defer();

                $http.get('/api/jobs?limit=' + limit + '&full=true').success(function(data) {
                    var jobs = [];
                    for (var i = 0; i < data.objects.length; i++) {
                        jobs.push(new Job(data.objects[i]));
                    }
                    deferred.resolve(jobs);
                });

                return deferred.promise;
            }
        };
    }])
    .factory('Job', function() {
        function Job(data) {
            for (var attr in data) {
                if (data.hasOwnProperty(attr))
                    this[attr] = data[attr];
            }
        }

        Job.prototype.getResult = function() {
            if (this.status == 'complete') {
                if (this.passed === null) return "Finished";
                else if (this.passed === true) return "Pass";
                else if (this.passed === false) return "Fail";
            }
            else return "Running";
        };

        return Job;
    });

controllers.js:

angular.module('job.controllers', [])
    .controller('jobsController', ['$scope', 'JobManager', function($scope, JobManager) {
        var limit = 20;
        $scope.loadJobs = function() {
            JobManager.getAll(limit).then(function(jobs) {
                $scope.jobs = jobs;
                limit += 10;
            });
        };

        $scope.loadJobs();
    }]);

This approach made for a pretty simple controller, but since we needed a custom method on the model, our services and factories quickly became verbose. Also, if we were to abstract away this behavior to apply to other data types (sub-accounts, tunnels, etc.), we might end up writing a whole lot of boilerplate.

$resource

UPDATE: Per Micke’s suggestion in the comments section below, we’ve posted a follow-up with a cleaner implementation of the $resource version of the Job model. It parses an API response similar to the one shown in the Restangular scenario and allows for much cleaner method declaration using angular.extend.

Angular provides its own $resource factory, which has to be included in your project as a separate dependency. It takes away some of the pain we felt writing the JobManager service boilerplate and allows us to apply our custom method directly to the $resource prototype, then transform responses so that each result is wrapped in a Job resource.

API Response Format:

{
  "items": [
    {
      "breakpointed": null, 
      "browser": "android", 
      "browser_short_version": "4.3", 
      ...
    }, 
    {
      ...
    }
    ...
  ]
}

models.js:

angular.module('job.models', [])
    .factory('Job', ['$resource', function($resource) {
        var Job = $resource('/api/jobs/:jobId', { full: 'true', jobId: '@id' }, {
            query: {
                method: 'GET',
                isArray: false,
                transformResponse: function(data, header) {
                    var wrapped = angular.fromJson(data);
                    angular.forEach(wrapped.items, function(item, idx) {
                        wrapped.items[idx] = new Job(item);
                    });
                    return wrapped;
                }
            }
        });

        Job.prototype.getResult = function() {
            if (this.status == 'complete') {
                if (this.passed === null) return "Finished";
                else if (this.passed === true) return "Pass";
                else if (this.passed === false) return "Fail";
            }
            else return "Running";
        };

        return Job;
    }]);

controllers.js:

angular.module('job.controllers', [])
    .controller('jobsController', ['$scope', 'Job', function($scope, Job) {
        var limit = 20;
        $scope.loadJobs = function() {
            var jobs = Job.query({ limit: limit }, function(jobs) {
                $scope.jobs = jobs.items;
                limit += 10;
            });
        };

        $scope.loadJobs();
    }]);

This approach also makes for a pretty elegant controller, except we really didn’t like that the query() method didn’t return a promise directly, but instead gave us an object with the promise in a $promise attribute (thanks Louis!). It felt a little ugly. Also, the process of transforming result objects and wrapping them felt like a strange dance to achieve some simple behavior (UPDATE: see this post). We’d probably end up writing more boilerplate to abstract that part away.

Restangular

Last, but not least, we gave Restangular a shot. Restangular is a third-party library that attempts to abstract away pain points of dealing with API responses, reduce boilerplate, and do it in the most Angular-y way possible.

API Response Format:

[
  {
    "breakpointed": null, 
    "browser": "android", 
    "browser_short_version": "4.3", 
    ...
  }, 
  {
    ...
  }
  ...
]

models.js:

angular.module('job.models', [])
  .service('Job', ['Restangular', function(Restangular) {
    var Job = Restangular.service('jobs');

    Restangular.extendModel('jobs', function(model) {
      model.getResult = function() {
        if (this.status == 'complete') {
          if (this.passed === null) return "Finished";
          else if (this.passed === true) return "Pass";
          else if (this.passed === false) return "Fail";
        }
        else return "Running";
      };

      return model;
    });

    return Job;
  }]);

controllers.js:

angular.module('job.controllers', [])
  .controller('jobsController', ['$scope', 'Job', function($scope, Job) {
    var limit = 20;
    $scope.loadJobs = function() {
      Job.getList({ full: true, limit: limit }).then(function(jobs) {
        $scope.jobs = jobs;
        limit += 10;
      });
    };
    $scope.loadJobs();
  }]);

In this one, we got to cheat and use Restangular.service(), which provides all the RESTful goodies for us. It even abstracted away writing out full URLs for our API calls. Restangular.extendModel() gives us an elegant way to attach methods to each of our model results, making getResult() straightforward and readable. Lastly, the call in our controller returns a promise! This lets us write the controller logic a bit more cleanly and allows us to be more flexible with the response in the future.

tl;dr: Concluding Thoughts

Each of the three approaches has its appropriate use cases, but I think in ours we’re leaning toward Restangular.

$http - $http is built into Angular, so there’s no need for the extra overhead of loading in an external dependency. $http is good for quick retrieval of server-side data that doesn’t really need any specific structure or complex behaviors. It’s probably best injected directly into your controllers for simplicity’s sake.

$resource - $resource is good for situations that are slightly more complex than $http. It’s good when you have pretty structured data, but you plan to do most of your crunching, relationships, and other operations on the server side before delivering the API response. $resource doesn’t let you do much once you get the data into your JavaScript app, so you should deliver it to the app in its final state and make more REST calls when you need to manipulate or look at it from a different angle. Any custom behavior on the client side will need a lot of boilerplate.

Restangular - Restangular is a perfect option for complex operations on the client side. It lets you easily attach custom behaviors and interact with your data in much the same way as other model paradigms you’ve used in the past. It’s promise-based, clean, and feature-rich. However, it might be overkill if your needs are basic, and it carries along with it any extra implications that come with bringing in additional third-party dependencies.

Restangular seems to be a decently active project, with the prospect of a 2.0 that’s compatible with Angular 2.0 (currently in a private repository). However, a lot of the project’s progress seems to depend on the work of a single developer for the time being.

We’re looking forward to seeing how Restangular progresses and whether or not it seems like a good fit for us at Sauce!

- Alan Christopher Thomas, Software Developer, Sauce Labs

Register Today: Fearless Browser Test Automation [WEBINAR]

July 14th, 2014 by Amber Kaplan

We’re thrilled to be working with our friends at O’Reilly to present our next webinar, Fearless Browser Test Automation with John-David Dalton, on August 5 at 10:00 AM Pacific Time.

Browser test automation can be intimidating, leaving developers to spend their time manually testing browsers (many times in VMs) or opting to simply not test a range of browsers. Join John-David Dalton as he discusses browser test automation, removes the roadblocks/gotchas, and shows lots of awesome things you can do (code coverage, perf testing, tagging, & more).

Visit this link to sign up today!

About John-David Dalton, Program Manager, Microsoft

John-David Dalton is a Program Manager at Microsoft, working on the Chakra JavaScript engine, helping make your web applications, animations, and games run faster and smoother in Internet Explorer. He’s also the creator of Lo-Dash, co-maintainer of jsPerf and Benchmark.js, and contributes to other open source projects to help developers be more productive & write better JavaScript.

How HotelTonight.com Leverages Appium for Mobile Test Automation

July 1st, 2014 by Amber Kaplan

We love this blog post written by Quentin Thomas at HotelTonight! In it, he explains how they use Appium to automate their mobile tests. He also walks readers through specifics, such as the RSpec config helper. Read a snippet below.

Thanks to the engineers at Sauce Labs, it is now possible to tackle the mobile automation world with precision and consistency.

Appium, one of the newest automation frameworks introduced to the open source community, has become a valuable test tool for us at HotelTonight. The reason we chose this tool boils down to Appium’s philosophy.

“Appium is built on the idea that testing native apps shouldn’t require including an SDK or recompiling your app. And that you should be able to use your preferred test practices, frameworks, and tools”.

-Quentin Thomas, HotelTonight, June 17, 2014

To read the full post with code, click here. You can follow Quentin on Twitter at @TheQuengineer.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Recap: Continuous Testing in the Cloud [WEBINAR]

June 30th, 2014 by Amber Kaplan

Last week our own Michael Sage discussed Continuous Testing in the Cloud in our latest webinar.

In it, he went over how to take advantage of cloud-hosted development resources to speed up releases and improve application quality as a best practice. He also talked about how to use Sauce Labs to securely execute your Selenium tests in parallel and reduce the time it takes to run your critical integration and acceptance tests.

Missed the webinar? You can listen to the recording here, or check out the slides below.

VelocityConf Santa Clara 2014 Recap: Talk by Jonah, Interview with Sebastian [VIDEOS]

June 27th, 2014 by Amber Kaplan

It was a busy week for Sauce Labs! We were out in full effect at VelocityConf Santa Clara.

Tapsterbot was showing off its sweet mobile testing moves at the Sauce booth. Check out the Vine video below to see it in action (courtesy of Jon Johns).

Our own Jonah Stiennon, Ecosystems and Integrations Developer, gave a Lightning Demo keynote as well. Watch his talk, Test Driven Mobile Development with Appium, Just Like Selenium, below.

Lastly, be sure to check out this interview with Mike Hedrickson and Sebastian Tiedtke, Director of Web Development at Sauce. Lots of great gems in here about best practices, testing “to the left”, and the state of mobile. Enjoy!

Want to see us at a conference near you? Let us know where we should go next in the comments.

Jason Huggins Takes a Leave of Absence at Sauce Labs to Work on HealthCare.gov 2.0

June 24th, 2014 by Amber Kaplan

Last year Sauce Labs cofounder Jason Huggins was part of the tech surge team of advisors who worked to fix HealthCare.gov, the home of “Obamacare,” through the power of test automation. He’s decided to take another leave of absence from Sauce to help the Ad Hoc and Marketplace 2.0 teams further this great cause.

Here’s what Jason had to say about the state of the government and automation:

Long term, government needs to get better at software development, including test automation. For now, the best way to fight that fight is by example. If the Marketplace 2.0 team can help the government avoid last year’s drama when open enrollment begins again in October, then we’ll have a good story to share with other departments and agencies. But if we get a repeat of last year, then it’ll be a lot harder to convince government officials to change their ways.

-Jason Huggins, June 23, 2014

Read Jason’s inspiring, original post here. You can follow his adventures on Twitter at @hugs.

While we’ll miss him around the office, we’re confident this is a worthy cause.

Bleacher Report’s Continuous Integration & Delivery Methodology: Test Analytics

June 24th, 2014 by Amber Kaplan

This is the final post in a three part series highlighting Bleacher Report’s continuous integration and delivery methodology by Felix Rodriguez.  Read the first post here and the second here.

Last week we discussed setting up an integration testing server that we can POST to in order to kick off a suite of tests. Now that we are storing all of our suite runs and individual tests in a postgres database, we can do some interesting things – like track trends over time. At Bleacher Report we like to use a tool named Librato to store our metrics, create sweet graphs, and display pretty dashboards. One of the metrics that we record on every test run is our PageSpeed Insights score.

PageSpeed Insights

PageSpeed Insights is a tool provided by Google developers that analyzes your web or mobile page and gives it an overall rating. You can use the website to get a score manually, but instead we hooked into their API in order to submit the score for each page visit to Librato. Each staging environment is recorded separately so that if any of them returns measurements that are off, we can attribute this to a server issue.

[Graph: average page speed scores]

Any server that shows an extremely high rating is probably only loading a 500 error page. A server that shows an extremely low rating is probably running some new, untested JS/CSS code.

Below is an example of how we submit a metric using Cukebot:

generic_steps.rb

require_relative 'lib/pagespeed'

Given(/^I navigate to "(.*?)"$/) do |path|
  visit path
  # Grab the PageSpeed score for the page we just visited
  pagespeed = PageSpeed.new(current_url)
  ps = pagespeed.get_results
  score = ps["score"]
  puts "Page Speed Score is: #{score}"
  # Build a metric name from the host by stripping the protocol and ".com/"
  metric = host.gsub(/http\:\/\//i,"").gsub(/\.com\//,"") + "_speed"
  begin
    pagespeed.submit(metric,score)
  rescue
    puts "Could not send metric"
  end
end

lib/pagespeed.rb

require 'net/https'
require 'json'
require 'uri'
require 'librato/metrics'

class PageSpeed
  def initialize(domain,strategy='desktop',key=ENV['PAGESPEED_API_TOKEN'])
    @domain = domain
    @strategy = strategy
    @key = key
    @url = "https://www.googleapis.com/pagespeedonline/v1/runPagespeed?url=" + \
      URI.encode(@domain) + \
      "&key=#{@key}&strategy=#{@strategy}"
  end

  # Call the PageSpeed Insights API and return the parsed JSON response
  def get_results
    uri = URI.parse(@url)
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE
    request = Net::HTTP::Get.new(uri.request_uri)
    response = http.request(request)
    JSON.parse(response.body)
  end

  # Submit the score to Librato as a gauge metric
  def submit(name, value)
    Librato::Metrics.authenticate "ops@bleacherreport.com", ENV['LIBRATO_TOKEN']
    Librato::Metrics.submit name.to_sym => {:type => :gauge, :value => value, :source => 'cukebot'}
  end
end

 

Google’s PageSpeed Insights API returns relatively quickly, but as you start recording more metrics on each visit (for example, fetching results for both desktop and mobile), we suggest building a separate service that runs the desired performance test asynchronously via a POST – or at least in its own thread. This keeps the performance check from blocking the test run or causing a test to run long. Which brings us to our next topic.
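For example, here is a rough sketch of the in-its-own-thread variant, built on the PageSpeed class above and the same host helper used in the step definition; treat it as an illustration rather than Bleacher Report's actual implementation:

Given(/^I navigate to "(.*?)"$/) do |path|
  visit path

  # Fire and forget: record the PageSpeed score without blocking the test run
  Thread.new(current_url) do |url|
    begin
      pagespeed = PageSpeed.new(url)
      score = pagespeed.get_results["score"]
      metric = host.gsub(/http\:\/\//i, "").gsub(/\.com\//, "") + "_speed"
      pagespeed.submit(metric, score)
    rescue
      puts "Could not send metric"
    end
  end
end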

Tracking Run Time

With Sauce Labs, you are able to quickly spot a test that takes a long time to run. But when you’re running hundreds of tests in parallel, all the time, it’s hard to keep track of the ones that normally take a long time to run versus the ones that have only recently started to take an abnormally long time to run. This is why our Cukebot service is so important to us.

Now that each test run is stored in our database, we grab the information Sauce stores for run time length and store it with the rest of the details from that test. We then submit that metric to Librato and track it over time in an instrument. Once again, if all of our tests take substantially longer to run on a specific environment, we can use that data to investigate issues with that server.

To do this, we take advantage of Cucumber’s before/after hooks to grab the time it took for the test to run in Sauce (or track it ourselves) and submit it to Librato. We use the on_exit hook to record the total time of the suite and submit that as well.
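Here is a sketch of what that could look like, using a Cucumber Around hook for per-scenario timing and Ruby's at_exit for the suite total; the metric names, the file name, and the use of at_exit in place of the on_exit hook mentioned above are assumptions:

# features/support/timing.rb (hypothetical file name)
require 'librato/metrics'

Librato::Metrics.authenticate "ops@bleacherreport.com", ENV['LIBRATO_TOKEN']
suite_start = Time.now

Around do |scenario, block|
  started = Time.now
  block.call
  elapsed = Time.now - started
  Librato::Metrics.submit :scenario_duration => {:type => :gauge, :value => elapsed, :source => 'cukebot'}
end

at_exit do
  Librato::Metrics.submit :suite_duration => {:type => :gauge, :value => Time.now - suite_start, :source => 'cukebot'}
end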

Test Pass/Fail Analytics

To see trends over time, we’d also like to measure our pass/fail percentage for each individual test on each separate staging environment as well as our entire suite pass/fail percentage. This would allow us to notify Ops about any servers that need to get “beefed up” if we run into a lot of timeout issues on that particular setup. This would also allow us to quickly make a decision about whether we should proceed with a deploy or not when there are failed tests that pass over 90% of the time and are currently failing.

The easiest way to achieve this is to use the Cucumber after-hook to query the postgres database for total passed test runs on the current environment in the last X days, and divide that by the total test runs on the current environment in the same period to generate a percentage, store it, then track it over time to analyze trends.
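As a sketch, such an after-hook might look like the following; the table and column names, database name, and TEST_ENV variable are invented for illustration, since the actual Cukebot schema isn't shown here:

require 'pg'
require 'librato/metrics'

After do |scenario|
  conn = PG.connect(dbname: 'cukebot')  # hypothetical database name
  window = "run_at > now() - interval '7 days'"

  # Passed runs vs. total runs on this environment over the time window
  passed = conn.exec_params(
    "SELECT count(*) FROM test_runs WHERE environment = $1 AND status = 'passed' AND #{window}",
    [ENV['TEST_ENV']]).getvalue(0, 0).to_f
  total = conn.exec_params(
    "SELECT count(*) FROM test_runs WHERE environment = $1 AND #{window}",
    [ENV['TEST_ENV']]).getvalue(0, 0).to_f

  pass_rate = total.zero? ? 0 : (passed / total) * 100
  # Assumes Librato::Metrics.authenticate has already been called (see lib/pagespeed.rb)
  Librato::Metrics.submit :pass_rate => {:type => :gauge, :value => pass_rate, :source => 'cukebot'}
  conn.close
end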

Summary:

Adding tools like these will allow you to look at a dashboard after each build and give your team the confidence to know that your code is ready to be released to the wild.

Running integration tests continuously used to be our biggest challenge. Now that we’ve finally arrived at the party, we’ve noticed that there are many other things we can automate. As our company strives for better product quality, this pushes our team’s standards with regard to what we choose to ship.

One tool we have been experimenting with and would like to add to our arsenal of automation is Blitz.io. So far we have seen great things from them and have caught a lot of traffic-related issues we would have missed otherwise.

Most of what I’ve talked about in this series has been done, but some of it is right around the corner from completion. If you believe we can enhance this process in any way, I would greatly appreciate any constructive criticism via my Twitter handle @feelobot. As Sauce says, “Automate all the Things!”

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Microsoft Launches IE Developer Channel; Win For WebDriver

June 20th, 2014 by Amber Kaplan

In light of the recent news about Microsoft launching an IE Developer Channel that features WebDriver support, we asked our own VP of Engineering to comment on the story. His post is below – enjoy!

I think all web developers can identify with that fateful moment when you think your front-end project is finished, and then it dawns on you: time to start the painful process of cross-browser testing and debugging. You may get up and walk around the office, procrastinate, look for conversations to start — but eventually you sit down and dive in. Historically, Firebug was your main crutch; over time Chrome dev tools started to take hold. But eventually you find yourself waiting for your Windows VM to boot so you can dive into IE’s latest, waiting for the fateful popup stating something like: line 66, “Error: Object expected” – let the games begin.

The moment I saw Selenium IDE zooming through functional tests, I was more excited than I had ever been about development tools. The debug loop required to squash JavaScript bugs in IE from a Mac was so painful, and finally there was some automated hope in sight. I think we can all agree that Selenium RC left something to be desired — but trying to drive a browser against its will (and its protective security measures) using JS was ultimately a losing battle for anything requiring reliability.

However, reading the recent news from Microsoft announcing the IE Developer Channel made the DOM hacker inside me from five years ago breathe a sigh of relief. I think it’s fair to say that it’s better late than never — and the tools they are starting to ship look pretty cool. But what’s more, the new process and the way they are looking at shipping features through a dev channel show some promising potential for a better future relationship between devs and IE. On the Sauce Labs cloud we are still seeing ~19 percent of our jobs running against IE, and we don’t expect that to disappear anytime soon, as IE’s market share still accounts for a pretty substantial number of users, especially among enterprise customers.

What’s even more exciting is the inclusion of built-in WebDriver support. At this point, ~86 percent of Sauce Cloud usage is WebDriver instead of Selenium RC. Browser vendor support for the WebDriver JSON Wire Protocol specification has driven the continued and significant growth of the community around client libraries and tools. The acknowledgement and delivery of a clean implementation of this spec for IE, and the implied ongoing maintenance, is a step in the right direction toward making functional testing for IE straightforward and reliable. The ultimate stated goal of the Selenium project is to move the entire implementation of the specification into the browsers – and this, in combination with the W3C working draft, is an exciting step in that direction.

At the moment, to take advantage of all this new goodness you must be running a consumer-grade version of Windows (not a server version), but they are aware of that and understandably wanted to get some progress out the door. We are looking forward to deploying this to the Sauce Cloud so that our users can quickly get access to the latest and greatest.

I think the Selenium and developer communities should see this as a great sign that we are being heard and supported, and should continue pushing hard to make development and testing tools first-class citizens.

-Adam Christian, VP Engineering, Sauce Labs

You can read the announcement and a detailed breakdown here. Get directions on how to enable and play with the new functionality in their documentation here. Sauce Labs is working to support this as soon as it becomes available on server versions of Windows; stay tuned for updates upon release.