How To Add Visual Testing To Existing Selenium Tests

February 27th, 2015 by Dave Haeffner

Thanks again to those of you who attended our recent webinar with Applitools on automated visual testing. If you want to share it or if you happened to miss it, you can catch the audio and slides here. We also worked with Selenium expert Dave Haeffner to provide the how-to on the subject. Enjoy his post below.

 

The Problem

In previous write-ups I covered what automated visual testing is and how to do it. Unfortunately, based on the examples demonstrated, it may be unclear how automated visual testing fits into your existing automated testing practice.

Do you need to write and maintain a separate set of tests? What about your existing Selenium tests? What do you do if there isn’t a sufficient library for the programming language you’re currently using?

A Solution

You can rest easy knowing that you can build automated visual testing checks into your existing Selenium tests. By leveraging a third-party platform like Applitools Eyes, this is a simple feat.

And when coupled with Sauce Labs, you can quickly add coverage for those hard-to-reach browser, device, and platform combinations.

Let’s step through an example.

An Example

NOTE: This example is written in Java with the JUnit testing framework.

Let’s start with an existing Selenium test. A simple one that logs into a website.

// filename: Login.java

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Login {

    private WebDriver driver;

    @Before
    public void setup() {
        driver =  new FirefoxDriver();
    }

    @Test
    public void succeeded() {
        driver.get("http://the-internet.herokuapp.com/login");
        driver.findElement(By.id("username")).sendKeys("tomsmith");
        driver.findElement(By.id("password")).sendKeys("SuperSecretPassword!");
        driver.findElement(By.id("login")).submit();
        Assert.assertTrue("success message should be present after logging in",
                driver.findElement(By.cssSelector(".flash.success")).isDisplayed());
    }

    @After
    public void teardown() {
        driver.quit();
    }
}

In it we’re loading an instance of Firefox, visiting the login page on the-internet, inputting the username & password, submitting the form, asserting that we reached a logged in state, and closing the browser.

Now let’s add in Applitools Eyes support.

If you haven’t already done so, you’ll need to create a free Applitools Eyes account (no credit-card required). You’ll then need to install the Applitools Eyes Java SDK and import it into the test.

// filename: pom.xml

<dependency>
  <groupId>com.applitools</groupId>
  <artifactId>eyes-selenium-java</artifactId>
  <version>RELEASE</version>
</dependency>

// filename: Login.java

import com.applitools.eyes.Eyes;
...

Next, we’ll need to add a variable (to store the instance of Applitools Eyes) and modify our test setup.

// filename: Login.java
...
public class Login {

    private WebDriver driver;
    private Eyes eyes;

    @Before
    public void setup() {
        WebDriver browser =  new FirefoxDriver();
        eyes = new Eyes();
        eyes.setApiKey("YOUR_APPLITOOLS_API_KEY");
        driver = eyes.open(browser, "the-internet", "Login succeeded");
    }
...

Rather than storing the Selenium instance in the driver variable, we now store it in a local browser variable and pass it into eyes.open, storing the WebDriver object that eyes.open returns in the driver variable instead.

This way the Eyes platform will be able to capture what our test is doing when we ask it to capture a screenshot. The Selenium actions in our test will not need to be modified.

Before calling eyes.open we provide the API key (which can be found on your Account Details page in Applitools). When calling eyes.open, we pass it the Selenium instance, the name of the app we’re testing (e.g., "the-internet"), and the name of the test (e.g., "Login succeeded").

Now we’re ready to add some visual checks to our test.

// filename: Login.java
...
    @Test
    public void succeeded() {
        driver.get("http://the-internet.herokuapp.com/login");
        eyes.checkWindow("Login");
        driver.findElement(By.id("username")).sendKeys("tomsmith");
        driver.findElement(By.id("password")).sendKeys("SuperSecretPassword!");
        driver.findElement(By.id("login")).submit();
        eyes.checkWindow("Logged In");
        Assert.assertTrue("success message should be present after logging in",
                driver.findElement(By.cssSelector(".flash.success")).isDisplayed());
        eyes.close();
    }
...

With eyes.checkWindow() we specify when in the test’s workflow we’d like Applitools Eyes to capture a screenshot (along with some descriptive text). For this test we want to check the page before logging in, and then the screen just after logging in — so we call eyes.checkWindow() twice.

NOTE: These visual checks are effectively doing the same work as the pre-existing assertion (e.g., where we’re asking Selenium if a success notification is displayed and asserting on the Boolean result) — in addition to reviewing other visual aspects of the page. So once we verify that our test is working correctly we can remove this assertion and still be covered.

We end the test with eyes.close. You may feel the urge to place this in teardown, but in addition to closing the session with Eyes, it acts like an assertion. If Eyes finds a failure in the app (or if a baseline image approval is required), then eyes.close will throw an exception, failing the test. So it’s best suited to live in the test itself.

NOTE: An exception from eyes.close will include a URL to the Applitools Eyes job in your test output. The job will include screenshots from each test step and enable you to play back the keystrokes and mouse movements from your Selenium tests.

When an exception gets thrown by eyes.close, the Eyes session will close. But if an exception occurs before eyes.close can fire, the session will remain open. To handle that, we’ll need to add an additional command to our teardown.

// filename: Login.java
...
    @After
    public void teardown() {
        eyes.abortIfNotClosed();
        driver.quit();
    }
}

eyes.abortIfNotClosed() will make sure the Eyes session terminates properly regardless of what happens in the test.

Now when we run the test, it will execute locally while also performing visual checks in Applitools Eyes.

What About Other Browsers?

If we want to run our test with its newly added visual checks against other browsers and operating systems, it’s simple enough to add in Sauce Labs support.

NOTE: If you don’t already have a Sauce Labs account, sign up for a free trial account here.

First we’ll need to import the relevant classes.

// filename: Login.java
...
import org.openqa.selenium.Platform;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import java.net.URL;
...

We’ll then need to modify the test setup to load a Sauce browser instance (via Selenium Remote) instead of a local Firefox one.

// filename: Login.java
...
    @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "8");
        capabilities.setCapability("name", "Login succeeded");
        String sauceUrl = String.format(
                "http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                "YOUR_SAUCE_USERNAME",
                "YOUR_SAUCE_ACCESS_KEY");
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        driver = eyes.open(browser, "the-internet", "Login succeeded");
    }
...

We tell Sauce what we want in our test instance through DesiredCapabilities. The main things we want to specify are the browser, browser version, operating system (OS), and name of the test. You can see a full list of the available browser and OS combinations here.

In order to connect to Sauce, we need to provide an account username and access key. The access key can be found on your account page. These values get concatenated into a URL that points to Sauce’s on-demand Grid.

Once we have the DesiredCapabilities and concatenated URL, we create a Selenium Remote instance with them and store it in a local browser variable. Just like in our previous example, we feed browser to eyes.open and store the returned object in the driver variable.

Now when we run this test, it will execute against Internet Explorer 8 on Windows XP. You can see the test while it’s running in your Sauce Labs account dashboard. And you can see the images captured on your Applitools account dashboard.

A Small Bit of Cleanup

Both Applitools and Sauce Labs require you to specify a test name. Up until now, we’ve been hard-coding a value. Let’s change it so it gets set automatically.

We can do this by leveraging a JUnit TestWatcher and a public variable.

// filename: Login.java
...
import org.junit.rules.TestRule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
...
public class Login {

    private WebDriver driver;
    private Eyes eyes;
    public String testName;

    @Rule
    public TestRule watcher = new TestWatcher() {
        protected void starting(Description description) {
            testName = description.getDisplayName();
        }
    };
...

Each time a test starts, the TestWatcher’s starting method grabs the display name of the test and stores it in the testName variable.

Let’s clean up our setup to use this variable instead of a hard-coded value.

// filename: Login.java
...
    @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "8");
        capabilities.setCapability("name", testName);
        String sauceUrl = String.format(
                "http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                System.getenv("SAUCE_USERNAME"),
                System.getenv("SAUCE_ACCESS_KEY"));
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        driver = eyes.open(browser, "the-internet", testName);
    }
...

Now when we run our test, the name will automatically appear. This will come in handy with additional tests.

One More Thing

When a job fails in Applitools Eyes, it automatically returns a URL for it in the test output. It would be nice if we could also get the Sauce Labs job URL in the output. So let’s add it.

First, we’ll need a public variable to store the session ID of the Selenium job.

// filename: Login.java
...
public class Login {

    private WebDriver driver;
    private Eyes eyes;
    public String testName;
    public String sessionId;
...

Next we’ll add an additional function to TestWatcher that will trigger when there’s a failure. In it, we’ll display the Sauce job URL in standard output.

// filename: Login.java
...
    @Rule
    public TestRule watcher = new TestWatcher() {
        protected void starting(Description description) {
            testName = description.getDisplayName();
        }

        @Override
        protected void failed(Throwable e, Description description) {
            System.out.println(String.format("https://saucelabs.com/tests/%s", sessionId));
        }
    };
...

Lastly, we’ll grab the session ID from the Sauce browser instance just after it’s created.

// filename: Login.java
...
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        sessionId = ((RemoteWebDriver) browser).getSessionId().toString();
...

Now when we run our test, if there’s a Selenium failure, a URL to the Sauce job will be returned in the test output.

Expected Outcome

  • Connect to Applitools Eyes
  • Load an instance of Selenium in Sauce Labs
  • Run the test, performing visual checks at specified points
  • Close the Applitools session
  • Close the Sauce Labs session
  • Return a URL to a failed job in either Applitools Eyes or Sauce Labs

Outro

Happy Testing!

 

About Dave Haeffner: Dave is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Stop Being A Language Snob: Debunking The ‘But Our Application Is Written In X’ Myth [Guest Post]

February 6th, 2015 by Adam Goucher

If there is one myth in the [browser] automation world that drives me crazy, it is that browser automation scripts need to be written in the same language as the application. It seems like that should be a Good Idea in principle, but in reality it is actually responsible for a lot of ‘failed’ automation efforts.

Let’s choose a language to pick on. How about C# using ASP MVC? It has a large user base (especially in the enterprise space) and a pretty mature stack to use. (We could have picked any language…)

So now we have a nice ASP MVC application that we think is going to solve some customer’s burning needs and of course it’s nicely unit tested because you are doing some variant of TDD/BDD. Your browser automation scripts should naturally be written in C#, right?

No.

Well, actually, ‘maybe’. (more…)

Application Security Testing Gets Tasty With Sauce Labs And NT OBJECTives

December 15th, 2014 by Amber Kaplan

Finally, a win-win-win for development, QA, and security! If your development team is looking for ways to incorporate security earlier that are simple, easy, and understandable for your team, we may have a solution for you. Security defects are like any other defect. Finding them early saves money and time. There are tools that execute security tests for security professionals – like NT OBJECTives’ NTOSpider. NTOSpider can use the application knowledge defined in Selenium scripts to execute a better, more comprehensive security test on an application. (more…)

Weekend Reading: Becoming The Leader You Aspire To Be [Re-Blog]

December 12th, 2014 by Amber Kaplan

Congrats to our VP of Engineering, Adam Christian! His Velocity Conference presentation, “The Black Magic of Leadership,” was featured on the Slideshare blog as one of three best leadership decks.  Check out the original post and presentations here or below. (more…)

Re-Blog: CI & CD With Docker, Beanstalk, CircleCI, Slack, & Gantree

December 10th, 2014 by Amber Kaplan


This is a follow-up post to a series highlighting Bleacher Report’s continuous integration and delivery methodology by Felix Rodriguez. To find out how they were previously handling their stack, visit the first, second, and third posts from June 2014. 

There is definitely a huge Docker movement going on in the dev world right now and not many QA Engineers have gotten their hands dirty with the technology yet. What makes Docker so awesome is the ability to ship a container and almost guarantee its functionality. (more…)

Re-Blog: Add Some Sauce To Your IE Tests

September 4th, 2014 by Amber Kaplan

Sauce Labs hearts ThoughtWorks! And apparently the feeling’s mutual. Check out this great blog post mentioning Sauce Labs by Tom Clement Oketch.

See an excerpt below:

(more…)

Appium Bootcamp – Chapter 6: Run Your Tests

August 7th, 2014 by Amber Kaplan

This is the sixth post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. Read: Chapter 1 | Chapter 2 | Chapter 3 | Chapter 4 | Chapter 5 | Chapter 6 | Chapter 7 | Chapter 8

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the sixth of eight posts; two new posts will be released each week.

Now that we have our tests written, refactored, and running locally it’s time to make them simple to launch by wrapping them with a command-line executor. After that, we’ll be able to easily add in the ability to run them in the cloud.

Quick Setup

appium_lib comes pre-wired with the ability to run our tests in Sauce Labs, but we’re still going to need two additional libraries to accomplish everything: rake for command-line execution, and sauce_whisk for some additional tasks not covered by appium_lib.

Let’s add these to our Gemfile and run bundle install.

# filename: Gemfile

source 'https://rubygems.org'

gem 'rspec', '~> 3.0.0'
gem 'appium_lib', '~> 4.0.0'
gem 'appium_console', '~> 1.0.1'
gem 'rake', '~> 10.3.2'
gem 'sauce_whisk', '~> 0.0.13'

Simple Rake Tasks

Now that we have our requisite libraries let’s create a new file in the project root called Rakefile and add tasks to launch our tests.

# filename: Rakefile

desc 'Run iOS tests'
task :ios do
  Dir.chdir 'ios'
  exec 'rspec'
end

desc 'Run Android tests'
task :android do
  Dir.chdir 'android'
  exec 'rspec'
end

Notice that the syntax in this file reads a lot like Ruby — that’s because it is (along with some Rake specific syntax). For a primer on Rake, read this.

In this file we’ve created two tasks. One to run our iOS tests, and another for the Android tests. Each task changes directories into the correct device folder (e.g., Dir.chdir) and then launches the tests (e.g., exec 'rspec').

If we save this file and run rake -T from the command-line, we will see these tasks listed along with their descriptions.

> rake -T
rake android  # Run Android tests
rake ios      # Run iOS tests

If we run either of these tasks (e.g., rake android or rake ios), they will execute the tests locally for each of the devices.

Running Your Tests In Sauce

As I mentioned before, appium_lib comes with the ability to run Appium tests in Sauce Labs. We just need to specify a Sauce account username and access key. To obtain an access key, you first need to have an account (if you don’t have one you can create a free trial one here). After that, log into the account and go to the bottom left of your dashboard; your access key will be listed there.

We’ll also need to make our apps available to Sauce. This can be accomplished either by uploading the app to Sauce or by making the app available from a publicly accessible URL. The former approach is easy enough to accomplish with the help of sauce_whisk.

Let’s go ahead and update our spec_helper.rb to add in this new upload capability (along with a couple of other bits).

# filename: common/spec_helper.rb

require 'rspec'
require 'appium_lib'
require 'sauce_whisk'

def using_sauce
  user = ENV['SAUCE_USERNAME']
  key  = ENV['SAUCE_ACCESS_KEY']
  user && !user.empty? && key && !key.empty?
end

def upload_app
  storage = SauceWhisk::Storage.new
  app = @caps[:caps][:app]
  storage.upload app

  @caps[:caps][:app] = "sauce-storage:#{File.basename(app)}"
end

def setup_driver
  return if $driver
  @caps = Appium.load_appium_txt file: File.join(Dir.pwd, 'appium.txt')
  if using_sauce
    upload_app
    @caps[:caps].delete :avd # re: https://github.com/appium/ruby_lib/issues/241
  end
  Appium::Driver.new @caps
end

def promote_methods
  Appium.promote_singleton_appium_methods Pages
  Appium.promote_appium_methods RSpec::Core::ExampleGroup
end

setup_driver
promote_methods

RSpec.configure do |config|

  config.before(:each) do
    $driver.start_driver
  end

  config.after(:each) do
    driver_quit
  end

end

Near the top of the file we pull in sauce_whisk. We then add in a couple of helper methods (using_sauce and upload_app). using_sauce checks to see if Sauce credentials have been set properly. upload_app uploads the application from local disk and then updates the capabilities to reference the path to the app on Sauce’s storage.

We put these to use in setup_driver by wrapping them in a conditional to see if we are using Sauce. If so, we upload the app. We’re also removing the avd capability since it will cause issues with our Sauce run if we keep it in.

Next we’ll need to update our appium.txt files so they’ll play nice with Sauce.

 

# filename: android/appium.txt

[caps]
appium-version = "1.2.0"
deviceName = "Android"
platformName = "Android"
platformVersion = "4.3"
app = "../../../apps/api.apk"
avd = "training"

[appium_lib]
require = ["./spec/requires.rb"]

# filename: ios/appium.txt

[caps]
appium-version = "1.2.0"
deviceName = "iPhone Simulator"
platformName = "ios"
platformVersion = "7.1"
app = "../../../apps/UICatalog.app.zip"

[appium_lib]
require = ["./spec/requires.rb"]

In order to work with Sauce we need to specify the appium-version and the platformVersion. Everything else stays the same. You can see a full list of Sauce’s supported platforms and configuration options here.

Now let’s update our Rake tasks to be cloud aware. That way we can specify at run time whether to run things locally or in Sauce.

desc 'Run iOS tests'
task :ios, :location do |t, args|
  location_helper args[:location]
  Dir.chdir 'ios'
  exec 'rspec'
end

desc 'Run Android tests'
task :android, :location do |t, args|
  location_helper args[:location]
  Dir.chdir 'android'
  exec 'rspec'
end

def location_helper(location)
  if location != 'sauce'
    ENV['SAUCE_USERNAME'], ENV['SAUCE_ACCESS_KEY'] = nil, nil
  end
end

We’ve updated our Rake tasks so they can take an argument for the location. We then pass this argument value to location_helper. The location_helper looks at the location value; if it is not set to 'sauce', the Sauce credentials get set to nil. This helps us ensure that we really do want to run our tests on Sauce (e.g., we have to specify both the Sauce credentials AND the location).

Now we can launch our tests locally just like before (e.g., rake ios) or in Sauce by specifying it as a location (e.g., rake ios['sauce'])

But in order for the tests to fire in Sauce Labs, we need to specify our credentials somehow. We’ve opted to keep them out of our Rakefile (and our test code) so that we can maintain future flexibility by not having them hard-coded, which is also more secure since we won’t be committing them to our repository.

Specifying Sauce Credentials

There are a few ways we can go about specifying our credentials.

Specify them at run-time

SAUCE_USERNAME=your-username SAUCE_ACCESS_KEY=your-access-key rake ios['sauce']

Export the values into the current command-line session

export SAUCE_USERNAME=your-username
export SAUCE_ACCESS_KEY=your-access-key

Set the values in your bash profile (recommended)

# filename: ~/.bash_profile

...
export SAUCE_USERNAME=your-username
export SAUCE_ACCESS_KEY=your-access-key

After choosing a method for specifying your credentials, run your tests with one of the Rake tasks and specify 'sauce' for the location. Then log into your Sauce account to see the test results and a video of the execution.

Making Your Sauce Runs Descriptive

It’s great that our tests are now running in Sauce. But it’s tough to sift through the test results since the name and test status are nondescript and all the same. Let’s fix that.

Fortunately, we can dynamically set the Sauce Labs job name and test status in our test code. We just need to provide this information before and after our test runs. To do that we’ll need to update the RSpec configuration in common/spec_helper.rb.

 

# filename: common/spec_helper.rb

...
RSpec.configure do |config|

  config.before(:each) do |example|
    $driver.caps[:name] = example.metadata[:full_description] if using_sauce
    $driver.start_driver
  end

  config.after(:each) do |example|
    if using_sauce
      SauceWhisk::Jobs.change_status $driver.driver.session_id, example.exception.nil?
    end
    driver_quit
  end

end

In before(:each) we update the name attribute of our capabilities (e.g., caps[:name]) with the name of the test. We get this name by tapping into the test’s metadata (e.g., example.metadata[:full_description]). And since we only want this to run if we’re using Sauce we wrap it in a conditional.

In after(:each) we leverage sauce_whisk to set the job status based on the test result, which we get by checking to see if any exceptions were raised. Again, we only want this to run if we’re using Sauce, so we wrap it in a conditional too.

Now if we run our tests in Sauce we will see them execute with the correct name and job status.

Outro

Now that we have local and cloud execution covered, it’s time to automate our test runs by plugging them into a Continuous Integration (CI) server.

Read: Chapter 1 | Chapter 2 | Chapter 3 | Chapter 4 | Chapter 5 | Chapter 6 | Chapter 7 | Chapter 8

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter – @tourdedave

[Re-Blog] Dev Chat: Vlad Filippov of Mozilla

July 28th, 2014 by Amber Kaplan

Last week Sauce Labs’ Chris Wren took a moment to chat with Vlad Filippov of Mozilla on his blog. Topics covered all things open source and front-end web development, so we thought we’d share. Click the image below to read the full interview, or just click here.

Dev Chat: Vlad Filippov of Mozilla

 

How HotelTonight.com Leverages Appium for Mobile Test Automation

July 1st, 2014 by Amber Kaplan

We love this blog post written by Quentin Thomas at HotelTonight! In it, he explains how they use Appium to automate their mobile tests. He also walks readers through specifics, such as the RSpec config helper. Read a snippet below.

Thanks to the engineers at Sauce Labs, it is now possible to tackle the mobile automation world with precision and consistency.

Appium, one of the newest automation frameworks introduced to the open source community, has become a valuable test tool for us at HotelTonight. The reason we chose this tool boils down to Appium’s philosophy.

“Appium is built on the idea that testing native apps shouldn’t require including an SDK or recompiling your app. And that you should be able to use your preferred test practices, frameworks, and tools”.

-Quentin Thomas, HotelTonight, June 17, 2014

To read the full post with code, click here. You can follow Quentin on Twitter at @TheQuengineer.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Bleacher Report’s Continuous Integration & Delivery Methodology: Test Analytics

June 24th, 2014 by Amber Kaplan

This is the final post in a three part series highlighting Bleacher Report’s continuous integration and delivery methodology by Felix Rodriguez.  Read the first post here and the second here.

Last week we discussed setting up an integration testing server that allows us to POST, which then kicks off a suite of tests. Now that we are storing all of our suite runs and individual tests in a postgres database, we can do some interesting things – like track trends over time. At Bleacher Report we like to use a tool named Librato to store our metrics, create sweet graphs, and display pretty dashboards. One of the metrics that we record on every test run is our PageSpeed Insights score.

PageSpeed Insights

PageSpeed Insights is a tool provided by Google developers that analyzes your web or mobile page and gives you an overall rating. You can use the website to get a score manually, but instead we hooked into their API in order to submit our page-visit score to Librato. Each staging environment is recorded separately so that if any of them return measurements that are off, we can attribute this to a server issue.

average page speeds

Any server that shows an extremely high rating is probably only loading a 500 error page. A server that shows an extremely low rating is probably some new, untested JS/CSS code we are running on that server.

Below is an example of how we submit a metric using Cukebot:

generic_steps.rb

require_relative 'lib/pagespeed'
Given(/^I navigate to "(.*?)"$/) do |path|
  visit path
  pagespeed = PageSpeed.new(current_url)
  ps = pagespeed.get_results
  score = ps["score"]
  puts "Page Speed Score is: #{score}"
  metric = host.gsub(/http\:\/\//i,"").gsub(/\.com\//,"") + "_speed"
  begin
    pagespeed.submit(metric,score)
  rescue
    puts "Could not send metric"
  end
end

lib/pagespeed.rb

require 'net/https'
require 'json'
require 'uri'
require 'librato/metrics'

class PageSpeed
  def initialize(domain,strategy='desktop',key=ENV['PAGESPEED_API_TOKEN'])
    @domain = domain
    @strategy = strategy
    @key = key
    @url = "https://www.googleapis.com/pagespeedonline/v1/runPagespeed?url=" + \
      URI.encode(@domain) + \
      "&key=#{@key}&strategy=#{@strategy}"
  end

  def get_results
    uri = URI.parse(@url)
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE
    request = Net::HTTP::Get.new(uri.request_uri)
    response = http.request(request)
    JSON.parse(response.body)
  end

  def submit(name, value)
    Librato::Metrics.authenticate "ops@bleacherreport.com", ENV['LIBRATO_TOKEN']
    Librato::Metrics.submit name.to_sym  => {:type => :gauge, :value => value, :source => 'cukebot'}
  end
end

 

Google’s PageSpeed Insights returns relatively quickly, but as you start recording more metrics on each visit command (to get results for both desktop and mobile, say), we suggest building a separate service that runs the desired performance test as a POST, or at least running it in its own thread. This keeps the check from blocking the test run or causing a test to run long. Which brings us to our next topic.
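If you go the in-thread route, a small helper can keep a slow check from ever blocking the scenario. This is just a sketch (run_async is our own illustrative name, not part of the Cukebot code):

```ruby
# Hypothetical helper: run a slow check (like a PageSpeed lookup) in a
# background thread so the scenario can keep moving. Any error is swallowed
# and logged, mirroring the begin/rescue in the step definition above.
def run_async
  Thread.new do
    begin
      yield
    rescue StandardError => e
      warn "Background check failed: #{e.message}"
      nil
    end
  end
end
```

You’d kick it off with something like `check = run_async { PageSpeed.new(current_url).get_results['score'] }` and call `check.value` only if and when you actually need the result.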

Tracking Run Time

With Sauce Labs, you are able to quickly spot a test that takes a long time to run. But when you’re running hundreds of tests in parallel, all the time, it’s hard to keep track of the ones that normally take a long time to run versus the ones that have only recently started to take an abnormally long time to run. This is why our Cukebot service is so important to us.

Now that each test run is stored in our database, we grab the information Sauce stores for run time length and store it with the rest of the details from that test. We then submit that metric to Librato and track over time in an instrument. Once again, if all of our tests take substantially longer to run on a specific environment, we can use that data to investigate issues with that server.

To do this, we take advantage of Cucumber’s before/after hooks to grab the time it took for the test to run in Sauce (or track it ourselves) and submit to Librato. We use the on_exit hook to record the total time of the suite and submit that as well.
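As a rough sketch of what that wiring could look like (the RunTimer class and metric names are our own inventions for illustration; a submit_metric helper would wrap Librato::Metrics.submit the same way PageSpeed#submit does above):

```ruby
# Hypothetical timing helper. Cucumber's Before/After hooks would call
# start_scenario/scenario_elapsed, and at_exit would report suite_elapsed.
class RunTimer
  def initialize
    @suite_started_at = Time.now
  end

  # Turn "Login succeeded" into a Librato-friendly key like
  # "login_succeeded_run_time".
  def metric_name(scenario_name)
    scenario_name.downcase.gsub(/\W+/, '_') + '_run_time'
  end

  def start_scenario
    @scenario_started_at = Time.now
  end

  def scenario_elapsed
    Time.now - @scenario_started_at
  end

  def suite_elapsed
    Time.now - @suite_started_at
  end
end

# Wiring (in features/support/env.rb), assuming a submit_metric helper
# that wraps Librato::Metrics.submit:
#
#   timer = RunTimer.new
#   Before  { timer.start_scenario }
#   After   { |scenario| submit_metric(timer.metric_name(scenario.name),
#                                      timer.scenario_elapsed) }
#   at_exit { submit_metric('suite_run_time', timer.suite_elapsed) }
```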

Test Pass/Fail Analytics

To see trends over time, we’d also like to measure our pass/fail percentage for each individual test on each separate staging environment as well as our entire suite pass/fail percentage. This would allow us to notify Ops about any servers that need to get “beefed up” if we run into a lot of timeout issues on that particular setup. This would also allow us to quickly make a decision about whether we should proceed with a deploy or not when there are failed tests that pass over 90% of the time and are currently failing.

The easiest way to achieve this is to use the Cucumber after-hook to query the postgres database for the total passed test runs on the current environment in the last X days, divide that by the total test runs on the current environment in the same period to generate a percentage, store it, and then track it over time to analyze trends.
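For illustration only (the test_runs table, its columns, and the helper name are assumptions, not the actual Cukebot schema), the after-hook’s query and math might boil down to:

```ruby
# Hypothetical query the after-hook could run against postgres:
#
#   SELECT
#     SUM(CASE WHEN passed THEN 1 ELSE 0 END) AS passed_runs,
#     COUNT(*)                                AS total_runs
#   FROM test_runs
#   WHERE environment = 'staging-1'
#     AND created_at > NOW() - INTERVAL '30 days';
#
# The two counts then become a percentage to store and trend:
def pass_percentage(passed_runs, total_runs)
  return 0.0 if total_runs.zero?
  ((passed_runs.to_f / total_runs) * 100).round(2)
end
```

A test that passed 93 of its last 100 runs on an environment scores 93.0, comfortably above the 90% bar mentioned above.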

Summary:

Adding tools like these will allow you to look at a dashboard after each build and give your team the confidence to know that your code is ready to be released to the wild.

Running integration tests continuously used to be our biggest challenge. Now that we’ve finally arrived at the party, we’ve noticed that there are many other things we can automate. As our company strives for better product quality, this pushes our team’s standards with regard to what we choose to ship.

One tool we have been experimenting with and would like to add to our arsenal of automation is Blitz.io. So far we have seen great things from them and have caught a lot of traffic-related issues we would have missed otherwise.

Most of what I’ve talked about in this series has been done, but some is right around the corner from completion. If you believe we can enhance this process in anyway, I would greatly appreciate any constructive criticism via my twitter handle @feelobot. As Sauce says, “Automate all the Things!”

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.