[Live Panel] Mobile Testing with Appium: Recap

June 17th, 2014 by Amber Kaplan

‘Appy Tuesday! As you may remember, battle-tested Appium 1.0 [Orion] was released on May 2 right here at Sauce Labs HQ. Following the release, we held a live, panel-style webinar with core Appium committers Jonathan Lipps, Matthew Edwards, and Dan Cuellar so they could answer questions about Appium’s philosophy, the roadmap ahead, and best practices for automating mobile tests, and deliver an overview of the release.

To listen to the webinar recording, click here. You can also check out the webinar slides below.

For a quick tour of what’s new with Orion (version 1.0), click here. We’ve also included some follow-up questions, with answers from the Appium team, below.

Happy Testing!


Follow-up Q&A


Why Appium? What distinguishes it from the competition?

The main advantage of Appium is that it’s cross-platform: it works on iOS, Android, Firefox OS, and more. You can test in any language, and you don’t have to modify your app. Plus, you can use the Selenium protocol you’re already familiar with from web testing. On the nice-to-have side, Appium has a great community around it, and it’s free and open source.

Here’s a quick look at the competition currently:

  • Instruments – One language, everything scripted in advance, and it can’t talk to outside frameworks.
  • uiautomator – Works well, but tests must be written in Java and are tightly integrated with the app. Appium adds more advanced features on top, such as changing the emulator language or XPath locators.
  • ios-driver – Main disadvantage is a small community; it’s not widely used.
  • Selendroid – A great project that Appium supports. It’s based on Android Instrumentation and works on older Android devices, but has more limited capabilities compared to uiautomator.
  • KIF – Requires modifying your app by writing Objective-C into it. Rule #1 is don’t mess with the code; it takes you one degree of separation from what customers are going to see. There are some advantages to doing that: the tighter integration makes some types of testing possible that Appium can’t do.
  • MonkeyTalk – Low level; it doesn’t offer the richness of Appium.

Does Appium work with mobile web and native apps?

Appium’s vision is to help automate everything mobile, including mobile web apps and native apps. Appium supports Safari on iOS and Chrome on Android for mobile web testing, and we are open to supporting other browsers.

Does it work with Sauce Labs? How do I get started with Appium? This is the first seminar I am attending about this.

Appium works on Sauce Labs. To get started, check out the documentation, tutorials, and sample code.

How should you handle flaky network connection simulation?

There definitely are tools for this, mostly proxies that change what your computer is doing on the network:

  • Linux/OS X – ipfw, a firewall rule tool
  • Windows – Fiddler, which lets you write rules in JavaScript

Appium itself also talks over TCP, so don’t apply your throttling rules to ports 4444 or 4723.

Automating pre installed apps?

Android works well here. uiautomator lets you test all apps on the device, on both emulators and real devices, and you can even move back and forth between apps.

On iOS, you can test built-in apps such as Settings only on the simulator. On real devices, Apple limits you to testing only your own app.

Emulators vs real devices: which is better for testing?

Watch the GTAC presentation here
Check out how Google does their mobile testing here

Then ask yourself, what are you writing your automation for?

  • If it’s for business logic, simulators and emulators are probably fine.
  • If it’s for performance and crashes, consider real devices.

Overall, however, simulators and emulators are a highly effective way to save a lot of money.

I’m joining a new company. How can I convince them to buy into Appium? 

You can use any language and test framework, and it’s very easy to take existing Selenium test infrastructure and use it for a proof-of-concept set of Appium test cases. It’s the fastest way to get started with mobile automation, and it leverages the languages and test frameworks you already use. Write a few small test cases, then show them off as a demo. When people see Appium running for the first time, it’s a pretty amazing experience.

What are the desired capabilities of Appium (versus Selenium)?

The capabilities were updated to match the mobile version of the JSON Wire Protocol. Selenium wasn’t designed for mobile, so we worked with the Selenium project to define a new set of capabilities that support mobile.

  • platformName – iOS or Android
  • platformVersion – e.g. 7.1 or 4.4
  • deviceName – the kind of device you’re targeting, e.g. iPhone Simulator or Nexus S

This is all documented in the Appium 1.0 migration guide.
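
As a rough sketch, here is how those capabilities might look with the Ruby bindings (appium_lib); the app path and server URL are placeholders for your own setup, and the Android values in the comment are illustrative.

require 'appium_lib'

# Hypothetical iOS example; for Android you would swap in values such as
# platformName: 'Android', platformVersion: '4.4', deviceName: 'Nexus S'.
caps = {
  caps: {
    platformName:    'iOS',
    platformVersion: '7.1',
    deviceName:      'iPhone Simulator',
    app:             '/path/to/MyApp.app'        # placeholder path to a simulator build
  },
  appium_lib: {
    server_url: 'http://127.0.0.1:4723/wd/hub'   # local Appium server
  }
}

driver = Appium::Driver.new(caps).start_driver
# ... run your tests ...
driver.quit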

If the same app is written for both platforms, how can I ensure a 100% cross-platform test?

The idea of writing one test for both platforms is the holy grail, but it requires the apps to be extremely similar. Even with Xamarin and the same language on both platforms, it depends on how similar the app is on each.

It’s not about reaching 100%. Instead, keep a small core that’s different, and reuse the entire ecosystem around the test code. Everything can be the same except for a single definition file that encapsulates the differences.
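
One way to picture that single definition file, sketched in Ruby; the screen, locators, and locator strategies here are hypothetical, not from the webinar.

# Only the locators differ per platform; the page object and tests stay shared.
module Locators
  ANDROID = { login_button: [:id, 'com.example:id/login'] }
  IOS     = { login_button: [:accessibility_id, 'Login'] }  # strategy added by the Appium bindings
end

class LoginScreen
  def initialize(driver, platform)
    @driver   = driver
    @locators = platform == :android ? Locators::ANDROID : Locators::IOS
  end

  def tap_login
    strategy, value = @locators[:login_button]
    @driver.find_element(strategy, value).click
  end
end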

Does Appium have significant advantages over webdriver for testing web apps on mobile devices?

Yes! WebDriver alone doesn’t provide drivers for mobile web apps or native apps, so a tool like Appium is currently the only way to automate them.

Interestingly, the Selenium project recommends that you use a project like Appium for this kind of automation. When you’re automating web apps with Appium, it’s just as though you’re using Selenium, since there’s no difference in the code you write. Appium simply provides the backend instead of the Selenium jar.

What is a short explanation as to what happens when an external application/activity is launched on Android/iOS?

It’s different on each platform. On Android, it’ll be as if the user launched it.

iOS has a sandboxed environment: if your app opens a third-party app, the automation session will be shut down on you. That’s something only Apple can fix.

What is the release schedule for Appium?

The release process is becoming more structured. The GitHub milestone tracker is being used, with estimated dates; those dates will change.

The project planning is based around this list of issues assigned to a milestone. Feedback is used to prioritize.

How do new versions of Android/iOS impact Appium?

We’re trying to get better at using the beta. For example, we were using iOS 7.1 beta months before it was officially released. We’re able to anticipate issues before release and quickly add official support.

If you try Appium on a new release of Android/iOS, make sure to report issues.

What are Mavericks issues with Appium?

The latest OS X and Xcode release works well with Appium. If there are issues, make sure to open issues on GitHub.

How do I execute parallelization with Appium?

Use Sauce Labs to run in parallel on iOS. Sauce handles the virtualization of OS X for you. You can either maintain a bunch of OS X machines or let Sauce handle it for you.

With Android, it’s easier to run multiple emulators on your own system. If you’re looking to scale, then Sauce Labs is a big win.

Can you have multiple devices connected to one Appium server?

You can run one Appium server per device, on different ports. On iOS, it’s only one device per system due to Apple limitations.

Does Appium support Ruby, Python, C#, … etc?

Yes.

Appium is an open source tool, but is there a support team that we can contact directly in case of issues?

Appium support is similar to how Selenium is supported. There’s a discussion group for questions and an issue tracker on GitHub for reporting bugs. If you have a commercial contract with a company that supports appium, such as Sauce Labs, then they have their own support channels.

Does Appium have the same approach as WebDriver/page objects?

Yes exactly. If you know how to write webdriver code and use page objects, then they’re applied in the same way. All of the concepts transfer.

Are there any hard button clicks supported in the latest Appium versions for iOS devices? I hope Android devices support this.

Appium makes the full UI Automation JavaScript API available, as provided by Apple. You can click buttons within the app. For special buttons such as Home, there are workarounds. If you have a specific button in mind, I recommend asking on the Appium discussion list.

Android has full support for clicking all buttons; the underlying methods are listed in the UiDevice documentation. To click the back button, for example, the standard Selenium command is used. For arbitrary keys, there’s keyevent, which is being renamed to press_keycode.
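
In the Ruby bindings that might look like the sketch below; method names were in flux around the keyevent/press_keycode rename, so treat it as illustrative.

driver.navigate.back   # standard Selenium back command, maps to Android's back button
keyevent 4             # appium_lib helper; 4 is Android's KEYCODE_BACK
# newer bindings rename the helper:
# press_keycode 4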

Are there plans to add OCR or image recognition support to appium?

There’s an open issue for FindByImage. Appium supports taking screenshots so you’re free to perform any image recognition based on that data.

Are there any tutorials available as to how to use an existing test automation system (in my case rspec/capybara) with appium?

Appium has code samples on GitHub that include an rspec example. There’s also a tutorial which covers Ruby and Java on appium.io. Documentation is available.

Are you considering hosting a conference about Appium in the near future? Say, 3 months down the road. It would be nice to have hands-on workshops, etc.

If you’re interested in an Appium conference, read this post on the discussion list.

Are you planning to have scheduled releases, like every quarter or every six months, so we know when to expect Appium releases?

Upcoming appium releases along with estimated dates are tracked as milestones on GitHub. For the latest bug fixes, running from source is an option.

Can appium test cases run in parallel on different devices?

Running tests in parallel works fine for Android. On iOS, Apple has limited automation to one device at a time. The best way to run at scale with Android emulators and iOS simulators is to set up a Selenium grid using virtual machines. Sauce Labs provides the easiest way to run tests in parallel on Android and iOS.

Is automation possible without having the Xcode of an app?

On iOS, you’re expected to have Xcode installed along with a simulator build of the app. A non-simulator build will not work on the iOS simulator. If Xcode isn’t installed then the automation will not work on iOS.

Can I use Appium to test web apps on mobile devices?

Yes. Appium fully supports testing web apps on mobile devices using Chrome on Android and Safari on iOS.

Can we use web application scripts used for Automation in Appium for native app as well?

Reusing test code is going to be a challenge when comparing web automation to native app automation. If you follow the page object pattern, then it’s entirely possible to share some code between the web and native app. Common technology such as test language, results reporting, and parallelization strategy can all be the same.

Can we do gestures on webview for Appium and not use webdriver since gestures are broken for iOS and Android?

The APIs for advanced gestures are currently limited to native apps. Standard WebDriver commands should work inside webviews for gestures.

Can you please provide a short explanation as to what happens when an external application/activity is launched during an Appium session (e.g., the contacts activity in android, or maps in iOS).

The idea behind UI automation is to test the app as a user would. The simple answer is that it’s the same as if a user installed the app and then launched it. For more details, check out the debug log of the Appium server, as it contains exactly what’s going on.

Do I need to be an expert in Selenium WebDriver to effectively use Appium? In other words, is Selenium WebDriver experience a prerequisite for using Appium?

You need to learn WebDriver to use Appium effectively because that’s how Appium works. If you’re already familiar with uiautomator and UI Automation, then it’s possible to use Appium without understanding the details of WebDriver.

If you look at the sample code, the amount of WebDriver knowledge required is not at the expert level.

Do you have known issues when there are multiple devices connected to the same Appium server? We use 6 devices connected to the same Appium server.

To test with more than one Android device locally, you need to have one Appium server per device. For iOS, Apple limits automation to one device at a time. Sauce Labs enables parallel iOS automation by using OS X in a virtual machine.

Does Appium allow one to automate render.js webapps on mobile web for iOS and Android?

Yes, Appium supports automating web apps on iOS and Android.

Does Appium have significant advantages over WebDriver for testing straight up web apps on mobile devices?

Appium is one of the only ways available today to use WebDriver to test mobile apps on mobile devices. WebDriver alone doesn’t support mobile.

For Android, Appium doesn’t detect element IDs all the time, so I’ve had to use uiautomatorviewer. But uiautomatorviewer fails while Appium is running. Any tips?

Make sure to end any existing Android automation sessions before using uiautomatorviewer. Once you have identified the elements using the viewer tool, then you can update the tests and run them via Appium.

Another workflow is to use the ruby_console to dynamically identify the elements and then interact with them before updating the test code.
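
A rough sketch of that console workflow (via the appium_console gem; the element text is hypothetical):

# $ arc                  # start the Appium Ruby console against a running Appium server
page                     # dump the elements Appium can see on the current screen
find('Sign In').click    # experiment interactively, then copy the working calls into your tests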

Given that I have an app which makes online data exchanges through APIs, and as I want to write reproducible functional tests with Appium, how can I set up an environment to mock my app’s webservices calls in my testing scenarios?

Mocking an app’s webservice calls is a general testing issue and is unrelated to appium. I suggest researching appropriate mocking solutions for your language to see how others have accomplished this.

I have a good working knowledge of Selenium WebDriver 2.0 and of using Page Objects. Does Appium also have the same approach, since Appium uses the JSON Wire Protocol behind the scenes?

Yes, the same approach is used.

Hi, I understand that Appium creates only one session for iOS, as Apple’s UI Automation does not support multiple iOS devices in parallel, whereas on Android multiple devices are supported and hence each session maps to one device. Is that correct?

Yes, running in parallel is currently limited to Android due to Apple’s limitations. A workaround is to use Sauce Labs, as they overcome the one-device-per-machine limit via virtualization.

How is the support for scrolling going to improve? Currently it is very flaky to scroll up or down.

Writing scrolling code properly is a challenge. I’ve found that using explicit waits helps resolve flakiness. On iOS it’s a bit easier because automating invisible elements is possible; on Android, we’re limited to visible elements. I use complex_find on Android to scroll to elements. A better, easier-to-use way is coming soon in Appium.
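
As a small example of the explicit-wait approach with the Ruby bindings (the element label and the accessibility_id strategy are assumptions for illustration):

# Wait up to 30 seconds for the element to show up before interacting with it.
wait = Selenium::WebDriver::Wait.new(timeout: 30)
element = wait.until { driver.find_element(:accessibility_id, 'Settings') }
element.click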

Please send some examples of  the multi action gesture API role in Appium?

Check out the unit tests for the various Appium client bindings to see examples of the gesture api.

How do I handle situations where Appium just fails with a JavaScript error (for example, sendKeys throwing a JS error on line 68), or where my R&D team says Appium was not able to get the page source, possibly because of a lot of data?

Please report the issue on GitHub along with a way to reproduce the failure. On iOS I’m aware of a similar problem that’s being looked into. If the Appium devs can reproduce the problem then it’s much easier to fix.

How do I make static text (e.g. labels) appear in Appium inspector?

Making your app properly accessible, such as adding accessibility labels, is well documented by Google and Apple for Android and iOS respectively. I suggest consulting that documentation. Once your app follows the guidelines, the labels will be visible to Appium for automation.

How transparent would it be to associate Android and iOS devices/VMs/Emulators with our Selenium Grid infrastructure to use automated tests using Appium?

It’s entirely possible to set up local infrastructure. In my experience, this involves a lot of work so it’s better to use a service provider such as Sauce Labs.

I have seen users reporting lots of issues on Mavericks using Appium. How does the Appium project plan to address all these issues?

I have been using Appium on Mavericks without any issues. Make sure to use the newest version of OS X, Xcode, and Appium.

Is it possible to switch between native app and Safari when executing the workflow?

On iOS, this isn’t possible. The automation session is restricted to the app context and switching out will end the session. For Android, you could switch between the app and the Chrome browser without issue.

I use Firefox’s Responsive Web & UserAgent override to inspect mobile web objects. Is there anything on the roadmap to inspect web objects directly from Appium’s Inspector?

I recommend using the Chrome DevTools instead of waiting for the Appium Inspector to support web apps. The existing browser tools already work well for this purpose.

I’d love your recommendation for the best emulator for both iPhone and Android

This is highly specific to the business requirements of the app under test. I test on Android using a Nexus 7 and on iOS using the iPhone Simulator. Sauce Labs supports a bunch of configurations, so it’s up to the tester to decide what screen sizes and devices make sense.

I’m working on a mobile website project. Will Appium work for just mobile web, or is it specifically for hybrid and native apps?

Appium works well for mobile web, hybrid, and native apps. It’s not exclusive to any one of them.

If someone wanted to contribute to the most immediate need, what would that be? Features? Documentation? Quick Start Guide? Specific Language Usage Examples?

The most immediate need is documentation. The documentation files are hosted on GitHub and pull requests are welcome. Contributions in other areas, such as features, sample code, and language-specific usage examples, are valuable too.

If we want to be able to fully automate testing, we need to be able to respond to system prompts, like those that appear to allow the app to use location services or access contacts. Our experience with Appium thus far is that there’s no means to respond. Can you elaborate?

I suggest asking this question on the discussion list. I’m pretty sure that someone has figured out a work around.

In the future, will Appium support running multiple mobile test cases on simulators/emulators and real devices in parallel, instead of running tests one after another? 

That’s already supported on Android. For iOS, Apple does not allow it. I suggest using a solution provider such as Sauce Labs that uses VMs to bypass this Apple limitation.

The Appium Inspector for Windows is quite flaky. Also, Selendroid no longer gives bounds/size in the page source.

For Selendroid, try their dedicated inspector tool.

Is Appium 1.0.0 stable for older Android versions like 4.2.x?

Appium works well on older Android versions via Selendroid. uiautomator with Appium is available for API 17 and above.

Is Appium mainly for functional testing or integration testing? My company’s mobile device hits a webservice which returns a SQLite database, which we need to inspect for accuracy, once it’s been returned. Is this possible with Appium, even partially?

Appium is meant for end-to-end UI testing, similar to how WebDriver is used to automate browsers. The situation described seems better suited to a web service unit test. I don’t see how a user of the app would inspect a SQLite database, so Appium doesn’t seem appropriate.

Is it always required that the Android app I’m testing have accessibility features (to locate the elements)? Or can I use the simulator/recorder to locate those elements and record tests for any app?

Accessibility features are not required. You can automate the app anyway; however, the other locator strategies will be brittle. The benefit of accessibility is that the labels remain constant over the life cycle of the app. As the app changes, it’s better if tests continue working without having to update the locators.
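
A hedged illustration of the difference (the locator values are made up):

# Brittle: an XPath tied to the current layout breaks when the layout shifts.
driver.find_element(:xpath, '//android.widget.LinearLayout[2]/android.widget.Button[1]')

# More stable: keyed to an accessibility label the app sets explicitly
# (the accessibility_id strategy comes from the Appium client bindings).
driver.find_element(:accessibility_id, 'Submit Order')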

Is there a plan to support testing on multiple iOS devices?

Apple hasn’t communicated about lifting this restriction. The current supported way is using Sauce Labs or another provider that handles parallelization.

Is there an easy way to setup Appium registered “devices” to Selenium GRID so that you can have a single point of access for tests?

It’s possible to set up a Selenium grid, but it’s not easy.

Just a suggestion: I also believe we need more robust and rigorous unit tests integrated with Appium’s CI, as Appium releases sometimes break a functionality or two while delivering another cool functionality.

I completely agree. I have experienced this issue and understand the frustration regressions cause. Proper unit testing with continuous integration is a work in progress. The goal is to have all Appium bindings in addition to the Appium server running robust unit tests via CI.

Last time I tried Appium, I had difficulty dealing with alerts coming from the OS or iTunes store (for payment automation). Does Appium 1.0 deal with OS/iTunes alerts better?

I’m not sure. If there’s a way to automate them with UI Automation JavaScript, then it’ll work with appium.

I need the Appium/Node logs to be consumed by a remote WebDriver client. The WebDriver client is on a remote machine, so reading the log file doesn’t help much. Can we have a method to get log dumps via the Appium driver?

I believe there’s an open issue related to this. It makes sense to expose the logs via the Selenium log API. Currently supported logs include syslog (iOS), crashlog (iOS), and logcat (Android).

New to Appium, will soon use it. I briefly went over the multi action gestures. Are there any complex gestures that are not yet implemented? (or are still quirky/buggy)

So far the gestures seem to be working well.

One more question: Does Appium support C#, Java and Python languages?

Yes. Appium has client bindings for Ruby, Python, Java, JavaScript, PHP, and C#.

One of the big pain points for our QA is matching the implemented UI with the screen shots that our design team has put together. One guy in particular reached the laughing/crying stage imagining aloud the possibility of automation handling the comparison.

My personal opinion is that automatically matching design images to an app is not going to produce useful results. Appium supports screenshots so you’re welcome to try.

One of the requirements is Xcode. Is Appium compatible with other IDEs, e.g. AppCode by JetBrains?

Appium requires that Xcode is installed, not that you use it. Appium works with every IDE and can even be used without one.

Parallel execution is always wonderful when it comes to faster testing, but iOS again has some limitations. How are we going to address this?

The immediate solution is to use Sauce Labs. The longer term solution is to persuade Apple to stop limiting their automation technology.

Possibility to mock/stub webservices calls?

Mocks and Stubs are certainly possible. This is a general testing issue and is unrelated to appium.

Robotium can assert Android toasts by using something like waitForText(). Can Appium do the same thing?

uiautomator does not support toasts. Selendroid does.

A Selenium grid basically helps to redirect test cases to the right devices. Will this help with iOS, since we are not able to run iOS devices in parallel due to the Apple limitation?

You could set up your own grid or use an existing solution provider such as Sauce Labs. Grid is not a magic solution to Apple’s limitation on parallelization.

Should we also deprecate/discourage usage of Android versions less than 4.2? (Because uiautomator arrived in 4.2, and Selendroid is not stable to use.)

I haven’t had good luck with Selendroid either so I only use uiautomator.

I’m attending this webinar to learn about Appium as someone who has never used it to automate testing. I enjoy being able to view screen shots in Sauce when I check my Selenium results… can Appium tests provide the same kind of screen shot results?

Yes, Sauce provides the same support for Appium including screenshots and videos. Locally, you would have to write the code to take the screenshots. I have open sourced a screen recorder for OS X that works with the iOS Simulator and Android emulator.
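
Locally, a screenshot is a single standard WebDriver call; the file name below is arbitrary.

# Works the same through Appium as it does with desktop Selenium.
driver.save_screenshot('after_login.png')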

We had trouble writing tests that interact with tableViews, despite the fact that they are a common element in iOS GUI development. We’ve seen that this is on the list of known issues. Is addressing this on the roadmap in the near future?

I suggest opening an issue on GitHub and include an example that reproduces the problem. My understanding is that tableViews should not have problems on iOS.

Does Sauce Labs provide training for using Appium with Sauce? 

That’s a question to ask the sales team at Sauce Labs. I am working on screencast training that will be available to everyone for free.

Can you speak to whether/when you plan to address how to respond to prompts? Any workarounds available now?

I suggest posting on the discussion list. There are various ways to respond to prompts.

We had trouble migrating tests that ran on Firefox into tests that ran on Internet Explorer. Is it really that easy to migrate our tests to mobile with different browsers?

Appium enables you to automate mobile browsers. Chrome is a different browser from Safari so there will be differences. Browser automation has been successfully done for many years now. It’s entirely possible to test on multiple browsers.

Our tests run stably locally but sometimes fail on the Appium server. This is not reproducible for us, and we could not determine why we have this instability. Do you know why? How do we fix it?

Flakiness can be caused by a large number of issues:

  • test written incorrectly
  • timing issues
  • app bugs
  • appium server bugs
  • appium client bugs
  • underlying automation bugs
  • emulator/simulator bugs
  • device specific bugs
  • network connectivity
  • environment configuration
  • software versions

If you post on the discussion list with specifics, then someone may be able to provide guidance.

We start our Appium tests with Jenkins and, in one example job, have 5 tests. From time to time we get 6 test results. Do you know why? Is it an Appium issue?

This isn’t an Appium issue. There’s something broken with your test infrastructure if you’re randomly having an extra test result.

What are the changes app developers need to make in-app in order for visible property to be true in Appium inspector?

On Android, everything in the inspector is always visible. On iOS, if the element is displayed on screen then it should be visible, although sometimes it isn’t. Unfortunately, there’s not much you can do to control the visibility attribute.

What are your future goals for automating  native apps for Samsung TV and Apple TV using Appium?

I’m not aware of specific goals to support TV automation. If the TV runs Android then it may work.

What is the best test reporting plugin for mobile that would fit with Appium (JUnit) to be included in CI?

JUnit has numerous plugins for various CI solutions that enable reporting. This isn’t Appium specific or unique to mobile. I suggest researching online to find out what others are using.

What is the main difference between Selendroid and Appium?

Selendroid only supports Android and uses an instrumentation approach. Appium allows automation of iOS and Android. Appium supports Selendroid as one option and also allows using the more modern uiautomator technology.

What native application support does Appium offer?

Appium supports fully automating native apps on Android and iOS. On Android there’s a choice of Selendroid or uiautomator. On iOS, UI Automation JavaScript is used behind the scenes.
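
On Android, that choice is expressed through the automationName capability; a hedged sketch follows (the paths are placeholders).

caps = {
  platformName:    'Android',
  platformVersion: '4.1',
  deviceName:      'Android Emulator',
  automationName:  'Selendroid',       # for older devices; 'Appium' (uiautomator) on API 17+
  app:             '/path/to/your.apk' # placeholder path
}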

Please suggest an operating system on which one should primarily test when launching Android tests, i.e. Linux, OS X, Windows?

OS X

When I run Appium, I got the following error, “Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.” What can I do on my end to fix this?

This is a common issue. I suggest posting to the mailing list if it happens frequently.

When I tried Appium, I found it was difficult to pick objects; it seems I need extensive XPath knowledge. Do you have plans to create an object library layer?

The Ruby binding has a helper library, so you can use a generic find command that’ll work on almost anything. There are specific helpers for buttons, textfields, text, and alerts. I recommend not using XPath on Android, as it’s flaky due to a known Android problem. With all the selector methods available, finding elements is very easy: first select which attribute you’re using to find the element, then select the best selector.
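
A few of those Ruby helpers, sketched below; the labels are hypothetical.

# appium_lib helper methods; each returns an element you can act on.
first_button.click              # the first button on screen
button('Sign In').click         # a button whose label contains 'Sign In'
textfield(1).send_keys 'user'   # the first text field
text('Welcome').displayed?      # static text lookup
find('Sign In').click           # generic find that matches almost anything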

When might Ruby users expect switch_to() to be implemented? We have many  hybrid apps that need context switching.

The new Ruby bindings have implemented context switching via set_context. Known issues are in the process of being fixed; these will improve the reliability of context switching.
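
For reference, a minimal sketch of that flow with the Ruby bindings; the webview context name varies per app and platform.

# List the contexts Appium reports, then hop into the webview and back.
puts available_contexts.inspect   # e.g. ["NATIVE_APP", "WEBVIEW_1"]
set_context 'WEBVIEW_1'           # drive the web content with normal WebDriver calls
# ... interact with the hybrid page ...
set_context 'NATIVE_APP'          # return to the native side of the app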

Will there be a Appium conference soon?

App-solutely! For information about the Appium conference, please read this mailing list thread.

Will there ever be integrated support for CI through Sauce Labs?

Sauce Labs integrates with CI providers such as CloudBees. All CI solutions should be able to integrate with Sauce Labs. There are even plugins available.

You mentioned earlier that you are working on ‘Getting Started’ materials and tutorials… any idea on when you might be ready to release these and where should I look to find them?

These are already available on Appium.io. Check out the documentation, tutorials, and sample code.

You mentioned you now support Android hybrid apps without Selendroid. Which version of Appium supports this, and are there docs that show how to do this?

On Android API 19, Appium is able to connect directly to webviews via ChromeDriver when using uiautomator. The client libraries have support for listing the webview contexts and switching into them and back to the native app.

Guest Post: Bridging the Test Divide – Beyond Testing For A Release

June 16th, 2014 by Amber Kaplan

This is the second of a three part series by Matthew Heusser, software delivery consultant and writer. 

When I start to think about testing, I think about it in two broad strokes: new feature testing and release-testing. New feature testing tries to find problems with something new and specific, while release-testing happens after “code complete”, to make sure the whole system works together, that a change here didn’t break something there.

Release-testing (which some call regression testing) slows down the pace of release and delays feedback from our customers. It also increases cycle time, the time from when we begin work on a feature until it hits production. Over time, as our software becomes more complex, the amount of testing we want to do during release-testing goes up.

Meanwhile, teams want to ship more often, to tighten the feedback loop.

Today I am going to talk about making release testing go away – or at least drastically reducing it.

It all starts during that tutorial in Spain I wrote about last time.

Two Worlds

The frequency of release for the people in my tutorial was very diverse, but two groups really struck me — the telecom that had a four-month test-release cycle, and the Latvian software team with the capability to deploy to production every single day.

That means arriving at the office in the morning, looking at the automated test runs, and making a decision to deploy.

There is a ‘formula’ to make this possible. It sounds simple and easy:

  • Automate a large number of checks on every build
  • Automate deploy to production
  • Continuously monitor traffic and logs for errors
  • Build the capability to rollback on failure

That transforms the role of test from doing the “testing we always do” to looking at the risk for a given release, lining it up against several different test strategies, and balancing risk, opportunity, reward, and time invested in release-testing.

The trick is to stop looking at the software as a big box, but instead to see it as a set of components. The classic set of components are large pieces of infrastructure (the configuration of the web server, the connections to the database, search, login, payment) and the things that sit on top of that – product reviews, comments, static HTML pages, and so on. Develop at least two deploy strategies — one for audited and mission-critical systems (essential infrastructure, etc.) and another for components and add-ons.

We’ve been doing this for years in large IT organizations, where different systems have different release cycles; the trick is to split up existing systems so you can recognize low-risk changes and make them easier.

This isn’t something I dreamed up; both Zappos and Etsy have to pass PCI audits for financial services, and Zappos is part of the publicly traded Amazon. Both of these organizations have a sophisticated test-deploy process for the parts of the application that touch money, and a simpler process for lower-risk changes.

So split off the system into different components that can be tested in isolation. Review the changes (perhaps down to the code level) to consider the impact of the change, and test the appropriate amount.

This can free up developers to make many tiny changes per day, as long as those changes are low risk. Bigger changes along a theme can be batched together to save testing time, and might mean we can deploy with considerably less testing than a ‘full’ site retest.

But How Do We Test It?

A few years ago, the ideal vision of getting away from manual, documented test cases was a single ‘test it’ button combined with a thumbs up or down at the end of an “automated test run.”

If the risk is different for each release, and we are uncomfortable with our automation, then we actually want to run different tests for each release — exactly what thinking testers (indeed, anyone on the team) can do with exploratory testing.

So let the computers provide some automated checks, all the time. Each morning, or maybe every half hour, we get a report, look at the changes, and decide what is the right thing for this release. That might mean full-time exploratory testing of major features for a day or two, or it might mean emailing the team and asking everyone to spend a half hour testing in production.

The result is grown-up software testing: varying the test approach to balance risk with cost.

The first step that I talked about today is separating components and developing a strategy that changes the test effort based on which parts were changed. If the risk is minimal, then deploy it every day. Hey, deploy it every hour.

This formula is not magic. Companies that try it find engineering challenges. The first build/deploy system they write tends to become hard to maintain over time. Done wrong, continuous testing creates systematic and organizational risk.

It’s also a hard sell. So let’s talk about ways to change the system to shrink the release-test cycle, deploy more often, and reduce risk. The small improvements we make will stand on their own, not threaten anyone, and allow us to stop at any time and declare victory!

A Component Strategy

When a company like etsy.com says that new programmers commit and push code to production the first day, do they really mean modifications to payment processing, search, or display for all products?

Of course not.

Instead, programmers follow a well-written set of directions to … wait for it … add the new user to the static HTML ‘about us’ page that lists all the employees, along with an image. If this change generates a bug, it will probably result in an X over an image the new hire forgot to upload, or maybe, at worst, a broken div tag so the page mis-renders.

A bad commit on day one looks like this – not a bungled financial transaction in production.

How much testing should we have for that? Should we retest the whole site?

Let’s say we design the push to production so the ‘push’ only copies HTML and image files to the webserver. The server is never ‘down’ and serves complete pages. After the switch, the new page appears. Do we really need to give it the full monty, the week-long burn down of all that is good and right in testing? Couldn’t the developer try it on a local machine, push to staging, try again, and “just push it?”

Questions on how?

More to come.

By Matthew Heusser – matt.heusser@gmail.com for Sauce Labs

Stay tuned next week for the third part of this mini series! You can follow Matt on Twitter at @mheusser.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Re-Blog: JavaScript Multi Module Project – Continuous Integration

June 11th, 2014 by Amber Kaplan

Our friend Lubos Krnac describes how to integrate Sauce with Protractor in a quest to implement continuous integration in his JavaScript multi module project with Grunt.

Below is a quote from his most recent blog post, alongside some code.

Read the rest of his post to get the full how-to here.

An important part of this setup is Protractor integration with Sauce Labs. Sauce Labs provides a Selenium server with a WebDriver API for testing. Protractor uses Sauce Labs by default when you specify their credentials. Credentials are the only special configuration in test/protractor/protractorConf.js (bottom of the snippet). The other configuration was taken from the grunt-protractor-coverage example. I am using this Grunt plug-in for running Protractor tests and measuring code coverage.

// A reference configuration file.
exports.config = {
  // ----- What tests to run -----
  //
  // Spec patterns are relative to the location of this config.
  specs: [
    'test/protractor/*Spec.js'
  ],
  // ----- Capabilities to be passed to the webdriver instance ----
  //
  // For a full list of available capabilities, see
  // and
  capabilities: {
    'browserName': 'chrome'
    //  'browserName': 'firefox'
    //  'browserName': 'phantomjs'
  },
  params: {
  },
  // ----- More information for your tests ----
  //
  // A base URL for your application under test. Calls to protractor.get()
  // with relative paths will be prepended with this.
  baseUrl: 'http://localhost:3000/',
  // Options to be passed to Jasmine-node.
  jasmineNodeOpts: {
    showColors: true, // Use colors in the command line report.
    isVerbose: true, // List all tests in the console
    includeStackTrace: true,
    defaultTimeoutInterval: 90000
  },
  
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY
};

You may ask, “How can I use localhost in the configuration when a remote Selenium server is used for testing?” Good question. Sauce Labs provides a very useful feature called Sauce Connect. It is a tunnel that emulates access to your machine from the Selenium server. This is super useful when you need to bypass a company firewall. It will be used later in the main project’s CI configuration.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Bleacher Report’s Continuous Integration & Delivery Methodology: Creating an Integration Testing Server

June 10th, 2014 by Amber Kaplan

This is the second of a three part series highlighting Bleacher Report’s continuous integration and delivery methodology by Felix Rodriguez. Read the first post here.

Last week we discussed how to continuously deliver the latest version of your application to a staging server using Elastic Beanstalk. This week we will be discussing how Bleacher Report continuously runs integration tests immediately after the new version of our app has been deployed.

When our deploy is complete, we use a gem called Slackr to post a message in our #deploys chat room. This is simple enough and just about any chat software can do this. We chose to use Slack because of the built-in integration functionality.

We created an outgoing webhook that submits any posts to our #deploys channel as a post to our Cukebot server. The Cukebot server searches the text, checks for a “completed a deploy” message, then parses the message into a JSON object that includes the deploy_id, user, repo, environment, branch, and GitHub hash.

class Parser
  ##################################################
  ## Sample Input:
  # OGUXYCDI: Dan has completed a deploy of nikse/master-15551-the-web-frontpage-redux to stag_br5. Github Hash is 96dd307. Took 5 mins and 25 secs
  ##################################################
  def self.slack(params)
    text = (params["text"])
    params["deploy_id"] = text.match(/^(.*):/)[1]
    params["branch"] = text.match(/of\s(.*)\sto/)[1]
    params["repo"] = text.match(/to.*_(.*?)\d\./)[1]
    params["cluster"] = text.match(/to(.*?)_.*\d\./)[1]
    params["env"] = text.match(/to\s.*_.*?(\d)\./)[1]
    params["suite"] = set_suite(params["repo"]) 
    params["hash"] = text.match(/is\s(.*?)\./)[1]
    puts params.inspect
    return params
  end
end

Once parsed, we have all the information we need to submit and initiate a test suite run. The test suite and its contained tests are then recorded in our PostgreSQL database.

Here is an example of what this suite would look like:

{
  id: 113,
  suite: "sanity",
  deploy_id: "FJBETJTY",
  status: "running",
  branch: "master",
  repo: "br",
  env: "4",
  all_passed: null,
  cluster: " stag",
  failure_log: null,
  last_hash: "0de4790"
}

Each test for that suite is stored in relation to the suite like so:

{
  id: 1151,
  name: "Live Blog - Has no 500s",
  url: "http://www.saucelabs.com/tests/20b9a64d66ad4f00b21bcab574783d73",
  session_id: "20b9a64d66ad4f00b21bcab574783d73",
  passed: true,
  suite_id: 113
},
{
  id: 1152,
  name: "Writer HQ - All Article Types Shown",
  url: "http://www.saucelabs.com/tests/4edbe941fdd8461ab6d6332ab8618208",
  session_id: "4edbe941fdd8461ab6d6332ab8618208",
  passed: true,
  suite_id: 113
}

This allows us to keep a record over time of every single test that was run and which suite and deploy it belongs to. We can get as granular as the exact code change using the GitHub hash, and later include screenshots of the run. We also have a couple of different endpoints we can check for failed tests in a suite, tests that have passed, or the last test suite to run on an environment. We wanted to record everything in order to analyze our test data and create even more integrations.

This helps us automatically listen for those completed-deploy messages we talked about earlier, as well as have a way of tracking those test runs later. After every test suite run, we post the permalink of the suite back into our #cukes chat room so that we have visibility across the company.

Another added benefit is that it allowed us to build a front end for non tech savvy people to initiate a test suite run on any environment.

Check it out for yourself; we just open sourced it.

Stay tuned next week for the third part of this mini series! You can follow Felix on Twitter.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Guest Post: A Dialectical Theory of Software Quality, or Why You Need Independent QA

June 9th, 2014 by Amber Kaplan

Product quality, and in particular software quality, can be an ephemeral characteristic of the product. It may not be easy to define, but in a sense, it is the opposite of the definition of pornography. You may not recognize it when it’s there, but you know it when it’s not. I propose that anything in a software product, or for that matter any other product, that induces unnecessary aggravation in the user is a detraction from the quality of the product.

For those unfamiliar with the term “dialectical” or its noun form, “dialectics”, these terms can be very roughly defined as an approach to looking at things that sees them as dualities. For example, the concept of “night” is more meaningful when coupled with the concept of “day.” “Good” has more meaning when paired with the concept of “evil”. Creative and constructive processes can be thought of as dialectical, where there is a tension between opposing imperatives and the result of such processes can be thought of as the resolution of these tensions.

As applied to the discipline of software engineering, one dialectic that exists is that between the imperatives of developers and architects and those of users. In the development process, the imperatives of independent QA engineers are those of users and are theoretically opposite to those of developers. Developers are totally absorbed in the technical intricacies of getting from point A to point B. They work to some set of explicit or implicit product functionality items that make up a product requirements set. Their concern is in how to implement these requirements as easily as possible. They work from the inside out, and are intimate with the details of how the functionality requirements are implemented. Independent QA, on the other hand, works from the same set of defined or implicit functionality and requirements but, in theory, does not care about the details of the implementation. QA engineers are intimately concerned with all aspects of how to use the product. By exercising the product, they find the points of aggravation to which the developers may be completely oblivious. To the extent that their findings are heeded, the quality, defined as, among other things, the lack of aggravation, can be enhanced.

In a sense, any piece of software that is run by someone other than the person who wrote it is being tested. The question is not whether the software will be tested, but by whom, how thoroughly, and under what circumstances. Any shortcuts, data formats, dependencies, and so many other elements that a developer used to get their code to run that are not present outside of their development environment may cause a problem when someone else runs that code.

There are many types of software testing. One fundamental division is between so-called white-box testing and black-box testing. White-box testing is carried out with knowledge of the internals of the software. Black-box testing emphasizes the exercise of the software’s functionality without regard to how it is implemented. Complete testing should include both types of tests. The emphasis in the text that follows is on black-box testing and the user experience, where the dialectical view of QA has the most relevance.

Bugs and other manifestations of poor quality cost money. There is a classical analysis that basically says that the cost of fixing a bug increases geometrically the later on in the development cycle it is found. Having your customer base be your principal test bed can prove to be expensive. Another possible source of expense is the support for workarounds for bugs that are not fixed. I can give a personal example of this. Some time ago I purchased an inexpensive hardware peripheral which came with a configuration software package. This package had a bug that, on the surface, is very minor, but when I used it I had problems configuring the product correctly. It took two calls to their support team to resolve the problem. Given the low price of this peripheral, one may wonder if their profit from the sale of this unit was wiped out. If many people call with the same question, how does this affect their earnings? How much does a product that is difficult to use, buggy, or otherwise of poor quality increase the cost of selling the product? Repeat sales cost less to generate than new sales, and to the extent that poor quality impacts repeat sales, the cost of sales is driven up.

The scope of independent QA need not be limited to bug hunting. Test-driven development can be done at both the highest level and the unit level. QA can make an important contribution in the earliest phases of product specification by writing scenario documents in response to a simple features list before any detailed design is done. For example, in response to a single feature item such as “login”, a creative QA engineer may specify tests such as “attempt login specifying an invalid user name, attempt login specifying an incorrect password, begin login and then cancel, attempt login while login is in process, attempt login multiple times specifying invalid passwords”, and on and on. Another engineer, seeing this list of tests, may well think of other tests to try. The developer writing the login functionality can see from the list what cases they need to account for early on in their coding. When something is available to test, the QA engineer executes the tests specified in the scenarios document. Those scenarios that turn out to be irrelevant because of the way the login functionality is implemented can be dropped. Other tests and scenarios that the tester thinks of or encounters in testing can be added. Ambiguities encountered in this testing can be brought to the attention of development for resolution early on.

As more and more software is Web-based, runs in Web browsers and is available to more non-technical users, usability issues become more important. How often have you visited Web sites and been unable or have had great difficulty in doing what you wanted? There are all too many feature-rich Web sites based on some usage model known only to the designer. The simplest of actions such as logout may become difficult simply because the hyperlink for it is in some obscure spot in a tiny font. A vigilant QA engineer given the task of testing this Web page may well notice this user inconvenience and report it. A common user scenario such as placing an order and then cancelling it may leave the user unsure about whether or not the order has actually been cancelled. The developer may not have thought of this scenario at all, or if they did, thought only in terms of a transaction that either went to completion or was rolled back. A consideration that is trivial to the developer, however, may cause grave consternation to the end user. A transaction that did not complete for some catastrophic reason such as a connection being dropped unexpectedly could well leave the end-user wondering about the state of their order. The independent QA engineer may identify a need for a customer to be able to log back into the site and view their pending orders.

Current trends in software development such as Agile, as well as the move to continuous integration and deployment, do not negate the need for an independent QA function. Indeed, continually making changes to an application’s UI, functionality, or operating assumptions may prove unnerving to users. Assumptions of convenience, such as the idea that the user community will understand how to work with a new UI design because they are already familiar with some arbitrary user model supporting it, can easily creep in under an environment of constant change carried out by people who do not question these assumptions. Independent QA is still needed to define and execute user scenarios made possible by product change as well as old scenarios whose execution steps may be made different by UI changes. Automated unit testing, programmatic API testing, and automated UI tests created by development-oriented engineers cannot simulate the dilemmas of a user who is new to the product or is confused by arbitrary UI changes. A highly visible example of this is the failure of Windows 8 to gain widespread acceptance and the huge market for third-party software to bring back the Start menu familiar to experienced Windows users. Nor was the smartphone-style UI, based on a platform with more inherent limitations than the existing Windows desktop, a big hit with them.

The work of independent QA engineers can, among other things, serve as an “entry point” for tests that may later be added to an automated test suite. A set of steps, initially executed by an actual human doing ad-hoc or exploratory testing, that cause an operation to fail inelegantly, can lead to a test program or script that should be added to the suite that is executed in a continuous integration cycle.

None of these considerations invalidate the value of testing based on knowledge of the internals of a product. Unit testing, white-box testing, and anything else that one can think of to exercise the application may uncover bugs or usage issues. White-box testing may quickly uncover change-introduced bugs that black-box testing might only find with a great deal of time and effort, or not at all. In this context, automated tests automatically kicked off as part of a continuous integration cycle are an extension of an existing white-box regression test suite but not a replacement for actual hands-on, exploratory, black-box QA. You might say that white-box testing is the dialectical negation of black-box QA. It verifies that the individual pieces work, where independent, black-box QA verifies that the product works for the user. The two approaches to testing complement each other. Both are necessary for a more complete assessment of product quality.

By Paul Karsh for Sauce Labs

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Guest Post: Test Lessons at ExpoQA

June 6th, 2014 by Amber Kaplan

This is the first of a three part series by Matthew Heusser, software delivery consultant and writer. 

Every now and again an opportunity comes along that you just can’t refuse. Mine was to teach the one-day version of my class, lean software testing, in Madrid, Spain, then again the following week in Estonia. Instead of coming back to the United States, I’ll be staying in Europe, with a few days in Scotland and a TestRetreat in the Netherlands.

And a lot of time on airplanes.

The folks at Sauce Labs thought I might like to take notes and type a little on the plane, to share my stories with you.

The first major hit in Madrid is the culture shock; this was my first conference where English was not the primary language. The sessions were split between English and Spanish, with translators in a booth making sure all talks were available in all languages.

The Testing Divide

Right now, in testing, I am interested in two major categories: the day-to-day work of testing new features, and the work of release-testing after code complete. I call this release-testing a ‘cadence’, and, across the board, I see companies trying to compress the cadence.

My second major surprise in Madrid was how wide the gap is, and I believe it is getting wider, between legacy teams that have not modernized and teams starting from scratch today. One tester reported a four-month cycle for testing. Another team, relying heavily on Cucumber and Selenium, was able to release every day.

Of course, things weren’t that simple. The Lithuanian team used a variety of techniques to reduce risk, something like DevOps, which I can talk about in another post. The point here is the divide between the two worlds.

Large cadences slow down delivery. They slow it down a lot; think of the difference between machine farming in the early 20th century and the plow and horse of the 19th.

In farming, the Amish managed to survive by maintaining a simple life, with no cars, car insurance, gasoline, or even electricity to pay for. In software, organizations that have a decades-long head start (banks, insurance companies, and pension funds) may be able to survive without modernization.

I just can’t imagine it will be much fun.

Batches, Queues and Throughput

Like many other conferences, the first day of ExpoQA is tutorial day, and I taught the one-day version of my course on lean software testing. I expected to learn a little about course delivery, but not a lot, so the learning hit me like a ton of bricks.

The course covers the seven wastes of ‘lean’, along with methods to improve the flow of the team – for example, decreasing the size of the measured work, or ‘batch size’. Agile software development gets us this for free, moving from ‘projects’ to sprints, and within sprints, stories.

In the early afternoon we use dice and cards to simulate a software team with equally weighted capacity across analysis, dev, test, and operations, but high variability in the size of the work. This slows down delivery. The fix is to reduce the variation, but that is not part of the exercise, so what the teams tend to do is build up queues of work so that no role ever runs out of work.

What this actually does is run up the work-in-progress inventory – the amount of work sitting around, waiting to be done. In the simulation I don’t penalize teams for this, but on real software projects ‘holding’ work creates multitasking, handoffs, and restarts, all of which slow down delivery.
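
For readers who want to see the dynamic rather than take my word for it, here is a rough sketch of the exercise in Ruby. The station names, dice ranges, and round count are all made up for illustration, not taken from the actual class materials; the point is that even with equal average capacity at every station, variability alone builds up work-in-progress and drags down throughput.

# Rough sketch of the dice exercise: four stations with the same average
# capacity (a 1-6 die roll per round) but high variability. A station can
# only process work that has already reached its queue, so unused capacity
# is lost and work-in-progress piles up between stations.
stations = %w[analysis dev test ops]
queues   = Hash.new(0)
done     = 0

100.times do
  queues['analysis'] += rand(1..6)              # new work arrives with the same variability
  stations.each_with_index do |station, i|
    capacity = rand(1..6)
    moved    = [capacity, queues[station]].min  # can't process more than is waiting
    queues[station] -= moved
    if i == stations.size - 1
      done += moved                             # the last station ships the work
    else
      queues[stations[i + 1]] += moved          # hand off to the next station's queue
    end
  end
end

puts "shipped: #{done}, still in progress: #{queues.values.sum}"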

My lesson: things that are invisible look free – and my simulation is far from perfect.

After my tutorial it is time for the conference day, kicked off by Dr. Stuart Reid presenting on the new ISO standard for software testing. Looking at the schedule, I see a familiar name: Mais Tawfik, whom I met at WOPR20. Mais is an independent performance consultant; today she is presenting on “shades of performance testing.”

Performance Test Types

Starting with the idea that performance testing has three main measurements – speed, scalability, and stability – Mais explains that there are different types of performance tests, from front-end performance (JavaScript, waterfalls of HTTP requests, page loading and rendering) to back-end performance (database, web server), plus synthetic monitoring – creating known-value transactions continuously in production to see how long they take. She also talks about application usage patterns: how testing is tailored to the type of user, and how each new release might carry new and different risks based on the changes introduced. That means you might tailor the performance testing to the release.
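
Synthetic monitoring is probably the least familiar of these to functional testers, so here is a minimal sketch of the idea in Ruby; the URL, status check, and two-second threshold are invented for illustration and are not from Mais’s talk.

# Minimal sketch of synthetic monitoring: run a known-value transaction
# against production on a schedule and record how long it takes.
require 'uri'
require 'net/http'
require 'benchmark'

KNOWN_TRANSACTION = URI('https://example.com/search?q=known-value')  # hypothetical endpoint

elapsed = Benchmark.realtime do
  response = Net::HTTP.get_response(KNOWN_TRANSACTION)
  abort "Unexpected status: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
end

puts format('Synthetic transaction took %.2f seconds', elapsed)
warn 'Synthetic transaction slower than the agreed threshold' if elapsed > 2.0  # made-up threshold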

At the end of her talk, Mais lists several scenarios and asks the audience what type of performance test would blend efficiency and effectiveness. For example, if a release consists entirely of database changes and time is constrained, you might not execute your full performance testing suite, but instead focus on rerunning and timing the database tests. If the changes are in the front end, you might focus on how long the user interface takes to load and display.

When Mais asks who in the audience does performance testing or manages it, only a handful of people raise their hands. When she asks who has heard of Firebug, even fewer raise their hands.

Which makes me wonder if the audience is only doing functional testing. If they are, who does the performance testing? And do they not automate, or do they all use Internet Explorer?

The talk is translated; it is possible that more people know these tools and the translator was simply ‘behind’, so they did not know to raise their hands in time.

Here’s hoping!

Time For A Panel

At the end of the day I am invited to sit on a panel to discuss the present (and future) of testing, with Dr. Reid, Dorothy Graham, Derk-Jan De Grood, Celestina Bianco and Delores Ornia. The questions include, in no particular order:

  • Will testers have to learn to code?
  • How do we convince management of the importance of QA and get included in projects?
  • What is the future of testing? Will testers be out of a job?
  • What can we do about the dearth of testing education in the world today?

On the problem of the lack of education, Dorothy Graham points to Dr. Reid and his standards effort as a possible input for university education.

When it is my turn, I bring up ISTQB, the International Software Testing Qualifications Board: if ISTQB is so successful (“300,000 testers can’t be wrong?”), then why is the last question even relevant? Stefaan Luckermans, the moderator, replied that with 2.9 million testers in the world, the certification had only reached 10% of them, and that’s fair, I suppose. Still, I’m not excited about the quality of the testers that ISTQB turns out.

The thing I did not get to say, because of time, is that ISTQB is, after all, just a response to market demand for a two-to-three-day training certification. What can a trainer really do in two or three days? At most, maybe, teach a single technical tool, turn on the lightbulb of thinking, or define a few terms. ISTQB defines a few terms, and it takes a few days.

The pursuit of excellent testing?

That’s the game of a lifetime.

By Matthew Heusser – matt.heusser@gmail.com for Sauce Labs

Stay tuned next week for part two of this mini series! You can follow Matt on Twitter at @mheusser.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Bleacher Report’s Continuous Integration & Delivery Methodology: Continuous Delivery Through Elastic Beanstalk

June 3rd, 2014 by Amber Kaplan

This is the first of a three-part series highlighting Bleacher Report’s continuous integration and delivery methodology, by Felix Rodriguez.

I have been tinkering with computers since I was a kid, and I can remember playing Liero on DOS like it was the greatest game ever to exist. I started out building computers and websites, then got into tech support, and now I am a Quality Assurance technician at Bleacher Report – when I’m not cruising around California on my motorcycle, that is.

While working at Bleacher Report, I helped maintain the existing automation suite and took it upon myself to revamp a collection of long, unrelated RSpec tests into a more object-oriented, Cucumber-based testing framework. Now we have an integration testing server that I built, with an API to build suites and track tests over time.

We started moving some of our new services over to Elastic Beanstalk because we knew it would be easier for us to manage our stacks and issue deploys. Because Elastic Beanstalk is a rather new service, we were unable to find any out-of-the-box integrations with Travis CI. After experimenting with some of the custom functionality Travis provides, we were able to issue commands that download the Elastic Beanstalk CLI binaries to the VM Travis spins up and create the files we need in order to issue an Elastic Beanstalk deploy command. This was far simpler and less time-consuming than trying to install our deployment software on a Travis VM.

After we demoed this to our Operations department, they were more than eager to have us switch new applications to Elastic Beanstalk, since developers have far more control over how the development environment is configured (think Heroku or Nodejitsu). On my own I was able to build an application and the environment it runs in, ensure the latest version was continuously deployed to a staging server after each successful Travis build, kick off an integration suite against it, and report the results of each step of the process. This was magic to us: it freed up Operations to focus on making sure our applications scale, allowed QA to focus on writing tests rather than running them, and let developers focus on coding their applications without having to work around the limitations of an environment with old tool sets.

If you’re using Amazon’s Elastic Beanstalk service, or plan on building any new applications, I highly suggest this route to make your life much easier. If not, skip to “The Hard Way,” which lets you use EB indirectly to update your apps.


The Easy Way

Travis CI unfortunately does not support Elastic Beanstalk out of the box, but with a clever hack you can automate the EB configuration and deploy cycle through a .travis.yml config. Keep track of each answer the eb init prompt asks for so you can preseed the responses in the “echo -e” command below.

I got most of my inspiration from http://www.sysadminops.com/amazon-beanstalk-and-travis-ci-integration/ but I was unable to get it working completely, so I had to try something else.

after_success:
# Download the Elastic Beanstalk CLI and set up its git integration on the Travis VM
- wget "https://s3.amazonaws.com/elasticbeanstalk/cli/AWS-ElasticBeanstalk-CLI-2.6.2.zip"
- unzip "AWS-ElasticBeanstalk-CLI-2.6.2.zip"
- AWS-ElasticBeanstalk-CLI-2.6.2/AWSDevTools/Linux/AWSDevTools-RepositorySetup.sh
# Write the config file the EB tools expect
- mkdir .elasticbeanstalk
- sudo echo "[global]" >> .elasticbeanstalk/config
- sudo echo "AwsCredentialFile=/Users/travis/.elasticbeanstalk/aws_credential_file"
  >> .elasticbeanstalk/config
- sudo echo "ApplicationName=cukebot" >> .elasticbeanstalk/config
- sudo echo "DevToolsEndpoint=git.elasticbeanstalk.us-east-1.amazonaws.com" >> .elasticbeanstalk/config
- sudo echo "EnvironmentName=YOUR_STAGING_ENVIRONMENT_NAME" >> .elasticbeanstalk/config
- sudo echo "Region=us-east-1" >> .elasticbeanstalk/config
- cat .elasticbeanstalk/config
- cat ~/.elasticbeanstalk/aws_credential_file
# Preseed the eb init prompts with the answers recorded earlier, then push
- echo "us-east-1" | git aws.config
- echo -e "$AWS_ACCESS_KEY_ID\n$AWS_SECRET_ACCESS_KEY\n1\n\n\n1\n53\n2\nN\n1\n" | AWS-ElasticBeanstalk-CLI-2.6.2/eb/linux/python2.7/eb init
- git aws.push

Now, any time you push code to master and your Travis build succeeds, your new code is automatically deployed to the staging environment you created.

The Hard Way

Travis CI supports a number of deploy services out of the box; unfortunately for us, we do not use any of those services to deploy our apps. The way we had to approach continuous delivery was through Travis CI’s custom webhooks.

First we must build a small application that accepts POSTs from Travis when a build completes. They provide a sample Sinatra application to help you get started: https://gist.github.com/feelobot/32edcda4706c06267fa5. We want to modify it a bit so that it adds a JSON object we create to our Amazon SQS queue.

puts "Received valid payload for repository #{repo_slug}" # "stag",
  :repo => repo,
  :branch => "master",
  :user_name => user,
  :env => "1"
})
queue.send if payload["branch"] == "master"

From there, I added a deploy queue class that accepts the information passed in from the Travis payload, like so:

require 'aws-sdk'
require 'json'

# Wraps an Amazon SQS queue: #initialize builds the JSON deploy message
# and looks up the queue, #send pushes the message onto it.
class DeployQueue
  def initialize(options = {})
    @queue_text = {
      :cluster => options[:cluster], # staging or production
      :repo => options[:repo],
      :branch => options[:branch],
      :env => options[:env],
      :user_name => options[:user_name]
    }.to_json
    @sqs = AWS::SQS.new
    @q = @sqs.queues.named("INSERT_NAME_OF_QUEUE")
    puts "Deploy sent to queue: #{options[:repo]}_deploy_queue: #{@queue_text}"
  end

  def send
    @q.send_message(@queue_text.to_s)
  end
end

Then add the following to your .travis.yml:

notifications:
  webhooks:
    urls:
      - http://url/where/your/app/is/hosted.com
    on_success: always
    on_failure: never

Amazon Elastic Beanstalk lets us build a worker, through an easy-to-use GUI, that will run a command for each message in our Amazon SQS queue. I created a quick video demonstration so you can see how easy it is: http://bleacher-report.d.pr/6VfI

Basically, all we have to do now is wrap our deploy script inside a small Sinatra web application.

Create a Procfile with the following:

worker: bundle exec ruby app/worker.rb

As well as an app/worker.rb file:

require 'bundler/setup'
require 'aws-sdk'
require 'sinatra'
require_relative '../lib/deploy_consumer'

enable :logging, :dump_errors, :raise_errors

AWS.config(
  :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AWS_SECRET_KEY'])

# The Elastic Beanstalk worker daemon POSTs each SQS message to this route
post '/deploy' do
  json = request.body.read
  puts "json #{json.inspect}"
  data = JSON.parse(json)
  DeployConsumer.new(data).deploy ## Your Deploy CMD here
end

The DeployConsumer is not strictly necessary; it’s just a script I wrote that takes the JSON object received from the queue and uses it to determine which environment to deploy to. You should replace it with your own deploy script. If you are interested in what the consumer looks like, you can view it here: https://gist.github.com/feelobot/a358be435fb71727c8ab
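
If you want a starting point, here is a minimal, hypothetical stand-in for the consumer (the class name, fields, and echo command are invented for illustration). It only shows the shape the worker above expects: take the parsed queue message and run a deploy command for the named repo and cluster.

# Hypothetical stand-in for DeployConsumer: read the parsed queue message
# and shell out to whatever deploy command you actually use.
class SimpleDeployer
  def initialize(data)
    @repo    = data['repo']
    @cluster = data['cluster']   # e.g. "stag" or "production"
    @branch  = data['branch']
  end

  def deploy
    # Replace the echo with your real deploy command (eb, Capistrano, etc.)
    system("echo deploying #{@repo}@#{@branch} to #{@cluster}")
  end
end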

Stay tuned next week for part two of this mini series! You can follow Felix on Twitter.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Ask a Selenium Expert: How to Handle Exceptions

May 29th, 2014 by Amber Kaplan

selenium testing & sauceThis is the final follow-up question in a series of 8 from Selenium expert Dave Haeffner. Read up on the firstsecondthirdfourthfifthsixth, and seventh.

During our recent webinar, “Selenium Bootcamp,” Dave discussed how to build out a well-factored, maintainable, resilient, and parallelized suite of tests that run locally, on a Continuous Integration system, and in the cloud. The question below is number 8 of 8 in this mini series of follow-up questions.

8. What’s the best way to handle StaleElementReferenceExceptions? Is this ok just to squelch them?

Yes, for the most part, it’s okay to rescue these exceptions. You can see an approach on how to do that here.
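
As a rough illustration of the rescue-and-retry idea (not necessarily the exact approach in the linked write-up), here is a small Ruby sketch using the Selenium Ruby bindings; the URL and selector are placeholders.

# Retry a find-and-act block a few times when the element reference goes
# stale between lookup and use; re-raise if it keeps failing.
require 'selenium-webdriver'

def with_stale_retry(attempts = 3)
  begin
    yield
  rescue Selenium::WebDriver::Error::StaleElementReferenceError
    attempts -= 1
    retry if attempts > 0
    raise
  end
end

driver = Selenium::WebDriver.for :firefox
driver.get 'http://example.com'                      # placeholder URL

with_stale_retry do
  # Re-find the element inside the block so each retry gets a fresh reference
  driver.find_element(css: '#some-button').click     # placeholder selector
end

driver.quit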

That wraps up our follow-up session from Dave Haeffner! Get more info on Selenium with Dave’s book, The Selenium Guidebook, or follow him on Twitter or Github.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Ask a Selenium Expert: Should I Test Load with Selenium?

May 22nd, 2014 by Amber Kaplan

selenium testing & sauceThis is part 7 of 8 in a mini series of follow-up Q&A’s from Selenium expert Dave Haeffner. Read up on the firstsecondthirdfourthfifth, and sixth.

During our recent webinar, “Selenium Bootcamp,” Dave discussed how to build out a well-factored, maintainable, resilient, and parallelized suite of tests that run locally, on a Continuous Integration system, and in the cloud. The question below is number 7 of 8 in this mini series of follow-up questions.

7. If I want to test load using Selenium, will I have to run the same test multiple times in one “script” or can I instruct Selenium to run it multiple times?

While you could use Selenium to test load, it’s not the best tool for the job, since achieving meaningful load with real browsers is expensive. There are tools better suited to this, like JMeter or Gatling. That being said, there are some companies that specialize in Selenium-based load testing. You can find some of them in the ‘Commercial Support’ section of the Selenium HQ Support page.

Alternatively, you could try a more home grown approach like I outline in this write-up.

-Dave Haeffner, April 9, 2014

Can’t wait to see the rest of the Q&A? Read the whole post here.  Get more info on Selenium with Dave’s book, The Selenium Guidebook, or follow him on Twitter or Github.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Guest Post: Open Sauce Enables Plone to Focus on Robot Framework

May 16th, 2014 by Amber Kaplan

Our friends in the Plone community recently turned to Open Sauce for their testing needs to save time. The results have been stellar; with the time saved, they’re able to focus on improving Robot Framework, according to their release manager, Eric Steele.

Check out the rest of what they have to say below.

When I took over as release manager for the Plone CMS project, we ran our test suite nightly, but that only covered our Python code and some simple form submissions. The entire JavaScript layer remained largely untested, save a few click-arounds by hand before each release. The suspicion that some critical feature might have broken in a browser combination we hadn’t tried kept me up at night. As I began preaching the need for continuous integration and in-browser testing, it was surprising to find a whole team’s worth of people excited to obsess over running tests, improving coverage, and collecting a fleet of VMs to run the few Selenium tests we’d put together at that point. The latter proved to be our undoing; we spent more time managing our testing infrastructure than we did doing actual testing.

Thankfully, Sauce Labs’ Open Sauce came along to save us.

Open Sauce has freed up my testing team to do far more interesting things. We’ve put quite a bit of effort into helping Robot Framework grow. Robot’s Behavior-Driven Development abstraction seems to fit everyone’s head a bit better and allows us to easily alter tests based on which features are active. Asko Soukka, previously featured on this blog, became Plone’s Person of the Year for 2013 based on the work he put into extending Robot Framework for our community.

Asko has created a set of Robot keywords to enable automated screenshots for our Sphinx documentation. This allows our documentation to show the Plone user interface in the same language as the document. Groups deploying Plone sites can regenerate our end-user documentation with screenshots featuring their own design customizations. It’s a huge win; users see examples that look exactly like their own site. Finally, in a bit of pure mad science, Asko has piped those image generation scripts through a text-to-speech program to create fully-automated screencasts.

The Plone community is currently at work on the upcoming release of Plone 5. With its new widgets layer and responsive design, there are so many new ways that bugs could creep into our system. Happily, that’s not the case. I get a nightly report full of screenshots of Plone in action across browsers, devices, and screen sizes. Basic accessibility gotchas are quickly caught. Content editing and management features are automatically tested on both desktop and mobile. Open Sauce allows us to focus on getting things done and done correctly. Finally, I can sleep soundly — or at least find something else to worry over.

-Eric Steele, Release Manager, Plone.org. Read Eric’s blog here or follow him on Twitter.

Do you have a topic you’d like to share with our community? We’d love to hear from you! Submit topics here, feel free to leave a comment, or tweet at us any time.