Version Support Hell

October 8th, 2015 by Ashley Hunsberger

Facebook. It’s a love-hate relationship. I love seeing my friends and family from afar, what they are up to, how their children are growing. I hate how ingrained it is in everyday life. How on earth did we manage when I was growing up? (Snail mail, actually sending photos that you had to have developed; heaven forbid you pick up a phone and talk to someone.) The most amusing aspect to me is how much complaining I see in my news feed every time Facebook pushes a change: “No, I do not want to be added to a group without my permission, thank you very much,” or “Why is my grandmother allowed to tag random people I don’t know in my photos now?” (That might be a question I asked. She may need a few lessons in what tagging is actually for; I might need to redo all my privacy settings.)

In my experience, nobody likes change. It takes us out of our comfort zone. Take it or leave it though — Facebook is on to something. They don’t have to deal with running several versions of their product! They deploy a change, people moan and groan about it, and then get used to it.

In my world, it’s not so simple — a world where we have self-hosted clients (who can determine, to an extent, what version they want to be on) and clients in the cloud (always on the latest release). We have to test everything. I long for the day when all clients are on one version, and I no longer have to worry about supported upgrade paths or testing a bug across three releases. (more…)

Cattle, Not Pets – Use Automation Software To Provision Your Servers

October 6th, 2015 by The Sauce Labs Team

This guest post was written by Julian Dunn, Product Manager at Chef Software (@julian_dunn)

In Greg Sypolt’s earlier post on immutable infrastructure, he outlined practices for treating your servers more like cattle than pets. It’s better to rebuild things when you can rather than spend calories manually debugging and fixing long-lived servers. (Of course, you will inevitably have a few pets in your infrastructure; for instance, database servers that hold customer data and would be inconvenient to constantly rebuild.)

Automation software like Chef can help with the provisioning and automated setup of new cattle in a consistent way without having to resort to shell scripting or other ugly, hard-to-understand tools. Chef can also help you with maintaining consistency among the few pet servers that you do have.

One trend we’ve seen in the pets-to-cattle migration is the externalization of interface testing tools: moving from long-lived, always-on test driver clusters to ephemeral instances or containers that drive the UI tests, record the results, report them to a central dashboard, and then destroy themselves. Cloud-native testing systems like Sauce Labs also provide far richer functionality, such as video recording and playback of UI tests, than can be achieved with artisanally crafted, on-premises solutions. In other words, why mess around with configuring and maintaining Xvfb, Selenium, WebDriver, and so on yourself, across a boatload of platforms and platform versions, when you can just use Sauce Labs?

One technical challenge to using a cloud-based testing solution like Sauce Labs is that many applications are internal-only; they may reside behind a corporate firewall, unreachable from the Internet. That is why, several years ago, Sauce Labs created the Sauce Connect proxy, which creates a per-customer VPN tunnel between the network hosting the application under test and Sauce Labs’ test driver machines, allowing you to test your internal applications.
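
As a rough, hypothetical sketch of what driving such a tunnel can look like (written in Python purely for illustration; it is not from this post, and it is not the Chef cookbook discussed below), the following wraps a test run in a short-lived tunnel. It assumes the Sauce Connect sc binary is on the PATH, that it accepts -u and -k for the username and access key, and that credentials live in environment variables.

    import os
    import subprocess
    import time

    def run_tests_through_transient_tunnel():
        # Start a short-lived Sauce Connect tunnel; credentials come from env vars.
        tunnel = subprocess.Popen([
            "sc",
            "-u", os.environ["SAUCE_USERNAME"],
            "-k", os.environ["SAUCE_ACCESS_KEY"],
        ])
        try:
            # Crude wait for the tunnel to come up; a real script would watch
            # sc's output or readiness file instead of sleeping.
            time.sleep(30)
            # Placeholder test command -- substitute your own test runner here.
            subprocess.check_call(["python", "-m", "pytest", "tests/"])
        finally:
            # Tear the tunnel down when the run is over: cattle, not pets.
            tunnel.terminate()
            tunnel.wait()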

You likely want this VPN tunnel to be transient, and to treat the machines running the tunnel as cattle too. That requires a way to install, configure, start up, and destroy tunnel machines as needed. This is a perfect use case for Chef; enter the Sauce Connect Cookbook, which does exactly that. By configuring a few simple attributes, like your Sauce Labs user API key, you can easily achieve this objective.

Chef can also be used to set up and manage all other aspects of your testing infrastructure, like Jenkins or TeamCity servers and their build nodes, and even to deploy the applications under test into container runtimes or virtual machines. Hopefully this article has whetted your appetite enough to convince you to extend the “cattle, not pets” philosophy to your testing practices as well. You can learn more about Chef and what it does at

Recap: Selenium 2015 Conference

October 4th, 2015 by Greg Sypolt

Portland, Oregon is surrounded by green forests. It’s a bike-friendly city, with an abundance of craft beer, and despite the rain it’s where everyone wants to be. The Selenium Conference Committee wisely picked beautiful Portland for this year’s conference.

Image source: Test the Web Forward

Going to conferences always energizes me. It rejuvenates my focus and determination. Why? I discover new concepts from some of the best in the industry, while networking with conference attendees.

One of the best ways to build automation knowledge is to attend the Selenium conference. The attendees this year were extremely technical, and everyone was willing to have conversations about their Selenium journeys.

Pre-Conference Workshops

The pre-conference workshop lineup this year was rock solid. The all-day sessions ranged from beginner-level to advanced, and some of the topics covered included: (more…)

An Open Letter to Developers

October 1st, 2015 by Ashley Hunsberger

Dear Developers,

I just wanted to let you know that despite common belief, QA is not your enemy! I know – some days it may seem that way. Admittedly, sometimes I do feel giddy when I find a particularly good bug, but it’s just what I do — it’s nothing personal.

However, I think we need to have a chat. I have a bone (or three) to pick with you.

Over the last decade (cough, or more) I have worked with several types of engineers. I will say point blank that the most difficult person to work with is the developer who thinks only the tester owns quality, otherwise known as The Hotshot. (See “The Seven Types of Software Engineers” if you need to know what kind of engineer you are.) Most of this letter is dedicated to The Hotshot.


How To Run Your Automated Web Tests on Any Browser

September 25th, 2015 by Dave Haeffner

This is the second post in a 3-part series on getting started off with automated web testing on the right foot. You can find the first post here and third post here.

The Problem

In the last post we stepped through how to write an automated web test that uses automated visual testing to perform assertions. This is a strong first step. But the test as it’s written will only work on one browser (Firefox) — leaving you with limited browser coverage.

A Solution

Thankfully Selenium is built to work on all major browser and operating system combinations. Traditionally, you would tap into this functionality by standing up a series of machines (for each operating system and browser you care about) and orchestrating your tests across these machines with Selenium Grid.

This is an unnecessary amount of complexity and hardware to manage when there are third-party services that do the heavy lifting for us. By using a third-party cloud service like Sauce Labs, we’re able to gain access to whatever browser & operating system combinations we need with just a few lines of code. And there are no changes required to make this work with our Applitools Eyes implementation either (it will automatically capture new baseline images regardless of the browser).
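
As a rough sketch of the idea (illustrative only, and assuming the Selenium Python bindings of that era; the credentials, capabilities, and target page are placeholders, and the Applitools Eyes wrapping from the previous post is omitted), pointing a test at Sauce Labs instead of a local Firefox can look like this:

    import os
    from selenium import webdriver

    # Ask Sauce Labs for a specific browser/OS combination instead of a local Firefox.
    capabilities = {
        "browserName": "internet explorer",
        "version": "11",
        "platform": "Windows 8.1",
    }

    # Credentials are read from environment variables rather than hard-coded.
    sauce_url = "http://{0}:{1}@ondemand.saucelabs.com:80/wd/hub".format(
        os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"])

    driver = webdriver.Remote(command_executor=sauce_url,
                              desired_capabilities=capabilities)
    try:
        driver.get("http://the-internet.herokuapp.com")  # placeholder page, just to prove the session works
        print(driver.title)
    finally:
        driver.quit()

From there, covering another browser is just a matter of changing the capabilities dictionary; the test logic itself does not change.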

Let’s dig in with an example. (more…)

What is your definition of “Done”?

September 17th, 2015 by Greg Sypolt


Why does a scrum team (and its daily standup) need a definition of done (DoD)? It’s simple: everyone involved in a project needs to know and understand what “done” means.

What is a DoD? It is a clear and concise list of requirements that a software increment must meet for a user story or sprint to be considered complete, or for the increment to be considered ready for release. For organizations just starting to apply Agile methods, however, that bar might be impossible to reach immediately. Your organization needs to identify its problems and work as a team to build its own version of the DoD that solves them.

The Problem

The following conversation occurs during your daily standup:

Product Manager (PM): “Is the user story done?”
Developer (Dev): “Yes!”
Quality Assurance (QA): “Okay, we will execute our manual and/or automated tests today.”

Later that same day:

QA: “We found several issues, did Dev perform any code reviews or write any unit tests?”
PM (to Dev): “QA found several issues, did you do any code reviews or unit testing?”
Dev: “No, the code was simple. It was going to take too much time to write unit tests.”

Has this ever happened to you? (more…)

Another Acronym – MVP vs. MFP

September 15th, 2015 by Ashley Hunsberger

As I dive into a new round of planning and discussions for our next project with the product management team and designers, I keep hearing, “This is MVP.” No, they are not referring to Most Valuable Player, but rather Minimum Viable Product — the product that has just those core features that will still provide value to the customer. Unfortunately, this can sometimes lead to over-promising or large user stories. In the beginning, sometimes it feels like everything is MVP (until you start understanding the actual scope of a feature). Let’s talk about a term my colleague, Trevor Akiyama, came up with: the MFP — Minimum Functional Product (the minimal set of things that actually works, as opposed to the MVP that stakeholders want).

Why do you need them?

Over the years, I have often seen a single user story that was more like an epic, simply because everything within the story was part of the MVP. The result was a user story that stayed open forever, with no way to test it until all of its interdependent pieces were integrated. Now, what if we had broken that user story down into smaller pieces and found the MFP? We would have had clear, short, testable user stories.

For teams trying to transition into the world of Continuous Delivery, testable stories are a MUST. By identifying your MFPs, you help your team keep stories small, and prioritize how to build (and therefore test) your product. (more…)

Can You Test it All? Test Coverage vs. Resources

September 3rd, 2015 by Ashley Hunsberger

During nearly every project I have worked on, the question “Can I test everything?” comes up. The answer is (usually) a resounding NO. Sometimes it’s because of time, sometimes it’s a lack of people. How can we still ensure a quality product, even if we can’t cover it all? Sometimes, we have to test smarter.

The usual suspects

The typical scramble to finish testing and get something released is usually (in my experience) a result of one of the following (or a combination thereof):

User stories that are WAY too big. When user stories are too large, it becomes difficult to break out tasks and identify all the acceptance criteria. Oversized stories also make it harder to plan for unforeseen scenarios, and they can blow estimates out of the water.

Complex Workflows. Depending on your feature, the workflow could be very complicated, and it can be difficult to anticipate how a user is actually going to use the product. This makes it more challenging to find every possible scenario for end-to-end tests. Even if your user stories are small, the overall workflow comprising all user stories can still result in missed tests if it is too complex.

Not using Test-Driven Development. If you are still living in a world where Development works on its own and throws code over the proverbial fence to QA, you are opening the door to late surprises and to blocking bugs that hinder your testing progress. (more…)

Should You Have a Dedicated Automation Team Within Your QA Department?

September 1st, 2015 by Israel Felix

If you’ve led or managed QA teams that have included an automation test team, you’ve probably been in a situation where you had to decide whether to keep that team on board. The decision typically has to be made when there is a change in leadership and the new management arrives with a mandate to consolidate groups and reduce costs. The situation also tends to arise at startups or small companies that are ready to put together or augment their QA teams. So should you have a dedicated automation team?

Typically, there are two camps with regard to dedicated automation teams: those who believe we should have dedicated automation teams, and those who believe QA engineers should handle both manual and automated testing. From my experience working in QA at both small and large companies, I almost always prefer to have a dedicated automation team. However, there are a few scenarios where a QA team that takes on both roles might make sense.

Time to Market

For automation to be done right, it needs to be a full-time job. Developing the framework, creating the libraries and scripts for different platforms, and executing tests and debugging failures will simply consume too much of an engineer’s time if it is squeezed in alongside manual testing, compromising both the actual testing and the release date. As you already know, time to market and keeping a release on schedule are top priorities, so testing needs to get done, no matter what. (more…)

Getting the Existing Team On Board with Automation (Scripts)

August 27th, 2015 by Greg Sypolt


In an attempt to do more with less, organizations want to test their software adequately, as quickly as possible. Businesses are demanding quick turnaround, pushing new features and bug fixes to production within days, without sacrificing quality. Everyone knows manual testing is labor-intensive and error-prone, and that it cannot provide the same kind of quality checks that automated tests can. Nobody likes change, but it’s time to educate your team on the importance of bringing automated testing on board.

The only way to make sense out of change is to plunge into it, move with it, and join the dance.  – Alan Watts

Everyone has had a job interview at some point in their lives, right? It is important to be prepared! The first few minutes of an interview are a make-or-break moment. Why? Because first impressions can have long-lasting effects. Never underestimate the power of first impressions. The same principle applies when introducing automation to an existing manual testing team. Your initial presentation to your team or organization should be treated like a job interview: be prepared, set expectations, and explain responsibilities. This is critical, since it is normal for employees to have an emotional reaction to anything they view as a threat to their jobs.

Why automated testing?

If things are going well, why would we want to implement automated tests? The demand to do more with less makes purely manual testing an impossible task, but introducing automated testing into an existing software development lifecycle can be daunting. When implemented well, however, automated testing is a valuable asset that shortens testing cycles and helps teams become more agile.

