Can You Test it All? Test Coverage vs. Resources

September 3rd, 2015 by Ashley Hunsberger

During nearly every project I have worked on, the question "Can I test everything?" always comes up. The answer is (usually) a resounding NO. Sometimes it's a lack of time, sometimes a lack of people. How can we still ensure a quality product, even if we can't cover it all? Sometimes, we have to test smarter.

The usual suspects

The typical scramble to finish testing and get something released is usually (in my experience) a result of one of the following (or a combination thereof):

User stories that are WAY too big. When user stories are too large, it becomes difficult to break out tasks and identify all the acceptance criteria. Oversized stories are also harder to plan around unforeseen scenarios, and can often blow estimates out of the water.

Complex Workflows. Depending on your feature, the workflow could be very complicated, and it can be difficult to anticipate how a user is actually going to use the product. This makes it more challenging to find every possible scenario for end-to-end tests. Even if your user stories are small, the overall workflow comprising all user stories can still result in missed tests if it is too complex.

Not using Test Driven Development. If you are still living in a world where Development works on its own and throws the code over the proverbial fence to QA, you are opening the door to late surprises and to blocking bugs that hinder your testing progress. (more…)

Should You Have a Dedicated Automation Team Within Your QA Department?

September 1st, 2015 by Israel Felix

If you’ve led or managed QA teams that have included an automation test team, you’ve probably been in a situation where you had to decide whether you should keep them on board. Normally the decision needs to be made when there is a change in leadership, wherein the new management comes with a mandate to consolidate groups and reduce costs. This situation also tends to arise when working with startups or small companies when they are ready to put together or augment their QA teams. So should you have a dedicated automation team?

Typically, there are two camps with regards to dedicated automation teams. There are those who believe that we should have dedicated automation teams, and those who believe that QA engineers should handle manual testing and automation testing. From my experience working in QA within both small and large companies, I almost always prefer to have a dedicated automation team. However, there are a few scenarios where having a QA team that takes on both roles might make sense.

Time to Market

For automation to be done right, it needs to be a full-time job. From developing the framework and creating the libraries and scripts for different platforms to executing and debugging failures — it will all simply consume too much of an engineer’s time and compromise the actual testing and release date. As you already know, time to market and keeping a release on schedule is top priority, so testing needs to get done, no matter what.

If you don’t have a dedicated automation team, automation will most likely suffer as a result of engineers being consumed with manual testing, and with reporting and duplicating bugs for Development to fix. If we ask engineers to prioritize automation instead, manual testing could suffer as a result of engineers spending too much time on automation-related tasks, leaving them unable to complete testing on time.

If you decide to have a QA team that fulfills both roles, I recommend two things:

  • Have a support team that can help the QA team with their automation by providing libraries and templates. The support team will not be automating features and test cases, so they wouldn’t be considered an automation team.
  • Have the QA team primarily automate acceptance test cases, and wrap up the remaining automation after the product is released.


Getting the Existing Team On Board with Automation (Scripts)

August 27th, 2015 by Greg Sypolt


In an attempt to do more with less, organizations want to test their software adequately, as quickly as possible. Businesses are demanding quick turnaround, pushing new features and bug fixes to production within days while still expecting quality. Everyone knows manual testing is labor-intensive and error-prone, and it does not support the same kind of quality checks that are possible with automated tests. Nobody likes change, but it's time to educate your team on the importance of onboarding automated testing.

The only way to make sense out of change is to plunge into it, move with it, and join the dance.  – Alan Watts

Everyone has had a job interview at some point in their lives, right? It is important to be prepared! The first few minutes of an interview are a make-or-break moment. Why? Because first impressions can have long-lasting effects. Never underestimate the power of a first impression. The same principle applies when onboarding automation to an existing manual testing team. Your initial presentation to your team or organization should be treated like a job interview. Be prepared. Deliver expectations and explain responsibilities — this is critical, since it is normal for employees to have an emotional reaction to anything they view as a job threat.

Why automated testing?

If things are going well, why do we want to implement automated tests? The demand is to do more with less, which makes manual testing an impossible task, but introducing automated testing into an existing software development lifecycle can also be daunting. However, when implemented, automated testing is a valuable asset that shortens testing cycles and helps teams become more agile.



Continuous Integration – The Heart of DevOps

August 26th, 2015 by Chris Riley

Missed our earlier webinar, “Beyond the Release – CI That Transforms”? Check out the recap below.

In this webinar, we discussed the power of CI and possible considerations for the future. One of the more interesting aspects of the webinar was seeing the modern pipeline with the new Sauce CI Dashboard alongside Cloudbees’ automation templates. We also conducted a small CI survey at the beginning of the webinar, and ended with a Q&A.

The Power of CI

Continuous Delivery and Deployment (CD) steal the show in DevOps conversations, but the reality is that Delivery and Deployment are not for every organization, nor are they already widely adopted. In order to move on to delivery and deployment, organizations must get Continuous Integration (CI) right — unless they were built from day one with the DevOps framework, and did not have to fit the processes into existing environments.

The reason CI is so powerful is that it allows you to dip your toe into the modern delivery pipeline without the risk or complexity of building out delivery and deployment all at once – and with the potential of failure. You can consider CI the on-boarding for DevOps. And from a process and tool standpoint, CI is nearly identical to delivery and deployment, which means that once you get it right, you can easily move on.

Webinar – CI Survey Results

I’m pleased to say the results of the Sauce Labs CI survey were almost exactly what I expected, served with a side of surprise. For me, the most interesting aspect of the survey results is how they appear to be in conflict with the perceived high CI adoption and success rates already existing in the market. Let’s look at the results among 500+ attendees:

What Types of Automated Tests do you run?

  • Unit 28%
  • Functional 40%
  • Integration 27%
  • None 6%

6% of the attendees are not running automated tests at all! This was astonishing to me. I expected 1% at most, especially given that this audience is already familiar with automation. At a minimum, I would expect all companies to automate unit testing. However, there is a high likelihood that what this 6% is really saying is that they have automated tests that are simply initiated manually.

The results also showed that many are running functional tests. This is great! However, only 27% are running integration tests. This is troubling, because 45% of respondents report that they are already doing CI, and the lack of integration testing would seem to contradict that. I suspect this is a definition problem: some may define CI as simply a shared testing environment, rather than the CI process as described in the webinar. (more…)

Building Applications for Quality

August 20th, 2015 by Zachary Flower

As developers, when building new projects from the ground up, we have a tendency to shoot first and ask questions later. Facebook popularized this attitude with their old motto, “Move Fast and Break Things,” and it’s a notion that seems to have been firmly embedded into startup culture. Unfortunately, what often happens is that once things get broken, we fix them with Band-Aids and duct tape in order to keep up the fast pace we’ve established. While we often put a premium on the end result, the foundation we build to achieve that result is just as important.

For every line of hacky code we write, even if it does what it’s supposed to, we are compounding the amount of work that future developers on a project will have to do when the company is finally in a place to “do it right.” In the face of legacy code, we will always be pushed to compromise quality for speed, but it is important to communicate the true impact code debt has on the future of a project. Eventually, Facebook learned this lesson, changing their motto last year to “Move Fast with Stable Infra(structure).” By putting an emphasis on stability, Facebook essentially enacted a speed limit on development to encourage bug-free code.

So, how do we balance speed and quality? The answer, unsurprisingly, is by establishing a set of rules and processes for you and your team to follow. Before beginning any work on a new application, it is important to lay the proper groundwork. This is the time to set up your deployment processes, version control management workflow, and unit test frameworks. In addition, you should be using this time to build out your code style guide and plan your application architecture. (more…)
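The unit test groundwork, for instance, can start on day one with a single test file. Here is a minimal pytest-style sketch; the `slugify` helper and its behavior are hypothetical, purely for illustration:

```python
# Minimal unit test groundwork, laid before feature work begins.
# slugify() is a hypothetical example helper, not from the article.

def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_lowercases_and_hyphenates():
    assert slugify("Move Fast") == "move-fast"

def test_collapses_extra_whitespace():
    assert slugify("  Stable   Infra  ") == "stable-infra"

# pytest discovers test_* functions automatically; they also run standalone:
test_lowercases_and_hyphenates()
test_collapses_extra_whitespace()
```

Starting with even one trivial test forces the test runner, project layout, and CI hook into place while the codebase is still small.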

Reducing Regression Execution Times

August 13th, 2015 by Israel Felix

We all know the saying “time is money.” QA managers are constantly under pressure not only to deliver high-quality software products, but also to do so within time constraints.

Regression testing is a vital component in any software development life cycle to ensure that no new errors are introduced as a result of new features or the correction of existing bugs. Every time we modify existing source code, new and existing test cases need to be executed on different configurations, such as operating systems and platforms. Testing all of these permutations manually is simply not cost- or time-effective, and it is also inconsistent. Automated regression addresses these challenges. But as feature coverage and permutation testing increase, execution times grow to a level that is no longer acceptable for delivering high-quality products on a tight schedule. Here are a few ways to improve execution time:
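The permutation explosion is easy to see with a quick sketch (the configuration and test case names below are illustrative, not from the article):

```python
from itertools import product

# Hypothetical configuration matrix: after every source change, each
# test case must run on each OS/browser combination.
operating_systems = ["Windows 10", "OS X", "Linux"]
browsers = ["Chrome", "Firefox", "Safari"]
test_cases = ["login", "search", "checkout"]

matrix = list(product(test_cases, operating_systems, browsers))
print(len(matrix))  # 3 test cases x 3 OSes x 3 browsers = 27 runs per change
```

Adding a single new platform or test case multiplies the total, which is why execution time climbs so quickly as coverage grows.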

1. Introduce Continuous Integration (CI) Tools:

When going from a few scripts to a few thousand scripts, you begin to notice some growing pains. I frequently notice engineers manually executing script batches one at a time in a serialized manner and monitoring them as they execute. This consumes both time and resources. This becomes even more challenging when regression runs overnight or during weekends, with no one available to troubleshoot. As your automation grows, you need to have an infrastructure in place that allows you to scale and be able to run regressions unattended.

I recommend using a Continuous Integration (CI) tool such as CircleCI or Jenkins to manage automated regression executions. These tools can help you bring up virtual machines, start regressions, handle a more dynamic queuing mechanism, monitor regressions, and warn you if something goes wrong and requires manual intervention. They can also trigger a recovery mechanism if something becomes non-operational.

2. Use CI Tools not Only to Run Scripts, but also to Automate All Manual Steps!

When it is time to execute regression, there are a series of steps that need to be executed before a script can be run, such as:

  • Loading the new software to be tested
  • Updating your scripts with the latest version
  • Configuring servers
  • Executing scripts
  • Posting results
  • Communicating failure details
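The steps above can be chained into one unattended job. A minimal orchestration sketch follows; each step body is a placeholder for the real install, configuration, and execution commands a CI tool would run:

```python
# Each function stands in for a real step (shell commands, API calls, etc.);
# the values stored in ctx are illustrative placeholders.
def load_build(ctx):           ctx["build"] = "loaded"
def update_scripts(ctx):       ctx["scripts"] = "latest"
def configure_servers(ctx):    ctx["servers"] = "configured"
def execute_scripts(ctx):      ctx["results"] = {"passed": 41, "failed": 1}
def post_results(ctx):         ctx["posted"] = True
def communicate_failures(ctx): ctx["notified"] = ctx["results"]["failed"] > 0

PIPELINE = [load_build, update_scripts, configure_servers,
            execute_scripts, post_results, communicate_failures]

def run_regression_job():
    """Run every step in order, passing shared state between them."""
    ctx = {}
    for step in PIPELINE:
        step(ctx)  # a real CI tool would also retry, time out, or alert here
    return ctx

ctx = run_regression_job()
```

The point is that once every step is expressed as an automated stage, the whole chain can run overnight with no one watching.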

Very frequently, we see customers using CI only to run scripts, relying on a manual process for the remaining steps, which is a very time-consuming approach.

I recommend minimizing manual intervention, and trying to achieve an end-to-end automated process using a CI tool that allows you to monitor and orchestrate the different tasks. (more…)

Why Manual Testing Helps Your Release

August 11th, 2015 by Ashley Hunsberger

Will we ever truly be at 100% automation?  I hope not. Of course automation is critical in implementing Continuous Integration and Delivery, but there are just some things that you can’t leave to a machine. Human evaluation is important.

In a world where we are looking to release faster and faster, why would we want manual testing?  Let’s take a look at some of the things you may want to do that automation can’t, and how manual evaluation helps us deliver the right product.

The human aspect

Several years ago, our UX team kept asking, “Is it delightful?”  I’ve worked on many features that I truly felt would make for a better experience in education.  There are several that, frankly, I just couldn’t stand. We just weren’t building the right product sometimes; even if all tests passed, and there were no bugs — if I didn’t like using it, I found myself asking, “How would users feel?” I have to say, I’m fascinated by the human factor and evoking feelings (for better or worse) when testing software.

As a consumer of software, sometimes I find myself thinking, Wow, did ANYONE look at this? (For example, I’m on the Board of Directors for my Home Owner’s Association, and the software we use to track documents, get assessments, and so forth just makes me want to cry).

I’ll be honest – sometimes it is difficult to see how usable something is until there is something to use it for. Hopefully, though, we can spot this early as we define acceptance criteria and are evaluating the workflows, specs, wireframes, or prototypes.

Be like Lewis and Clark

The obvious manual testing activity that should be at the top of everyone’s list is exploratory testing. In an ideal world, the features themselves are completely automated, and development is done when all tests pass. This is fantastic, but what if that were it? Lewis and Clark had specific goals, and reported what they found along the way. Exploratory testing is similar for me: I start with a charter (a goal) from a user perspective, and report what I find along the way. Perhaps a feature works fine in unit and integration tests, but once I start exercising it myself in an end-to-end workflow, I think of scenarios that perhaps we didn’t consider when writing scripts. (more…)

The Benefits of Parallel Testing

August 6th, 2015 by Greg Sypolt

Running in slow motion?

Are you running, but can’t make your feet move as fast as you want them to? This is a common feeling among beginners as well as experienced automation developers. As your regression suite grows, tests take longer to run, and soon the suite’s runtime itself becomes a problem. There are a few approaches to smarter testing: reduce the number of tests, run only the tests applicable to the change, or optimize the test execution.

Marching toward continuous integration

As your software development team marches toward Continuous Integration (CI), the process involves a lot of automated testing. Even automated testing consumes precious time, so you need to ensure your automated tests are designed to scale. If a particular change is going to cause one or more tests to fail, the team needs to know about it as quickly as possible. Developing scripts to be lean and independent allows for fast feedback to developers.

Let’s unleash the power of parallelization

Use parallelization to speed up slow automated UI tests, and set a standard of developing lean and independent tests. A single Cucumber scenario can easily take minutes to run. When you have a lot of scenarios, they quickly compound, and your suite can take several minutes, or even hours, to complete. No one wants automated tests so slow that they only run a couple of times per day. Everyone expects automated tests to be launched for every build and to send feedback within minutes, not hours. Set a standard for every build: test execution must complete and send feedback within 10 minutes. (more…)
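The effect of parallelization is easy to demonstrate with stand-in tests. Here is a sketch using Python's `concurrent.futures`, where a `time.sleep` substitutes for a slow browser scenario (the scenario names and timings are invented for illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def make_scenario(name, seconds=0.2):
    """Build a toy 'UI test' whose sleep stands in for real browser steps."""
    def scenario():
        time.sleep(seconds)
        return (name, "passed")
    return scenario

scenarios = [make_scenario(f"scenario_{i}") for i in range(8)]

start = time.time()
serial_results = [s() for s in scenarios]   # one after another
serial_seconds = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:  # all at once
    futures = [pool.submit(s) for s in scenarios]
    parallel_results = [f.result() for f in futures]
parallel_seconds = time.time() - start

# Serial wall-clock time is roughly the sum of all scenarios; parallel
# wall-clock time approaches the duration of the slowest single scenario.
```

This only works if each scenario is independent: no shared logins, no ordering assumptions, no leftover state, which is exactly why lean, independent tests are the prerequisite for parallel execution.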

To Validate or Verify?

July 30th, 2015 by Ashley Hunsberger

You say to-MAY-toe, I say to-MAH-toe.

I hear the questions daily — “Did you validate the system?  Did you verify the feature?” The words validate and verify are used interchangeably, but what do they really mean? Is there a difference? In the world of software development and quality assurance, yes… and you need to do both. It’s even more important for the tester to understand what they are, and what each entails – and how some definitions may change in a world where waterfall is out and continuous delivery is king.

What’s the difference?

The English definitions of validate and verify are pretty close; in fact, they are listed as synonyms of each other.

Validate – to recognize, establish, or illustrate the worthiness or legitimacy of (Def. 2b, Merriam-Webster Dictionary Online)

Verify – to prove, show, find out, or state that (something) is true or correct (Merriam-Webster Dictionary Online)

In other words, is something right?

But how do we use them in software development? As I started thinking about this (and slowly started to get more and more irked by people using the words interchangeably), I realized that we may see a shift in how most people view verification and validation as more development teams adopt continuous delivery practices. (more…)

Using QA to Enhance Communication

July 21st, 2015 by Ashley Hunsberger

Have you ever worked on a project and found yourself constantly shaking your head? I can say that 99% of the time I experienced frustration, it was largely due to communication issues within a team. I’ve been on project teams and wondered if anyone there had ever taken a basic communications course and learned concepts like active listening, empathy, and being clear and concise. A team that can communicate will find success, but what about those who are not interacting well? Who can help your team get back on track? Believe it or not, your answer is the tester.

The Communication Breakdown

What exactly is keeping your team from communicating effectively? The answer may not be so obvious. In my experience, I’ve seen a few key contributors that include, but are not limited to:

The team does not own quality
Of course, life would be easier if we could say, “Well, I did my job! I’m done! Time for the tester!” But how does that foster communication in a team if you just throw something over the proverbial fence? If you set out only to do your job and not to understand anyone else’s, are you really being part of a team? You know the saying — “There is no I in TEAM.” We usually hear that as kids when we first play sports, but it goes for software development projects, too!

Not seeing the big picture
In an Agile world, it is so easy to start looking only at the small, granular user story level. But when you start to lose the big picture, communicating becomes more and more difficult. There have been projects I’ve worked on where I just didn’t understand the business reasons (and I wasn’t the only one). Can you guess how many of those projects succeeded? (more…)