Should You Have a Dedicated Automation Team Within Your QA Department?

September 1st, 2015 by Israel Felix

If you’ve led or managed QA teams that include a dedicated automation test team, you’ve probably faced the decision of whether to keep that team on board. The question usually comes up when there is a change in leadership and the new management arrives with a mandate to consolidate groups and reduce costs. It also tends to arise at startups and small companies that are ready to build or augment their QA teams. So should you have a dedicated automation team?

Typically, there are two camps with regard to dedicated automation teams: those who believe we should have dedicated automation teams, and those who believe QA engineers should handle both manual and automated testing. From my experience working in QA at both small and large companies, I almost always prefer to have a dedicated automation team. However, there are a few scenarios where a QA team that takes on both roles might make sense.

Time to Market

For automation to be done right, it needs to be a full-time job. Developing the framework, creating libraries and scripts for different platforms, and executing runs and debugging failures will simply consume too much of an engineer’s time and compromise both the actual testing and the release date. As you already know, time to market and keeping a release on schedule are top priorities, so testing needs to get done, no matter what.

If you don’t have a dedicated automation team, automation will most likely suffer because engineers are consumed with manual testing and with reporting and duplicating bugs for Development to fix. If you ask engineers to prioritize automation instead, manual testing could suffer because they spend too much time on automation-related tasks and are unable to complete testing on time.

If you decide to have a QA team that fulfills both roles, I recommend two things:

  • Have a support team that can help the QA team with their automation by providing libraries and templates. The support team will not be automating features and test cases, so they wouldn’t be considered an automation team.
  • Have the QA team primarily automate acceptance test cases, and wrap up the remaining automation after the product is released.

Getting the Existing Team On Board with Automation (Scripts)

August 27th, 2015 by Greg Sypolt

Introduction

In an attempt to do more with less, organizations want to test their software adequately, as quickly as possible. Businesses demand quick turnaround, pushing new features and bug fixes to production within days without sacrificing quality. Everyone knows manual testing is labor-intensive and error-prone, and it cannot provide the kind of repeatable quality checks that automated tests can. Nobody likes change, but it’s time to educate your team on the importance of onboarding automated testing.

The only way to make sense out of change is to plunge into it, move with it, and join the dance.  – Alan Watts

Everyone has had a job interview at some point in their lives, right? It is important to be prepared! The first few minutes of an interview are a make-or-break moment. Why? Because first impressions have long-lasting effects; never underestimate their power. The same principle applies when onboarding automation to an existing manual testing team. Treat your initial presentation to your team or organization like a job interview: be prepared, set expectations, and explain responsibilities. This is critical, since it is normal for employees to react emotionally to anything they view as a threat to their jobs.

Why automated testing?

If things are going well, why do we want to implement automated tests? The demand to do more with less makes purely manual testing an impossible task, but introducing automated testing into an existing software development lifecycle can be daunting. Once implemented, however, automated testing is a valuable asset that shortens testing cycles and helps teams become more agile.

[Image: marketing automation illustration. Source: http://marketing-works.net/]

Continuous Integration – The Heart of DevOps

August 26th, 2015 by Chris Riley

Missed our earlier webinar, “Beyond the Release – CI That Transforms”? Check out the recap below.

In this webinar, we discussed the power of CI and possible considerations for the future. One of the more interesting aspects of the webinar was seeing the modern pipeline with the new Sauce CI Dashboard alongside CloudBees’ automation templates. We also conducted a small CI survey at the beginning of the webinar, and ended with a Q&A.

The Power of CI

Continuous Delivery and Deployment (CD) steal the show in DevOps conversations, but the reality is that Delivery and Deployment are not for every organization, nor are they yet widely adopted. To move on to delivery and deployment, organizations must first get Continuous Integration (CI) right, unless they were built from day one on a DevOps framework and never had to fit the processes into existing environments.

The reason CI is so powerful is that it lets you dip your toe into the modern delivery pipeline without the risk or complexity of building out delivery and deployment all at once, with their potential for failure. You can think of CI as the on-boarding step for DevOps. And because CI is nearly identical to delivery and deployment from a process and tool standpoint, once you get it right you can easily move on.

Webinar – CI Survey Results

I’m pleased to say the results of the Sauce Labs CI survey were almost exactly what I expected, served with a side of surprise. To me, the most interesting aspect of the results is how they appear to conflict with the high CI adoption and success rates the market is generally perceived to have. Let’s look at the results from 500+ attendees:

What Types of Automated Tests do you run?

  • Unit 28%
  • Functional 40%
  • Integration 27%
  • None 6%

6% of the attendees are not running automated tests at all! This was astonishing to me. I expected 1% at most, especially given that this audience is already familiar with automation. At a minimum, I would expect every company to automate unit testing. However, there is a good chance that this 6% actually have automated tests that are simply initiated manually.

The results also showed that many are running functional tests. This is great! However, only 27% are running integration tests. This is troubling: 45% of respondents state they are already doing CI, and the lack of integration testing seems to contradict that claim. I suspect this is a definition problem, where some define CI as simply a shared testing environment rather than the CI process described in the webinar.

Announcing Support for Microsoft Edge

August 25th, 2015 by Bill McGee

Following last month’s news about support for Windows 10, we’re tickled to announce that Sauce Labs now also supports automated testing on Microsoft Edge. As part of this update, we have upgraded our version of Edge from v.11 to v.20, adding more stability for both manual and automated tests.

In order to run a test on Edge, you would specify the following desired capabilities (or build the code, including advanced capabilities, using our Automated Test Configurator):

"platform": "Windows 10",
"browserName": "microsoftedge",

Log in to get started – happy testing!
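
If you’d like to see those capabilities in context first, here is a minimal sketch (not taken from the announcement itself) of passing them to a remote WebDriver session using Python’s Selenium bindings; the SAUCE_USERNAME and SAUCE_ACCESS_KEY placeholders and the example URL are assumptions you would replace with your own values:

from selenium import webdriver

# Desired capabilities from the announcement above.
capabilities = {
    "platform": "Windows 10",
    "browserName": "microsoftedge",
}

# Point the remote driver at the Sauce Labs grid (placeholder credentials).
driver = webdriver.Remote(
    command_executor="http://SAUCE_USERNAME:SAUCE_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities=capabilities,
)
driver.get("http://www.example.com")
print(driver.title)
driver.quit()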

Building Applications for Quality

August 20th, 2015 by Zachary Flower

As developers, when building new projects from the ground up, we have a tendency to shoot first and ask questions later. Facebook popularized this attitude with its old motto, “Move Fast and Break Things,” and it’s a notion that seems to have become firmly embedded in startup culture. Unfortunately, what often happens is that once things get broken, we fix them with Band-Aids and duct tape in order to keep up the fast pace we’ve established. While we often put a premium on the end result, the foundation we build to achieve that result is just as important.

For every line of hacky code we write, even if it does what it’s supposed to, we compound the amount of work that future developers on the project will have to do once the company is finally in a place to “do it right.” In the face of legacy code, we will always be pushed to compromise quality for speed, but it is important to communicate the true impact code debt has on the future of a project. Eventually, Facebook learned this lesson, changing its motto last year to “Move Fast with Stable Infra(structure).” By putting an emphasis on stability, Facebook essentially enacted a speed limit on development to encourage bug-free code.

So, how do we balance speed and quality? The answer, unsurprisingly, is to establish a set of rules and processes for you and your team to follow. Before beginning any work on a new application, it is important to lay the proper groundwork. This is the time to set up your deployment processes, version control workflow, and unit test frameworks. You should also use this time to build out your code style guide and plan your application architecture.
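
As a tiny illustration of that groundwork, here is a minimal unit test sketch using Python’s built-in unittest module; the slugify helper and its expected behavior are invented purely for this example and are not part of any real project:

import unittest

def slugify(title):
    # Hypothetical helper: lowercase a title and join its words with hyphens.
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(
            slugify("Building Applications for Quality"),
            "building-applications-for-quality",
        )

if __name__ == "__main__":
    unittest.main()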

[WEBINAR] Making the Transition from Manual to Automated Testing

August 19th, 2015 by Bill McGee

If you have been manually testing your Web application, making the transition to test automation can seem daunting. Where do you start, how do you determine which tests to automate, and what tools do you use? These questions are at the top of the list for every first-timer. With a little preparation and the right tools, the transition to building test automation becomes much smoother.

In this webinar we will discuss the key differences between manual and automated testing and provide tips to help you prepare for the transition. We will also show you how QA analysts who are not programmers can build test automation quickly and easily using the eureQa Testing Platform, and run those tests across browsers and devices on the Sauce Labs Cloud.

This webinar will cover:

  • Preparing to automate cross-browser/cross-device tests – what to expect and what to look out for
  • Planning your test automation – what, when, who, and how
  • Maintaining your test automation – keeping up with application changes
  • Getting value from your automation – managing test results
  • A demo of the eureQa Testing Platform, showing how you can build test automation and run it on multiple browsers and devices on Sauce Labs

Join us for this presentation next Tuesday, August 25th, at 11am PDT / 2pm EDT. A live Q&A will follow the presentation.

Click HERE to register today.

Want to learn more about automated testing? Download this free white paper, “How To Get The Most Out Of Your CI/CD Workflow Using Automated Testing”.

Appium + Sauce Labs Bootcamp: Chapter 4, Advanced Desired Capabilities

August 18th, 2015 by Jonathan Lipps

Appium, by and large, supports the desired capabilities you’re familiar with from Selenium. However, given that it also exposes mobile-specific functionality, we have some mobile-specific desired capabilities. We even have capabilities that are specific to one mobile platform: iOS or Android. For the full list of capabilities Appium supports, please see the capabilities doc.

Environment-specifying Capabilities

The main differences you’ll notice have to do with selecting which type of mobile environment you want to automate. Appium has adopted the “Selenium 4” style of session capabilities, which means that instead of capabilities like this, for example:

{
  browserName: 'safari',
  version: '6.0'
}

You would specify the following, for Mobile Safari on iOS 7.1:

{
  platformName: 'iOS',
  platformVersion: '7.1',
  deviceName: 'iPhone Simulator',
  browserName: 'safari'
}

The new capabilities platformName, platformVersion, and deviceName are required for every session. Valid values for platformName are iOS and Android.

To automate a native or hybrid app instead of a mobile browser, you omit the browserName capability and include the app capability, which is a fully-resolved local path or URL to your application (.app, .ipa, or .apk). For example:

{
  platformName: 'Android',
  platformVersion: '5.0',
  deviceName: 'Android Emulator',
  app: '/path/to/my/android_app.apk'
}

Sometimes we might want to run an Android test on an older device, in which case we need Appium’s built-in Selendroid support. To use the Selendroid backend, keep everything else the same, but add the automationName capability, as follows:

{
  automationName: 'Selendroid',
  platformName: 'Android',
  platformVersion: '5.0',
  deviceName: 'Android Emulator',
  app: '/path/to/my/android_app.apk'
}

And of course, if you’re running any tests on Sauce Labs, make sure to specify the version of Appium you’d like to use with the appiumVersion capability:

{
  platformName: 'Android',
  platformVersion: '5.0',
  deviceName: 'Android Emulator',
  browserName: 'Browser',
  appiumVersion: '1.3.4'
}
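
To tie these capabilities together, here is a minimal sketch (assuming the Appium Python client) of starting a session against Sauce Labs with Android capabilities like the ones above; the credentials and the app URL are placeholders, not values from this post:

from appium import webdriver

caps = {
    "platformName": "Android",
    "platformVersion": "5.0",
    "deviceName": "Android Emulator",
    "app": "http://example.com/path/to/my/android_app.apk",  # placeholder URL
    "appiumVersion": "1.3.4",
}

# Placeholder Sauce Labs credentials.
driver = webdriver.Remote(
    command_executor="http://SAUCE_USERNAME:SAUCE_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities=caps,
)
# ... interact with the app here ...
driver.quit()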

Reducing Regression Execution Times

August 13th, 2015 by Israel Felix

We all know the saying “time is money.” QA managers are constantly under pressure not only to deliver high-quality software products, but also to do so within time constraints.

Regression testing is a vital part of any software development life cycle; it ensures that no new errors are introduced as a result of new features or fixes to existing bugs. Every time we modify existing source code, new and existing test cases need to be executed on different configurations, such as operating systems and platforms. Testing all of these permutations manually is simply not cost- or time-effective, and it is inconsistent. Automated regression addresses these challenges. But as feature coverage and permutation testing grow, execution times increase to a level that is no longer acceptable for delivering high-quality products on a tight schedule. Here are a few ways to improve execution time:

1. Introduce Continuous Integration (CI) Tools:

When going from a few scripts to a few thousand scripts, you begin to notice some growing pains. I frequently see engineers manually executing script batches one at a time, in a serialized manner, and monitoring them as they run. This consumes both time and resources, and it becomes even more challenging when regression runs overnight or over a weekend with no one available to troubleshoot. As your automation grows, you need an infrastructure in place that allows you to scale and to run regressions unattended.

I recommend using a Continuous Integration (CI) tool, such as CircleCI or Jenkins, to manage automated regression runs. These tools can help you bring up virtual machines, start regressions, handle a more dynamic queuing mechanism, monitor runs, and warn you when something goes wrong and requires manual intervention. They can also provide a recovery mechanism that is triggered if something becomes non-operational.

2. Use CI Tools not Only to Run Scripts, but also to Automate All Manual Steps!

When it is time to execute a regression, there is a series of steps that needs to be carried out around the scripts themselves, such as:

  • Loading the new software to be tested
  • Updating your scripts with the latest version
  • Configuring servers
  • Executing scripts
  • Posting results
  • Communicating failure details

Very frequently, we see customers using CI only to run scripts, relying on a manual process for the remaining steps, which is very time-consuming.

I recommend minimizing manual intervention and aiming for an end-to-end automated process, using a CI tool that lets you monitor and orchestrate the different tasks.
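
As a rough, purely illustrative sketch of what that end-to-end orchestration might look like, the short Python script below chains the manual steps listed earlier so that a CI tool such as Jenkins or CircleCI can run them as a single job; every command and script name here is a placeholder, not something taken from the article:

import subprocess

def run(step, command):
    # Run one stage of the regression job; fail fast so the CI tool can flag
    # the broken step and trigger whatever recovery mechanism is in place.
    print("==> " + step)
    subprocess.check_call(command, shell=True)

run("Load the new software build", "./deploy_build.sh latest")
run("Update test scripts to the latest version", "git pull --rebase")
run("Configure servers", "./configure_servers.sh staging")
run("Execute regression scripts", "pytest tests/regression --junitxml=results.xml")
run("Post results and communicate failures", "./post_results.sh results.xml")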

New! Support for OS X 10.11 and iOS 8.3, 8.4 & 9.0

August 12th, 2015 by Ken Drachnik

In our continuing effort to make Sauce Labs the best place to test, we’ve just added some additional OSes and browsers. In addition to the Windows 10 and Microsoft Edge support we announced last week, today we are extending the platforms we support to include released and upcoming iOS and OS X operating systems:

  • OS X 10.11 El Capitan (beta)
  • iOS 8.3 and 8.4
  • iOS 9 (beta)
  • Safari 8.0.7

To use any of these platforms, visit our platforms configurator.
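
As one hedged example of what requesting these platforms might look like (the exact strings should always be taken from the configurator, since the values shown here are assumptions), mobile Safari on iOS 8.4 could be requested with capabilities along these lines, written as a Python dictionary:

# Assumed capability values; confirm them in the Sauce Labs platforms configurator.
ios_safari_caps = {
    "platformName": "iOS",
    "platformVersion": "8.4",
    "deviceName": "iPhone Simulator",
    "browserName": "Safari",
}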

When we ask our users what they love about Sauce, one of the most common responses is the wide variety of platforms we provide. In an ecosystem of increasing complexity, we know our customers have a lot of devices and platforms they need to test against. Our humble aspiration is to make your life just a little bit easier by having the platforms you need, when you need them.

Sign in to start testing

Why Manual Testing Helps Your Release

August 11th, 2015 by Ashley Hunsberger

Will we ever truly be at 100% automation?  I hope not. Of course automation is critical in implementing Continuous Integration and Delivery, but there are just some things that you can’t leave to a machine. Human evaluation is important.

In a world where we are looking to release faster and faster, why would we want manual testing? Let’s take a look at some of the things you may want to do that automation can’t handle, and at how manual evaluation helps us deliver the right product.

The human aspect

Several years ago, our UX team kept asking, “Is it delightful?” I’ve worked on many features that I truly felt would make for a better experience in education. There are several that, frankly, I just couldn’t stand. Sometimes we simply weren’t building the right product; even if all tests passed and there were no bugs, if I didn’t like using it, I found myself asking, “How would users feel?” I have to say, I’m fascinated by the human factor and the feelings (for better or worse) that software evokes during testing.

As a consumer of software, sometimes I find myself thinking, Wow, did ANYONE look at this? (For example, I’m on the Board of Directors for my Homeowners’ Association, and the software we use to track documents, get assessments, and so forth just makes me want to cry.)

I’ll be honest: sometimes it is difficult to see how usable something is until there is something to use it for. Hopefully, though, we can spot this early as we define acceptance criteria and evaluate the workflows, specs, wireframes, or prototypes.

Be like Lewis and Clark

The obvious manual testing activity that should be at the top of everyone’s list is exploratory testing. In an ideal world, the features themselves are completely automated, and development is done when all tests pass. This is fantastic, but what if that were it? Lewis and Clark had specific goals, and they reported what they found along the way. Exploratory testing is similar to me: I start with a charter (or goal) from a user perspective, and I report what I find along the way. Perhaps a feature works fine in unit and integration tests, but once I start looking at it myself in an end-to-end workflow, I think of scenarios that we didn’t consider when writing scripts.