Top 5 Quality Disasters (or Misses) of 2015

February 8th, 2016 by Ashley Hunsberger

2015 was quite the year for quality in almost every industry. Here are some defects (some disastrous, some just funny) that caught my attention over the last year, along with a few lessons we can apply as we develop our own test strategies (data usage, environments, security, and more) in our day-to-day work.

#1 – Social Media: Facebook tells me I’ve known you since before I was born!

Image Source: http://ti.me/1mp25FH

This was probably one of the funnier, and definitely not critical, bugs of the year. (Let’s face it — no one actually got hurt, and I think we all chuckled a little when we saw it pop up in our news feeds!) While no one actually _confirmed_ what happened, most theories center on the Unix epoch (January 1, 1970, which with time zone adjustments can render as December 31, 1969), exactly 46 years before the day we started seeing our nearly golden friendships appear. Microsoft engineer Mark Davis offered his hypothesis (see http://ti.me/1mp25FH).
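
To see how a zero or missing timestamp can surface as December 31, 1969, here is a minimal Python sketch (the zero-default "friends since" field is a hypothetical stand-in for whatever Facebook actually stores):

from datetime import datetime, timezone, timedelta

# A missing "friends since" value stored as 0 lands on the Unix epoch.
friends_since = 0  # hypothetical default for an unset timestamp

# In UTC, epoch 0 is January 1, 1970...
print(datetime.fromtimestamp(friends_since, tz=timezone.utc))
# 1970-01-01 00:00:00+00:00

# ...but in a time zone west of UTC (e.g., US Pacific, UTC-8) it is still 1969.
pacific = timezone(timedelta(hours=-8))
print(datetime.fromtimestamp(friends_since, tz=pacific))
# 1969-12-31 16:00:00-08:00

# Computing "years of friendship" from that default on December 31, 2015 yields 46.
print(2015 - datetime.fromtimestamp(friends_since, tz=pacific).year)  # 46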

What can we learn from this? Let’s say you’re adding a feature that deals with dates. Always consider testing not just with fresh data, but also with data that existed before the feature was added. I must admit that on my own team we hate dealing with date/time stamps, since it usually involves manipulating data into unrealistic scenarios, but test we must. Make sure you are doing some time travel with your data — it gets you closer to what your customers will actually have!
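
As a sketch of that kind of time travel (the anniversary function and test data here are hypothetical, not Facebook’s actual code):

from datetime import datetime, timezone
from typing import Optional

def friendship_years(friends_since_ts: Optional[int], today: datetime) -> int:
    # Hypothetical feature under test: whole years since a friendship began.
    if friends_since_ts is None:
        # Reject missing data rather than silently defaulting to epoch 0.
        raise ValueError("friendship start date missing")
    started = datetime.fromtimestamp(friends_since_ts, tz=timezone.utc)
    return today.year - started.year

def test_legacy_row_with_missing_timestamp_is_rejected():
    # Simulates pre-feature data where the column was never populated.
    today = datetime(2015, 12, 31, tzinfo=timezone.utc)
    try:
        friendship_years(None, today)
        assert False, "expected missing timestamps to be rejected"
    except ValueError:
        pass

def test_backdated_row_computes_correctly():
    # Time travel: seed data as if it were written years before the feature shipped.
    ts_2008 = int(datetime(2008, 6, 1, tzinfo=timezone.utc).timestamp())
    today = datetime(2015, 12, 31, tzinfo=timezone.utc)
    assert friendship_years(ts_2008, today) == 7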

Announcing New REST API Rate Limits

February 3rd, 2016 by Yaroslav Borets

Sauce Labs is introducing new rate limits on our REST API endpoints in order to ensure a great experience for all of our customers. In addition to the recent limits placed on the number of requests per second, we will be implementing further restrictions with dedicated hourly request limits for each endpoint. The new restrictions will limit access to all endpoints to 10 reqs/s or 3,500 reqs/hour if the user is logged in, and 2 reqs/minute if the user is logged out. The limits will be tracked on a per-account basis for both logged-in and logged-out users.

The new limits will go into effect on Tuesday, March 1st, 2016. We strongly encourage customers who use the REST API to modify their code to gracefully handle the new restrictions. Please refer to the code samples below for how to prepare for the new limits, as well as the headers to use.

The addition of more restrictive rate limits will be handled in a multi-stage process as follows:

  1. Starting February 1st, customers can opt in to the new rate limits in order to test how their code handles rate limiting. The opt-in capability will be provided via a new header.
  2. On March 1st, the new rate limits will be in place by default, but customers can opt out using a dedicated header.
  3. Finally, at the beginning of April, the new rate limits will become permanent, and customers will no longer be able to opt out.
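
As a sketch of graceful handling (the endpoint path reflects the v1 REST API of the time; HTTP 429 with a Retry-After header is a common convention assumed here, and the opt-in/opt-out headers are not shown because their names were announced separately):

import time
import requests

SAUCE_USER = "YOUR_USERNAME"   # placeholder credentials
SAUCE_KEY = "YOUR_ACCESS_KEY"

def get_with_backoff(url, max_retries=5):
    # Retry a GET whenever the API signals rate limiting.
    for attempt in range(max_retries):
        resp = requests.get(url, auth=(SAUCE_USER, SAUCE_KEY))
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honor Retry-After if the server sends it; otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("still rate limited after %d retries" % max_retries)

jobs = get_with_backoff("https://saucelabs.com/rest/v1/%s/jobs" % SAUCE_USER).json()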

Paired Testing: Two Is Better Than One

January 27th, 2016 by Ashley Hunsberger

Paired programming brings two developers together to produce higher-quality code than those same two engineers would write separately. Just as paired programming has one person writing code while another reviews it as it is being written, paired testing has one person testing while another takes notes, asks questions, and spots and reports bugs. I’ve personally found paired testing to foster creativity, maintain focus, provide a new way to teach others, and help release better software in general. Two testers are better than one.

Pick a partner!

It’s probably important to note that not all people actually like to pair up. Let’s face it, many of us work in a world of introverts. Some people just don’t like to talk or share their personal space. That said, there are a few good pairings you can consider.

Changing Development Culture to Become Quality Focused

January 25th, 2016 by Joe Nolan

How many project teams have you worked on where the accepted culture was to rely on the QA members to bear the load for quality? As the leader of a QA meetup, I still constantly hear stories from my members about developers’ assumptions that it is QA’s responsibility to find bugs. Not only is this attitude demoralizing for QA, it is also not in the best interest of the team. How can a team change development culture to one that is quality focused for the entire team?

From the Top Down

The answer seems obvious — management needs to declare that all team members will have a hand in quality. This truly is critical to a successful culture change! Besides empowering dev managers and Scrum Masters to direct team activities (such as enforcing unit tests), it gives QA members the confidence to push the team as well.

Consider Your Application’s Home: Designing for Resiliency

January 21st, 2016 by Eric Jeanes

I am a firm believer in taking a cross-disciplinary approach to technology — taking something learned in one subject area and applying it to a problem in our everyday work. The political philosopher John Rawls, in his seminal work A Theory of Justice, provides a construct that (surprisingly) has a place in specific stages of application development.

When building systems, we are constantly held back to some degree by technical debt: the time lost to repetitive tasks and to the bug fixes required to keep systems up and running. Not only is this time costly, it is also typically less interesting than designing and constructing new systems (naturally).

A Functional Tester Looks at Performance

January 15th, 2016 by Ashley Hunsberger

Even if you aren’t directly responsible for performance, it is important to consider it under the umbrella of quality. As a tester, how do you move forward and help drive performance quality (especially when you are new to this area, like me)? What are the ramifications of not considering performance within QA? Let’s take a look at what performance is, the questions QA can ask during design and implementation, some of the types of testing that can be done, and making performance part of your acceptance criteria (and, therefore, part of your Definition of Done).

What is software performance, and why is it important?

As an end user, I think of performance as simply how fast or stable something is. If I click on something in a website, does it take forever to load? Does my app crash every time I try to open it or submit something? Do I give up and find a better solution to meet my needs? Of course we want a feature to work, but do we think about the system holistically?

I can tell you now that if a website or app that I am using crashes, I instantly think that the quality is just not there. If I have a choice in what I use, I quickly delete it and find another that does work. You may be tied into an app and not have a choice, but your opinion of that app (and the company) can quickly plummet based on stability alone.

Although performance is multi-faceted, some basic topics to think about include:

  • Response time – How quickly does the system react to user input?
  • Throughput – How much can the system accomplish in a specified amount of time?
  • Scalability – Can the system increase throughput under an increased load when resources are added?
  • Availability – Is the system available for service when requested?
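
These facets are measurable. As a rough illustration (the URL is a placeholder, and a real assessment would use a proper tool), response time and throughput can be sampled in a few lines of Python:

import concurrent.futures
import time
import requests

URL = "https://example.com/api/health"  # placeholder endpoint

def timed_get():
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

# Response time: latency of individual requests.
samples = [timed_get() for _ in range(20)]
print("avg response time: %.3fs" % (sum(samples) / len(samples)))

# Throughput: completed requests per second under modest concurrency.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(lambda _: requests.get(URL, timeout=10), range(100)))
print("throughput: %.1f reqs/s" % (100 / (time.perf_counter() - start)))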

To retain customers, you must consider performance as part of overall quality.

Understanding performance during development

The problem I’ve seen is that performance is always deemed important, but is not necessarily addressed up front. All too often I recall discussing performance long after a feature was coded and tested. It was pushed to the end, and it is difficult to make features meet performance expectations after they’ve been built. This was hard enough in a Waterfall world, but how do you handle what was once an afterthought as more and more companies move toward a Continuous model? Performance needs to be considered first and understood by the whole team.

Here are some sample questions to consider DURING design and development, organized by non-functional requirement, to help ensure you are thinking about performance needs early on (remember, you will need to discuss these as a team and get guidance on what is expected):

Response Time

  • What is the acceptable waiting time for users?
  • Do we need to consider users on various devices and speeds? Do we need to simulate slower speeds? Some may be on modern desktops/laptops on high-speed Internet, or modern mobile devices on 3G and above — but others may not.
  • Example – Changing a password. How long before I can expect a change to take effect? Do I need to show progress feedback?

Data Volume

  • How do we ensure that data volume does not impact user experience?
  • What's the maximum and typical volume of data that will be involved?
  • Example – Entering a page that lists users. Do I show all 20,000 users in the system? Do I show the first 25? How long does it take? Can I perform other actions while the list generates? Do I see a blank screen while I wait?

Caching

  • Cache is king. Queried and calculated data can be reused to eliminate duplicate work.
  • When do we need to invalidate the cache? Which data cannot tolerate staleness? How long can the cache live?
  • Could cache staleness impact the system, or the user session only?
  • Example – Notification badges. How long do those notifications last: once a user has viewed them, or just upon first login? If the cache is stale, does it impact just the authenticated user?
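
As a tiny illustration of the cache-lifetime questions above, here is a minimal TTL-cache sketch (the badge example and the 30-second TTL are made up):

import time

class TTLCache:
    # Minimal time-to-live cache: entries expire instead of serving stale data forever.
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # invalidate the stale entry
            return None
        return value

# Keying by user scopes any staleness to a single authenticated session.
badge_cache = TTLCache(ttl_seconds=30)
badge_cache.set(("user-42", "notifications"), 5)
print(badge_cache.get(("user-42", "notifications")))  # 5, until the TTL lapses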

Testing performance

There are several types of testing that can help ensure your apps are performing as expected. Please note that this is not a comprehensive list, just a high-level overview to get you started [1]:

  • Load – Application is tested for response times during normal and peak usage. How does the app respond with a few users completing a few interactions vs. thousands of users completing thousands of interactions at a time?
  • Stress – Finds ways to break the system by increasing the load. Start with a good benchmark (identified during your load testing), and increase the load until you see which components start lagging and fail first.
  • Volume – Tests if application performance degrades with more data volume. Do you access the database directly? How does it handle the query if there are millions of records?
  • Reliability/Recovery – If your app does fail, testing will show if and how it recovers, and how long it takes to get back to an acceptable state.
  • Scalability – Tests if your app’s performance improves when you add resources (hardware, memory, etc.).
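
For example, a bare-bones stress ramp (placeholder URL; a real effort would use a dedicated load-testing tool) might step up concurrency until latency climbs or requests start failing:

import concurrent.futures
import time
import requests

URL = "https://example.com/api/search"  # placeholder endpoint

def hit(_):
    try:
        return requests.get(URL, timeout=10).ok
    except requests.RequestException:
        return False

# Ramp the load until something starts lagging or failing.
for workers in (10, 50, 100, 200):
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(hit, range(workers * 5)))
    elapsed = time.perf_counter() - start
    print("workers=%d elapsed=%.1fs failures=%d"
          % (workers, elapsed, results.count(False)))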

Improving performance quality faster

It’s time to stop pushing performance to the end and hoping for the best. As stories are designed, add performance to your acceptance criteria, and make sure every criterion is met before a story is marked complete (it is part of your Definition of Done). A sketch of one such criterion follows.
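
A response-time budget from a story's acceptance criteria can be encoded directly as a test (the endpoint and the 2-second budget are made up for illustration):

import time
import requests

def test_password_change_meets_response_budget():
    # Hypothetical acceptance criterion: password change responds within 2 seconds.
    start = time.perf_counter()
    resp = requests.post(
        "https://example.com/api/password",  # placeholder endpoint
        json={"old": "hunter2", "new": "hunter3"},
        timeout=5,
    )
    elapsed = time.perf_counter() - start
    assert resp.ok
    assert elapsed < 2.0, "password change took %.2fs (budget: 2s)" % elapsed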

As with anything, the longer you put something off, the more difficult (and/or expensive) it is to implement later. Be proactive, and build performance in.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test-Driven Development to anyone who will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

References

[1] Summarized from http://goo.gl/RC4AaS and http://goo.gl/5ukqAi

Planning Quality Architecture for 2020

January 11th, 2016 by Greg Sypolt

I was inspired by Denali Lumma (@denalilumma) when she delivered a glimpse of the future in her talk about 2020 testing at the Selenium 2015 conference. The session was an excellent introduction, comparing scenarios from the minority of elite testing organizations against the more common development team: the elite companies consider infrastructure FIRST, while the majority thinks about infrastructure LAST. It got my wheels turning about the future of software development. I don’t have all the answers right now, but I want to be part of the movement to plan and build architecture with quality in mind. A few words come to mind when thinking about quality architecture — automation, scalability, recoverability, and analytics.

Build a culture

When building a culture, avoid too much control. You want a culture that embraces freedom, responsibility, and accountability. Why is building a culture like this important? It allows passionate employees to innovate and find big-time solutions. You can’t plan for innovation. It naturally happens. When you give passionate employees an inch, they’ll take a mile. The future team culture needs to push the envelope and step outside their comfort zone.

Re-Energize Your QA Career With Automation and DevOps

January 7th, 2016 by Joe Nolan

It’s time for you to stop being content with the status quo and re-energize your QA career with Automation and DevOps — otherwise, you might find yourself fading away like Marty McFly! I’m talking to YOU, manual tester! And YOU, QA manager! Oh, and YOU TOO, automation engineer! Every one of you who has a vested interest in your career growth needs to familiarize yourself with automation and DevOps tools.

Of Course You Need to Understand Automation

Let’s face it: In this day and age of software development, speed is the key to survival. In order to achieve clean builds, Continuous Integration, Continuous Delivery, and Agile development, manual testing just ain’t gonna cut it.

Everyone with the QA title needs to continuously build on their skill set, just like a developer. Even if you aren’t actively writing automation code, you still need to understand the capabilities and benefits of each type of automated test, especially the ones written by your development team. The team is relying on your expertise to guide them with acceptance criteria for stories, while bringing QA concepts to the table.

Introducing Stability Improvements to Sauce Connect

January 5th, 2016 by Yaroslav Borets

Starting with Sauce Connect v4.3.13, new features have been added that aim to increase stability when testing websites behind a firewall.

1) Prevent Sauce Connect from shutting down when actively being used by job(s)

Argument:  --wait-tunnel-shutdown

Adding the above argument prevents users from shutting down their tunnels while jobs are still running.  Should the user attempt to close the tunnel, they will receive a warning as well as a count of tests currently using the tunnel (see example output below). Once the test(s) using the tunnel complete, the tunnel will close itself.

05 Jan 09:08:20 - Sauce Connect is up, you may start your tests.
05 Jan 09:11:13 - Cleaning up.
05 Jan 09:11:13 - Removing tunnel 3a248df76eb14145ad0401c6c4aaf690.
05 Jan 09:11:13 - Waiting for any active job using this tunnel to finish.
05 Jan 09:11:13 - Press CTRL-C again to shut down immediately.
05 Jan 09:11:13 - Number of jobs using tunnel: 1.
05 Jan 09:11:19 - Number of jobs using tunnel: 1.
05 Jan 09:11:25 - Number of jobs using tunnel: 1.
05 Jan 09:11:33 - Number of jobs using tunnel: 1.
05 Jan 09:11:41 - All jobs using tunnel have finished.
05 Jan 09:11:41 - Waiting for the connection to terminate...
05 Jan 09:11:42 - Connection closed (8).
05 Jan 09:11:42 - Goodbye.

If the user wants to force the tunnel to shut down, they can do so by sending another close command (e.g., Ctrl + C).

2) Prevent Sauce Connect shut down due to colliding tunnels

Argument: --no-remove-colliding-tunnels

When this argument is added, any tunnels started with the same username and tunnel-identifier will be pooled together, creating failover/load balancing for Sauce Connect. Once a pool of tunnels is established, newly started tests will be assigned to an active tunnel in the pool at random.

Note: In order to join a pool, each tunnel must be started with this argument.
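
As a sketch, starting two pooled tunnels from Python might look like this (assuming the standard sc binary and its usual username/access-key/tunnel-identifier flags; only --no-remove-colliding-tunnels comes from this announcement):

import subprocess

# Hypothetical pool of two Sauce Connect tunnels sharing one identifier.
# Every member must pass --no-remove-colliding-tunnels to join the pool.
for _ in range(2):
    subprocess.Popen([
        "sc",
        "--user", "YOUR_USERNAME",            # placeholder credentials
        "--api-key", "YOUR_ACCESS_KEY",
        "--tunnel-identifier", "my-tunnel-pool",
        "--no-remove-colliding-tunnels",
    ])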

The Importance of a Triage Team

December 30th, 2015 by Joe Nolan

I grew up watching shows like M*A*S*H and Emergency!, in which doctor and paramedic characters would triage injuries, determining which were critical and which could wait. If you think about it, bugs in a feature are like injuries to your code, and when they are discovered, they too need to be triaged. Without triage, bug tickets can add time to your development process and even lead to invalid fixes. Every development team should triage their bugs!

Just What Does a Triage Team Do on a Development Team?

How many times have you worked on a bug ticket that says something like “this feature is broken”? You might think this is an exaggeration, but it’s really not too far off, especially if your team conducts bug bashes with users who don’t normally write tickets. Such a ticket either starts a round-and-round process of different team members clarifying it, or worse, a developer takes it upon him or herself to fix what he or she THINKS is implied. All of this is a time suck for the team.

On the other hand, how many times have you wondered why some tickets are being worked while more critical tickets are just sitting there? How frustrating is that? This happens frequently if tickets are not prioritized and put into the backlog.

A good triage team prevents all of this!
