Posts Tagged ‘QA Testing’

JIRA is Just a Click Away with Our New Plugin

May 3rd, 2016 by Ken Drachnik

As more and more developers work in agile teams, they need tools to plan, track, and release software, and many of them use JIRA. As a tracking tool, JIRA is amazing for collaboration and project planning. For many teams, JIRA is the system of record for everything in the software development lifecycle. We have found that many of our customers use JIRA, and the #1 product request was to integrate Sauce Labs’ test results with JIRA so it would be simple to track all the tests associated with a project in one place.

Let’s say you are running an automated or manual test on Sauce Labs and find a bug. You want to add it to JIRA so that someone on your team can take a look, or so that it can be prioritized in the backlog. Historically, you would have to download all the Sauce assets, log in to JIRA, create a ticket, and upload the assets again. This can be tedious when you’re running lots of tests.
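For a sense of what that manual round trip looks like when scripted, here is a rough sketch using Python's requests library. The credentials, project key, and job ID are placeholders, and the Sauce Labs v1 asset endpoint and JIRA Cloud REST endpoints are assumptions based on their public documentation, not part of this announcement:

    import requests

    # All names below are placeholders; the Sauce Labs v1 asset endpoint and the
    # JIRA Cloud REST endpoints are assumptions based on public documentation.
    SAUCE_USER, SAUCE_KEY = "my-user", "my-access-key"
    JIRA_BASE = "https://my-team.atlassian.net"
    JIRA_AUTH = ("me@example.com", "my-api-token")
    JOB_ID = "0123456789abcdef"

    # 1. Download an asset (here, the Selenium log) from the finished Sauce job.
    log = requests.get(
        f"https://saucelabs.com/rest/v1/{SAUCE_USER}/jobs/{JOB_ID}/assets/selenium-server.log",
        auth=(SAUCE_USER, SAUCE_KEY),
    )

    # 2. Create the JIRA ticket.
    issue = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",
        auth=JIRA_AUTH,
        json={"fields": {
            "project": {"key": "PROJ"},
            "summary": f"Bug found in Sauce job {JOB_ID}",
            "description": f"Video: https://saucelabs.com/tests/{JOB_ID}",
            "issuetype": {"name": "Bug"},
        }},
    )
    issue_key = issue.json()["key"]

    # 3. Re-upload the asset as an attachment on the new ticket.
    requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/attachments",
        auth=JIRA_AUTH,
        headers={"X-Atlassian-Token": "no-check"},
        files={"file": ("selenium-server.log", log.content)},
    )

Multiply that by every bug found in every test run, and the appeal of a one-click integration is obvious.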

With Sauce Labs for JIRA, this is all simplified and automated. With the click of a button, you can now create a JIRA ticket directly from your Test Details page. The plugin gives you the option to attach the screenshots, logs, and a link to the video of your test, making it easy to share results with your team.

To download Sauce Labs for JIRA, visit the Atlassian Marketplace: https://marketplace.atlassian.com/plugins/sauce-jira-integration/cloud/overview. To read more, visit our JIRA integration Docs page.

Happy Testing!

The Sauce Labs Ecosystems & Integrations Team

Why is Manual QA Still So Prevalent?

April 28th, 2016 by Joe Nolan

This past week I casually heard comments alluding to the imminent death of the QA Analyst or Manual Tester. (To be clear, I am not referring to the QA Automation Engineer, who builds test automation.)

Not only does the function not seem to be going away; recruiters are still out there hunting down testers. Out of curiosity, I did my own review of randomly selected job posts from Monster and Indeed for typical QA positions, and discovered that there are still a lot of jobs available for manual testers.

Given the importance of catching bugs early, and the ability to automate so much testing, why do companies and projects resist investing in CI and test automation? I will explore the reasons, and whether the resistance is good or bad. (more…)

Waiting for Green

April 12th, 2016 by Ashley Hunsberger

Every now and then, you may encounter a time when you need to stabilize your automated UI tests (for myself, that time is now). Although you don’t want to add to a framework that you are stabilizing, you probably don’t want to halt development on new features. (Warning — telling your leadership team no one is allowed to add more tests until everything goes green might not go over well.)

What do you do in the meantime? The answer is simple, and I look to some practices in Behavior Driven Development (BDD) as a guide – build a test skeleton into your current framework.
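As a rough illustration of what such a skeleton can look like (the test names and the pytest framework choice here are hypothetical, not from the original post), each planned scenario gets a named but skipped placeholder, so new feature coverage is recorded in the framework without adding to the pile of tests you are trying to stabilize:

    import pytest

    # Skeleton tests: the scenario is named and its steps are documented,
    # but execution is deferred until the framework is stable again.
    @pytest.mark.skip(reason="skeleton - automate once the UI suite is green")
    def test_user_can_reset_password():
        """Given a registered user on the login page,
        when they request a password reset,
        then a reset email is sent to their address."""

    @pytest.mark.skip(reason="skeleton - automate once the UI suite is green")
    def test_admin_can_deactivate_account():
        """Given an admin viewing a user profile,
        when they deactivate the account,
        then the user can no longer log in."""

In the meantime, the skipped names show up in every test run's report, so the team can see exactly which scenarios still need manual execution.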

The First Rule of Stabilization: Don’t Create a Manual Test Suite

While you may temporarily need to revert to manual execution, that does not necessarily mean you’ll want to go back to a manual test suite, for a couple of reasons: (more…)

What is your definition of “Done”?

September 17th, 2015 by Greg Sypolt

Introduction

Why does an agile or Scrum team have a definition of done (DoD)? It’s simple: everyone involved in a project needs to know and understand what “done” means.

What is a DoD? It is a clear and concise list of requirements that a software increment must satisfy for a user story or sprint to be considered complete, or for a release to be considered ready. For organizations just starting to apply agile methods, however, a full DoD might be impossible to reach immediately. Your organization needs to identify its problems and work as a team to build a version of the DoD that solves them.

The Problem

The following conversation occurs during your daily standup:

Product Manager (PM): “Is the user story done?”
Developer (Dev): “Yes!”
Quality Assurance (QA): “Okay, we will execute our manual and/or automated tests today.”

Later that same day:

QA: “We found several issues. Did Dev perform any code reviews or write any unit tests?”
PM (to Dev): “QA found several issues. Did you do any code reviews or unit testing?”
Dev: “No, the code was simple. It was going to take too much time to write unit tests.”

Has this ever happened to you? (more…)

Guest Post: Test Lessons at ExpoQA

June 6th, 2014 by Bill McGee

This is the first of a three-part series by Matthew Heusser, software delivery consultant and writer.

Every now and again an opportunity comes along that you just can’t refuse. Mine was to teach the one-day version of my class, lean software testing, in Madrid, Spain, then again the following week in Estonia. Instead of coming back to the United States, I’ll be staying in Europe, with a few days in Scotland and a TestRetreat in the Netherlands.

And a lot of time on airplanes.

The folks at Sauce Labs thought I might like to take notes and type a little on the plane, to share my stories with you.

The first major hit in Madrid is the culture shock; this was my first conference where English was not the primary language. The sessions were split between English and Spanish, with translators in a booth making sure all talks were available in all languages.

The Testing Divide

Right now, in testing, I am interested in two major categories: the day-to-day work of testing new features, and the work of release-testing after code complete. I call this release testing a ‘cadence’, and, across the board, I see companies trying to compress the cadence.

My second major surprise in Madrid is how wide the gap is, and I believe it is getting wider, between legacy teams that have not modernized and teams starting from scratch today. One tester reported a four-month cycle for testing. Another team, relying heavily on Cucumber and Selenium, was able to release every day.

Of course, things weren’t that simple. The Lithuanian team used a variety of risk-reduction techniques, something like DevOps, which I can cover in another post. The point here is the divide between the two worlds.

Large cadences slow down delivery. They slow it down a lot; think of the difference between machine farming in the early 20th century and the plow and horse of the 19th.

In farming, the Amish managed to survive by maintaining a simple life, with no cars, car insurance, gasoline, or even electricity to pay for. In software, organizations with a decades-long head start (banks, insurance companies, and pension funds) may be able to survive without modernization.

I just can’t imagine it will be much fun.

Batches, Queues and Throughput

Like many other conferences, the first day of ExpoQA is tutorial day, and I taught the one-day version of my course on lean software testing. I expected to learn a little about course delivery, but not a lot, so the learning hit me like a ton of bricks.

The course covers the seven wastes of ‘lean’, along with methods to improve the flow of the team – for example, decreasing the size of the measured work, or ‘batch size’. Agile software development gets us this for free, moving from ‘projects’ to sprints, and within sprints, stories.

In the early afternoon we use dice and cards to simulate a software team that has equally weighted capacity across analysis, dev, test, and operations, but high variability in work size. That variability slows down delivery. The fix is to reduce the variation, but that is not an option in the exercise, so what teams tend to do instead is build up queues of work so that no role ever runs out of work.

What this actually does is run up the work-in-progress inventory – the amount of work sitting around, waiting to be done. In the simulation I don’t penalize teams for this, but on real software projects, ‘holding’ work creates multitasking, handoffs, and restarts, all of which slow down delivery.
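To make the effect concrete, here is a small, hypothetical simulation in the spirit of that exercise (the numbers are invented, not from the class): four equal-capacity stages, work arriving at exactly the rate the stages can absorb, and the only variable being how evenly sized the work items are. With zero variability everything flows straight through; with high variability, queues, and therefore work in progress, build up even though average capacity never changed:

    import random

    random.seed(1)

    STAGES = ["analysis", "dev", "test", "ops"]
    CAPACITY = 6   # points each role can complete per round
    ROUNDS = 50

    def simulate(variability):
        """Push work through four equal-capacity stages and report the average
        number of items sitting in queues (work in progress)."""
        queues = {s: [] for s in STAGES}   # each entry: [remaining, original_size]
        delivered, wip_samples = 0, []
        for _ in range(ROUNDS):
            size = random.randint(CAPACITY - variability, CAPACITY + variability)
            queues["analysis"].append([size, size])
            for i, stage in enumerate(STAGES):
                budget = CAPACITY
                while queues[stage] and budget > 0:
                    item = queues[stage][0]
                    work = min(budget, item[0])
                    item[0] -= work
                    budget -= work
                    if item[0] == 0:            # finished at this stage
                        queues[stage].pop(0)
                        if i + 1 < len(STAGES):
                            queues[STAGES[i + 1]].append([item[1], item[1]])
                        else:
                            delivered += 1
            wip_samples.append(sum(len(q) for q in queues.values()))
        return sum(wip_samples) / len(wip_samples), delivered

    for variability in (0, 5):
        avg_wip, delivered = simulate(variability)
        print(f"size variability +/-{variability}: average WIP {avg_wip:.1f}, delivered {delivered}")

The total capacity never changes; only the unevenness of the work does, and that alone is enough to leave work waiting in queues.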

My lesson: Things that are invisible look free, and my simulation is far from perfect.

After my tutorial it is time for a conference day – kicked off by Dr. Stuart Reid, presenting on the new ISO standard for software testing. Looking at the schedule, I see a familiar name: Mais Tawfik, whom I met at WOPR20. Mais is an independent performance consultant; today she is presenting on “shades of performance testing.”

Performance Test Types

Starting with the idea that performance testing has three main measurements – speed, scalability, and stability – Mais explains that there are different types of performance tests, from front-end performance (JavaScript, waterfalls of HTTP requests, page loading and rendering) to back-end performance (database, web server), and also synthetic monitoring: continuously creating known-value transactions in production to see how long they take. She also talks about application usage patterns – how testing is tailored to the type of user, and how each new release might carry new and different risks based on the changes introduced. That means you might tailor the performance testing to the release.
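Synthetic monitoring is the least familiar of those three to many functional testers, so here is a minimal sketch of the idea (the endpoint, expected marker, and interval are hypothetical): run the same known-value transaction on a timer against production and record how long it takes, so a slowdown shows up as a trend rather than as a user complaint:

    import time
    import requests

    # Hypothetical known-value transaction: fetch a page and check for a marker.
    ENDPOINT = "https://example.com/store/checkout"
    EXPECTED_MARKER = "order-summary"
    INTERVAL_SECONDS = 60

    def run_synthetic_check():
        """Execute one known-value transaction; return (passed, elapsed_seconds)."""
        start = time.monotonic()
        try:
            response = requests.get(ENDPOINT, timeout=10)
            passed = response.status_code == 200 and EXPECTED_MARKER in response.text
        except requests.RequestException:
            passed = False
        return passed, time.monotonic() - start

    if __name__ == "__main__":
        while True:
            passed, elapsed = run_synthetic_check()
            # In practice these samples would feed a monitoring system, not stdout.
            print(f"{time.strftime('%H:%M:%S')} passed={passed} elapsed={elapsed:.2f}s")
            time.sleep(INTERVAL_SECONDS)
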

At the end of her talk, Mais lists several scenarios and asks the audience what type of performance test would blend efficiency and effectiveness. For example, if a release consists entirely of database changes and time is constrained, you might not execute your full performance test suite, but instead focus on rerunning and timing the database tests. If the changes are focused on the front end, you might focus on how long the user interface takes to load and display.

When Mais asks whether people in their organizations do performance testing or manage it, only a handful of people raise their hands. When she asks who has heard of Firebug, even fewer raise their hands.

Which makes me wonder if the audience is only doing functional testing. If they are, who does the performance testing? And do they not automate, or do they all use Internet Explorer?

The talk is translated; it is possible that more people know these tools and that the translator was simply ‘behind’, so they did not know to raise their hands in time.

Here’s hoping!

Time For A Panel

At the end of the day I am invited to sit on a panel to discuss the present (and future) of testing, with Dr. Reid, Dorothy Graham, Derk-Jan De Grood, Celestina Bianco and Delores Ornia. The questions include, in no particular order:

  • Will testers have to learn to code?
  • How do we convince management of the importance of QA and get included in projects?
  • What is the future of testing? Will testers be out of a job?
  • What can we do about the dearth of testing education in the world today?

On the problem of the lack of testing education, Dorothy Graham points to Dr. Reid and his standards effort as a possible input for university education.

When it is my turn, I bring up ISTQB, the International Software Testing Qualifications Board: if ISTQB is so successful (“300,000 testers can’t be wrong?”), then why is the last question relevant? Stefaan Luckermans, the moderator, replied that with 2.9 million testers in the world, the certification had reached only 10% of them, and that’s fair, I suppose. Still, I’m not excited about the quality of the testers that ISTQB turns out.

The thing I did not get to say, because of time, is that ISTQB is, after all, just a response to market demand for a two- to three-day training certification. What can a trainer really do in two or three days? At most, maybe, teach a single technical tool, turn on the lightbulb of thinking, or define a few terms. ISTQB defines a few terms, and it takes a few days.

The pursuit of excellent testing?

That’s the game of a lifetime.

By Matthew Heusser – matt.heusser@gmail.com for Sauce Labs

Stay tuned next week for part two of this mini-series! You can follow Matt on Twitter at @mheusser.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Ask a Selenium Expert: is Random Execution Order Necessary?

May 14th, 2014 by Bill McGee

This is part 6 of 8 in a mini-series of follow-up Q&As from Selenium expert Dave Haeffner. Read up on the first, second, third, fourth, and fifth.

During our recent webinar, “Selenium Bootcamp,” Dave discussed how to build out a well-factored, maintainable, resilient, and parallelized suite of tests that run locally, on a Continuous Integration system, and in the cloud. The question below is number 6 of 8 in a mini-series of follow-up questions.

6. Since each test you showed performs its own setup and teardown, is random execution order really necessary?

Yes. Random execution order is necessary to help identify (and prevent) cross-test dependencies. It also has the added benefit of exercising the application under test in a random order.
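Here is a contrived sketch of the kind of coupling random ordering exposes (the test names and shared state are hypothetical, not from the webinar): in the file's written order both tests pass, but any shuffle that runs them the other way around fails the first one and reveals the hidden dependency on shared state.

    # Both tests pass when run top to bottom, because test_cart_starts_empty
    # happens to run before anything touches the shared module-level list.
    cart = []

    def test_cart_starts_empty():
        assert cart == []

    def test_add_item():
        cart.append("widget")
        assert cart == ["widget"]

    # Shuffle the order (for example with a plugin such as pytest-randomly or
    # pytest-random-order) and test_cart_starts_empty fails whenever it runs
    # after test_add_item, exposing the hidden dependency on shared state.

The fix is the same setup-and-teardown discipline the question describes, so that no test relies on state left behind by another.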

-Dave Haeffner, April 9, 2014

Can’t wait to see the rest of the Q&A? Read the whole post here. Get more info on Selenium with Dave’s book, The Selenium Guidebook, or follow him on Twitter or GitHub.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Webinar: Selenium Bootcamp with Expert Dave Haeffner

March 20th, 2014 by Bill McGee

Whether you want to start fresh, learn more, or fill in gaps in your knowledge of QA testing with Selenium, you won’t want to miss our latest webinar! Join us for Selenium Bootcamp with expert Dave Haeffner on Thursday, March 27th, 2014 at 10:00am PDT.

QA testing with Selenium can help dramatically improve your business’s productivity by minimizing testing time, cost, and hassle. Dave will walk you through all the steps necessary to build out a well-factored, maintainable, resilient, and parallelized suite of tests that will run locally, on a Continuous Integration system, and in the cloud. A live Q&A session will follow. Register today!

Dave is the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.