Beyond the Release: Continuous Integration That Transforms Organizations [WEBINAR]

May 20th, 2015 by Bill McGee

Continuous Integration is not simply about automated releases. It is also about knowing how your software delivery pipeline works: its weak points and how it performs over time are critical data points for making sure your CI system is healthy and sustainable.

In our next webinar, Chris Riley (DevOps Analyst), Andy Pemberton (CloudBees), and Abhijit Pendyal (Sauce Labs) will show you how Jenkins and Sauce Labs can work together to build a comprehensive CI tool set that helps you release your desktop apps faster, at higher quality, and with more visibility.

This webinar will cover:

  • How CI goes beyond releases and into pipeline optimization
  • The challenges in adopting CI and the importance of getting it right
  • Thought leaders’ insights into future possibilities for CI

Join us for this presentation next Wednesday, May 27 at 11am PDT/2pm EDT. There will be a Q&A with Chris, Andy, and Abhijit at the end of the presentation.

Click HERE to register today.

Want to read more about Continuous Delivery? Download the 2015 Guide to Continuous Delivery Research Spotlight.

Recap: Getting Started with Mobile Test Automation & Appium [WEBINAR]

May 18th, 2015 by Bill McGee

Thanks to everyone who joined us for our last webinar, Getting Started with Mobile Test Automation & Appium, featuring Siva Anna from InfoStretch and Abhijit Pendyal from Sauce Labs. The webinar covered best practices in implementing Appium to quickly automate mobile application testing. Topics included:

  • Mobile Automation Tools and Landscape
  • Setting Up an Appium Environment & Object Inspectors
  • Writing a Basic Automation Script for Android and iOS platforms
  • Running an Analysis Execution in Appium and Sauce Labs

Missed the presentation, want to hear it again, or share with a colleague?

Listen to the recording HERE and view the slides below.

Interested in learning more about Appium and how to get started? Download our free Getting Started with Appium guide.

Leverage Your QA Team Upfront

May 15th, 2015 by Ashley Hunsberger

All too often, QA is perceived as the bottleneck in getting software out the door. This makes sense when you see QA as just the “bug finder” in a world where you build, test, and release. But how can you leverage your QA team to improve your delivery pipeline? There are many ways to do this, but let’s look at using defect prevention and analysis to test wisely and improve your pipeline.

Prevention

The scenario: a designer has sent specifications (and if you’re lucky, wireframes) for a new feature.  Depending on your development process, you would probably see the following happen:

  1. A developer writes some code.
  2. A tester goes off and writes some tests.
  3. Once the developer finishes the code, the tester executes the tests, finds bugs, and tells the developer to fix them.

The above scenario seems simple; however, at the later stages you often find that the test cases do not fully align with the code. To fix this, what if you changed the process to look more like this?

  1. The designer, engineer, and tester meet to discuss the feature.
  2. QA starts guiding the team toward the tests needed to ensure the feature works.
  3. The engineer and designer see what is going to be tested and also contribute, identifying areas that were missed in the spec or wireframes.
  4. The team agrees to the tests.
  5. The engineer builds the feature, running each test until all are passing (a minimal sketch of this step follows the list).
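
As a minimal sketch of what step 5 can look like in practice (the module and function names here are hypothetical, purely for illustration), the agreed-upon tests are committed before the feature code exists, and the engineer implements the feature until every one of them passes:

```python
# tests/test_discount.py -- agreed-upon tests, written before the feature exists.
# The pricing.discount_price function is hypothetical; the engineer writes it
# until all of these tests pass.
import pytest

from pricing import discount_price  # does not exist yet when these tests are written


def test_ten_percent_discount_applied():
    # From the spec: a 10% discount on a $100.00 item yields $90.00
    assert discount_price(100.00, 0.10) == pytest.approx(90.00)


def test_zero_discount_returns_original_price():
    assert discount_price(49.99, 0.0) == pytest.approx(49.99)


def test_negative_discount_is_rejected():
    # Agreed in the design discussion: a negative discount is an error, not a price hike
    with pytest.raises(ValueError):
        discount_price(100.00, -0.10)
```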

In the first approach, QA is relegated to being just the bug finder. But in the latter, QA helps drive the team toward bug prevention.

Writing tests first, before anything is built, has a few advantages:

  • A shared understanding of the requirements early in the development process puts everyone on target for the same goal. Have you ever worked on (or managed) a project where the engineer’s and tester’s interpretations of a feature were completely different? The end result is code that does not pass the tests, and the time spent trying to reconcile the issue delays future work. When tests are written first, the guessing game is taken away; bugs are highlighted earlier in the process, along with gaps between the feature as specified and the final product.
  • Issues found early save resources. Consider how expensive it is to find a bug late in the cycle, after Development is ‘finished’ (a term I use loosely here). When QA facilitates tests before features, developers catch bugs before QA even sees them. Then there are no bugs to track, no need to review the bug in triage, and no need to write specific regression tests to make sure that one specific bug is fixed (those tests were already written before the feature was coded). It also eliminates the need to perform those specific regression tests on every build, since the bug does not exist in the first place. Lastly, there’s no need to hold a meeting to justify the cost of fixing the bug versus the cost of leaving it in, and no need to document its presence or implement its workaround. You do the math; this translates to hours, even days, of time saved.
  • Holes in design are found early and rework is minimized. In leveraging your QA team to guide test development, usability (UX) should also be a consideration when designing tests. It is not possible to anticipate the success of every feature once it meets the user. Perhaps there is a technical aspect that has not been considered, and the team will need to rethink the design. This is much easier to do up front than after development has started. Everyone in Development should be involved in a UX discussion early in the process.

Of course, there is much more we could dive into, but it is pretty clear that using your QA team to prevent bugs can only help your product’s delivery pipeline.

Test Wisely

Defects are inevitable. Try as we might to prevent them, they are just going to happen. But what do you do with bugs that you do find?

Your QA team can look at results and identify trends in the system. Perhaps the majority of the bugs are in a certain area. Your team can quickly adapt testing to the areas that seem to be in more trouble, spending resources prudently as you get ready to deliver your product. If you need to get a product out the door, does it make more sense to spend resources on areas that have had no (or few) bugs reported, or to add more testing to a feature that has had more problems than others? As you can see, these are not just implementation questions; they are strategic ones as well.
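
As a rough illustration of that kind of trend analysis (the CSV export and its column name are hypothetical stand-ins for whatever your bug tracker produces), even a tiny script can show where defects are clustering:

```python
# A small sketch; "defects_export.csv" and its "component" column are
# hypothetical stand-ins for a real bug-tracker export.
import csv
from collections import Counter


def defects_by_area(csv_path):
    """Count reported defects per component/area from a bug-tracker export."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["component"]] += 1
    return counts


if __name__ == "__main__":
    # Print areas from most to least troubled, to guide where testing effort goes.
    for area, count in defects_by_area("defects_export.csv").most_common():
        print(f"{area}: {count}")
```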

Summary

Let’s get rid of the perception that QA is the reason things get delayed, and instead adopt the point of view that QA gets your product out the door faster and with better quality! Work with your QA teams to drive defect prevention and to analyze the defects that do come in, and you should see a path to improving your pipeline. QA then becomes a facilitator of success, not a blocker.

About The Author

Ashley Hunsberger is currently a Product Quality Architect at Blackboard Inc., where she has worked for the past 10 years, dedicated to the client experience.  She is a test expert focused on functional, regression, exploratory, and acceptance testing, working in both automation and manual environments. Ashley is experienced with test strategy planning, project management, team leadership, and process improvement across development teams.  She resides in Raleigh, NC with her family. 

Appium 1.4 Released on Sauce Labs, with Better Support for Semantic Versioning

May 13th, 2015 by Jonathan Lipps


We’re pleased to announce that Appium version 1.4 is now available on Sauce. This hefty release has quite the changelog (see below). We’re also pleased to announce that Sauce now has better support for semantic versioning when requesting Appium versions. What does this mean? Well, the Appium team is getting more serious about the meaning behind certain types of releases. We’ve decided that a patch release (i.e., an increase in the third version number, like the “7” in “1.3.7”) will signify only an incremental hotfix on the minor version (e.g., “1.3”).

What this means for you as an Appium user is that, if you are using Appium 1.4.0, you should be able to safely upgrade to Appium 1.4.1, or Appium 1.4.2, since we promise not to introduce any new features or remove support for any old features with these patch releases. Instead, patch releases will involve a very small, incremental fix for a regression. As such, we now support specifying Appium versions including only the major and minor version numbers, e.g., “1.4”. If you send in a set of desired capabilities with the appiumVersion of “1.4”, the Sauce Cloud will give you the highest patch release available. So if we have released 1.4.1 and 1.4.2, your tests will automatically be upgraded to those releases if you simply specify “1.4”. Of course, you can always specify a specific version like “1.4.1” if you have any problems. We just want to make it easy for customers to automatically get the benefits of a more advanced and reliable Appium. Read the rest of this entry »
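
For example, here is a minimal sketch using the Appium Python client (the Sauce credentials and app location are placeholders, not real values); setting appiumVersion to “1.4” in your desired capabilities asks the Sauce cloud for the highest available 1.4.x patch release:

```python
# A minimal sketch with the Appium Python client; the credentials and app
# location below are placeholders, not real values.
from appium import webdriver

SAUCE_USERNAME = "YOUR_SAUCE_USERNAME"
SAUCE_ACCESS_KEY = "YOUR_SAUCE_ACCESS_KEY"

desired_caps = {
    "platformName": "iOS",
    "platformVersion": "8.2",
    "deviceName": "iPhone Simulator",
    "app": "sauce-storage:my-app.zip",   # placeholder app location
    "appiumVersion": "1.4",              # highest 1.4.x patch release available
    # "appiumVersion": "1.4.1",          # or pin an exact version if you hit problems
}

driver = webdriver.Remote(
    command_executor="http://{}:{}@ondemand.saucelabs.com:80/wd/hub".format(
        SAUCE_USERNAME, SAUCE_ACCESS_KEY
    ),
    desired_capabilities=desired_caps,
)

try:
    print(driver.capabilities)  # inspect what the cloud actually provisioned
finally:
    driver.quit()
```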

Not Just Faster Releases; Better Understanding

May 7th, 2015 by Chris Riley

This is a guest post by Chris Riley, a technologist who has spent 12 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling.

In DevOps, several processes and tools deliver value well beyond their face value; system analytics, application performance monitoring, and Continuous Integration (CI) are prime examples. In particular, CI not only changes the speed and quality of releasing code, it also improves communication and finds bugs in the software development process itself.

One of my biggest messages for companies moving to modern software delivery is the idea of being deliberate. Being deliberate means picking a culture, process, and tools that focus on results and how to attain them. Too often tools, especially open source ones, are adopted on a whim without much forethought. This is the converse of being deliberate: allowing the tools to define the process and culture for you.

When organizations follow the ‘deliberate’ approach, they naturally get to a place where they move faster and what they have built is sustainable. A huge component of getting there is CI. No DevOps shop can survive without a continuous integration process. It allows front-end developers to participate in quality, find bugs before QA does, and test new functionality faster than ever. Read the rest of this entry »

Video: “Eliminate Rollbacks” Talk by Neil Manvar

May 6th, 2015 by Amber Kaplan

Missed Velocity Conf SF? We’ve got you covered. Check out this talk by Neil Manvar, Sauce Labs’ new Professional Services Lead. View “Eliminate Rollbacks” below and let us know what you think in the comments.

Q&A: Reducing False Positives in Automated Testing

May 5th, 2015 by Amber Kaplan

Last month, QASource and Sauce Labs partnered to present a webinar, Reducing False Positives in Automated Testing. We wanted to provide you with answers to the most commonly asked questions from this webinar. Please feel free to comment with additional questions, and let us know how these techniques for reducing false positives have impacted your automated testing.

Q: Are there specific tests to avoid while automating to eliminate false positives in automated testing?

A: When automating tests, first you must define your goal to determine which types of tests to automate. While setting your goal, you should avoid the following:

  • Unstable areas or areas with frequent UI changes
  • Scenarios which are not supported by your automation tool. For example, if you are using Selenium, you shouldn’t go for tests that require interaction with native Win32 components, because Selenium does not support desktop-based applications.
  • Areas which have been identified to have performance issues
  • Areas which cannot be identified using unique locators (see the sketch below)
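
To illustrate that last point, here is a hedged Selenium sketch (the URL and element IDs are hypothetical): an element exposed through a unique, stable locator is a good automation candidate, while one reachable only through a brittle positional XPath is a common source of false failures:

```python
# A sketch with Selenium WebDriver; the URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.com/login")  # placeholder URL

# Good automation candidate: a unique, stable locator.
username = driver.find_element(By.ID, "username")

# Poor candidate: a positional XPath that breaks whenever the layout shifts.
# username = driver.find_element(By.XPATH, "/html/body/div[3]/form/div[1]/input")

username.send_keys("test-user")
driver.quit()
```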

Q: How do you identify a well-written automated test?

A: A well-written automated test is defined by the way we structure our test script, workflow, and teardown fixture. The script should contain only the test steps and verification points; this allows test cases to have a 1:1 mapping. In addition, well-written automated tests should not contain any hardcoded data, and exceptions should be handled. All well-written automated tests should follow best coding practices, as well as commenting and naming conventions (a minimal sketch follows). Read the rest of this entry »
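
As a minimal sketch of that structure (the URL, locators, and environment variable names here are hypothetical), a script can keep its setup, workflow, and teardown separate while pulling test data from the environment instead of hardcoding it:

```python
# A minimal sketch using Python's unittest and Selenium; the URL, locators,
# and environment variable names are hypothetical.
import os
import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginTest(unittest.TestCase):
    def setUp(self):
        # Setup fixture: no hardcoded data; test data comes from the environment.
        self.base_url = os.environ["APP_BASE_URL"]
        self.username = os.environ["TEST_USERNAME"]
        self.password = os.environ["TEST_PASSWORD"]
        self.driver = webdriver.Firefox()

    def test_valid_login_shows_dashboard(self):
        # Workflow: only the steps and verification points for this one case (1:1 mapping).
        driver = self.driver
        driver.get(self.base_url + "/login")
        driver.find_element(By.ID, "username").send_keys(self.username)
        driver.find_element(By.ID, "password").send_keys(self.password)
        driver.find_element(By.ID, "login").click()
        self.assertIn("Dashboard", driver.title)

    def tearDown(self):
        # Teardown fixture: always release the browser, even when a step fails.
        self.driver.quit()


if __name__ == "__main__":
    unittest.main()
```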

Recap: Best Practices in Mobile CI [WEBINAR]

April 29th, 2015 by Amber Kaplan

Thanks to those of you who joined us for our last webinar, Best Practices in Mobile Continuous Integration, with Kevin Rohling. The webinar covered topics like:

  • What makes mobile CI so different
  • Best ways to use emulators and simulators in testing
  • Suggestions for CI tools and mobile testing frameworks

Missed the presentation, want to hear it again, or share with a colleague?

Listen to the recording HERE and view the slides below.

Read the rest of this entry »

Infographic: Announcing 250 Million Tests Run on Sauce Labs!

April 29th, 2015 by Amber Kaplan

It’s true! We’re celebrating the fact that more than 250 million tests have been run on our platform! It’s crazy to think that we announced just over 100 million tests at the end of February 2014. That’s an increase of 150% in just 14 months.

This time we thought we’d take a look at how our ecosystem has been growing as well, including our work with Appium, a cross-platform mobile test automation framework sponsored by Sauce Labs and a thriving community of open source developers. Read the rest of this entry »

Translating Web App Functional Testing To Mobile

April 27th, 2015 by Amber Kaplan

This is a guest post by Greg Sypolt, a Senior Engineer at Gannett Digital and automated testing expert.

Technological advancements and the explosion of devices across multiple platforms mean hardware and software developers have a much more difficult job. They have to keep up with the demand to develop and roll out new products. One of the most significant issues is accounting for differences in system response when responding to mobile traffic rather than to traditional internet traffic. Read the rest of this entry »