March 31st, 2016 by Ken Drachnik
After months of beta testing with customers, we are announcing general availability for our Real Device Cloud today. With the ability to test websites, native apps, and hybrid apps on both iOS and Android devices, we enable enterprise customers to easily scale their CI/CD testing needs on the most popular devices. By providing large numbers of each device type, we ensure our customers will not have to wait in a queue for their tests to run on a real device, speeding up the pace of testing, especially when running concurrent, parallelized tests. In conjunction with our mobile emulators and simulators, we now provide the most comprehensive automated mobile testing platform on the market, covering both web and mobile app testing needs across a variety of device types. After running your tests, view the results on our test details pages, with videos of each test, screenshots, logs, and commands so you can resolve issues quickly and iterate on your app more frequently.
To request a quote for real devices, contact your account executive.
Mobile Testing on Sauce Labs Includes:
- Instant Availability – Get access to the most popular iOS or Android devices with no waiting, no queues and no reservations system.
- Emulators and Simulators – Android emulators and iOS simulators covering over 140 device-OS combinations.
- Massive Concurrency – Run your tests in parallel to dramatically reduce your total test time.
- Integrate with your CI tool of choice – automate all your tests using top CI tools like Jenkins, Bamboo, Microsoft VSTS, Travis CI, TeamCity, or CircleCI.
- Test native apps, hybrid apps, and mobile web – all on the same platform.
- Security – test with Sauce Connect, allowing your codebase to stay behind your firewall while utilizing Sauce’s extensive collection of OS platforms, browsers, and devices.
- Pinpoint issues quickly with full reporting – instant access to videos, screenshots and all the data for your tests so you can analyze your results quickly.
- Enterprise management features – account and team management lets you provision test resources as needed and SSO integration means you don’t have to go to IT to add another tester to your account.
- Professional Services and Training – we have professional consultants and partners to help you get started with Appium and Selenium or if you’re already proficient, our experts can help your team become super users.
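As a sketch of what a real-device test setup looks like, the snippet below builds a set of Appium-style desired capabilities and the Sauce Labs endpoint URL. The capability names follow common Appium conventions; the device name, app location, and credentials are placeholders for illustration, not values from this announcement:

```python
# Sketch: desired capabilities for a test against a real Android device
# on Sauce Labs. Username, access key, and app are placeholders.

def sauce_url(username, access_key):
    """Build the Sauce Labs WebDriver endpoint for the given account."""
    return "https://%s:%s@ondemand.saucelabs.com:443/wd/hub" % (username, access_key)

caps = {
    "platformName": "Android",          # or "iOS"
    "deviceName": "Samsung Galaxy S6",  # hypothetical device choice
    "platformVersion": "5.0",
    "app": "sauce-storage:my-app.apk",  # an app previously uploaded to Sauce storage
    "name": "Login smoke test",         # label shown on the test details page
}

url = sauce_url("YOUR_USERNAME", "YOUR_ACCESS_KEY")
# With an Appium client, this pair would typically be passed to something like:
#   driver = webdriver.Remote(command_executor=url, desired_capabilities=caps)
```

Because the endpoint and capabilities are plain data, the same dictionary can be reused across parallel test runs, which is how the concurrency described above is usually exploited.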
For more information, visit our Automated Mobile Testing Page.
March 30th, 2016 by Joe Nolan
Implied Testing is a way to write a test that indicates other parts of your workflow are working as you try to accomplish a goal. Make use of Implied Testing to minimize the amount of documentation and testing artifacts on a project.
According to the Manifesto for Agile Software Development, we should favor working software over comprehensive documentation. While this sounds good in theory, all too often test teams are asked to produce documentation explaining what they plan to test (in detail). The concept of Implied Testing will help save a lot of writing as it will eliminate duplication, and streamline the tests that feed into automation, allowing for simpler, more re-usable scripts.
Why Do We Write Detailed Tests?
In an ideal Agile world we limit our test documentation and focus on automated tests, with manual smoke and exploratory tests to supplement them. Our Acceptance Criteria in the stories should guide the tests necessary for the story to be complete. Unfortunately, circumstances can require an extensive amount of test documentation artifacts to be produced. Why?
March 29th, 2016 by Isaac Murchie
We are happy to support the newly-released Appium 1.5.1. This release fixes a
number of issues with 1.5.0, including one bug that prevented some frameworks
from correctly polling for status during Safari tests.
- allow `platformName` to be any case
- Windows process handling is cleaned up
- Desired capabilities `language` and `locale` added
- iOS 9.3 (Xcode 7.3) support
- Fix handling of return values from `executeScript` in Safari
- Don’t stop if Instruments doesn’t shut down in a timely manner
- Escape single quotes in all methods that set the value on an element
- Allow custom device names
- Make full use of process arguments to Instruments
- Pass `launchTimeout` to Instruments when checking devices
- Make use of `–bootstrap-port` server argument
- Fix `keystorePassword` capability to allow a string
- Fix handling of localization in Android 6
- Use Appium’s unlock logic for Chrome sessions
- Make sure reset works
- Make unlock more reliable for later versions of Android
- Allow XPath searching from the context of another element
- Make full use of process arguments to adb
- Better error messages when ChromeDriver fails to start
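To illustrate a couple of the items above, here is a hedged example of desired capabilities exercising the newly added `language` and `locale` capabilities and the now case-insensitive `platformName`; the specific values are hypothetical:

```python
# Sketch: capabilities taking advantage of Appium 1.5.1 changes.
# Values below are illustrative, not a tested configuration.

caps = {
    "platformName": "ios",      # any case is accepted as of 1.5.1
    "platformVersion": "9.3",   # iOS 9.3 (Xcode 7.3) support is new in this release
    "deviceName": "iPhone 6",
    "browserName": "Safari",
    "language": "fr",           # new capability: launch in French
    "locale": "fr_FR",          # new capability: region formatting
}
```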
March 28th, 2016 by Ashley Hunsberger
User Interface (UI) Testing.
The idea is simple — automate some UI tests to ensure your application is still behaving as expected. Usually your first set of tests — running green, no doubt — make you all cheer and pat yourselves on the back. Then you open up the framework to more people. Despite the reviews (so many reviews), the failures start to come, and they don’t stop. Or they run green and then fail and then run green again. And then fail again. So why are they so unstable? Is it bad scripts? Environment issues? Sometimes you just don’t know, and you think you are going to lose your mind. Let’s take a look at some common and potential issues you may be facing.
Architecture, Environments and Settings
Is your infrastructure designed for stability? Are you using on-premise or cloud instances? What may have saved you a dollar upfront could cost you many more down the road – so your testing environment is important.
Understand if your tests require particular system settings. Tests failing because of unwanted server variables is a waste of everyone’s time. We found that out the hard way a long time ago. You may need to have some isolated tests that cannot be run on the same server so the majority of your other tests can pass. (Or maybe decide how important the test really is.)
Or let’s say your testing frameworks are stable, but what about the tools or libraries you are importing? Are you pinning your stack to a version of these tools or libraries? A new version can completely break everything.
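One way to pin your stack, assuming a Python-based test framework, is to freeze exact versions of every tool and library in a requirements file so that a new upstream release cannot silently change your environment. The package versions below are illustrative only:

```text
# requirements.txt – pin exact versions so upgrades are deliberate, not accidental
selenium==2.53.1
pytest==2.9.1
requests==2.9.1
```

Upgrades then happen on purpose, in their own commit, where a breakage is easy to bisect.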
March 24th, 2016 by Joe Nolan
If your test automation team’s directive is to automate X amount of tests, and you have no strategy as to which tests they should focus on, you are wasting your time. Before you begin writing your first line of automation code, make sure you have a strategy in place. Otherwise, you will have a ton of ineffective tests to maintain.
Don’t Choose a Random Goal
How many times have you been told that the goal of the team is to have X amount of test coverage? This is an arbitrary value picked out of the sky. What is it based on? If a UI automation team were to cover 80% of the stories in a sprint, they would never get done in time.
We all know how fragile UI automation is! How many times will a designer make a change that directly affects the UI and breaks the test? This is almost manageable during a sprint while you are working closely together, but how about when the product is sent to be translated to another language? The translator inevitably comes back with suggestions to allow for phrases more common and translatable. Bugs might be entered and UI changes made by a maintenance team with no heads up to the automation team, and Bam! — You have broken tests that need to be investigated.
March 22nd, 2016 by Joe Alfaro
From Engineering to DevOps
I presented this cartoon to our development staff during a recent planning meeting because I think it’s a nice illustration of the progressive change in software engineering process over the past twenty years. While the cartoon portrays the change as evolutionary, it has also been revolutionary. Continuous development and continuous operations enable developers to deliver rapid iterations on their products in response to customer needs–meaning what previously took months to deliver can now be done in a matter of days or hours.
When I joined Sauce Labs a few months ago as the Vice President of Engineering, I knew I wasn’t signing on for your typical, run-of-the-mill engineering management job. Sauce Labs is a young company, working with a new technology, that is experiencing tremendous growth. With its roots in open source, and a strong ethos formed around supporting individual developers, Sauce Labs is evolving into an enterprise software company. In order to scale to meet the demands of large enterprise customers, the Engineering discipline at Sauce Labs would also need to evolve. So I wasn’t just accepting a job, but accepting responsibility for leading the Sauce Labs Engineering team through a critical journey that can be challenging but also very rewarding. I’ve done this before at companies like Citrix Online, Lynda.com, and GoDaddy, so I went into this with eyes wide open for the journey ahead.
March 16th, 2016 by Ashley Hunsberger
Behavior Driven Development, or BDD, can help get your teams building the RIGHT product. Although I’ve heard the term used interchangeably with Test Driven Development (TDD), I personally see it as an extension of TDD to help your team focus on the business’ goals. While TDD provides tests that drive development, those tests may or may not be helping you meet those goals.
The WHY Behind the Code: BDD vs. TDD
Behavior Driven Development:
- Start with business value, then drill down to feature sets
- Team gets feedback from the Product Owner

Test Driven Development:
- Lots of tests that may or may not meet the business value
- Coder gets feedback from the code
BDD is a more outside-in approach that is really focused on business drivers. It takes TDD a step further (as you still want that feedback from the code), but it now gives you feedback on the feature.
So what is the general process to use BDD? It aligns nicely with the Agile framework and is simply a WAY to implement Agile. Continue with your usual scrum activities: milestone planning, defining user stories and acceptance criteria, and developing, and iteratively repeat that process until you are ready to release.
March 14th, 2016 by Greg Sypolt
Using Cucumber with outlined best practices in your automated tests ensures that your automation experience will be successful and that you’ll get the maximum return on investment (ROI). Let’s review some important best practices needed before you start developing Cucumber tests.
Feature files help non-technical stakeholders engage and understand testing, resulting in collaboration and bridging the communication gap. For this reason, well-written feature files can be a valuable tool for ensuring teams use the BDD process effectively.
Here are some suggested standards for Cucumber Feature files:
Organization – Feature files can live at the root of the /features directory. However, features can be grouped in a subfolder if they describe a common object. The grouped filenames should represent the action and context that is covered within.
Feature Files – Every *.feature file consists of a single feature, focused on the business value. A typical skeleton:

```gherkin
Feature: Title (one line describing the story)

  Narrative Description: As a [role], I want [feature], so that I [benefit]

  Scenario: Title (acceptance criteria of user story)
    Given [some context]
    And [some more context]...
    When [some event]
    Then [some outcome]
    And [another outcome]...
```
Background – The background needs to be used wisely. If you use the same steps at the beginning of all scenarios of a feature, put them into the feature’s background scenario. The background steps are run before each scenario.
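For instance, a hypothetical feature where every scenario starts from a logged-in state might factor those steps into a Background:

```gherkin
Feature: Account dashboard

  Background:
    Given I am logged in as "jane"
    And I am on my dashboard

  Scenario: View recent activity
    When I open the "Activity" tab
    Then I should see my recent activity

  Scenario: Update my profile
    When I open the "Profile" tab
    Then I should see my profile details
```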
Scenarios – Keep each scenario independent. The scenarios should run independently, without any dependencies on other scenarios.
Scenario Outline – If you identify the need to use a scenario outline, take a step back and ask the following question: Is it necessary to repeat this scenario ‘x’ number of times just to exercise the different combinations of data? In most cases, one time is enough for UI level testing.
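When a scenario outline really is warranted (for example, validation logic that differs only by data), it looks like the hypothetical sketch below; each row in the Examples table re-runs the whole scenario, which is exactly the cost to weigh at the UI level:

```gherkin
Scenario Outline: Rejected login attempts
  Given I am on the login page
  When I log in with email "<email>" and password "<password>"
  Then I should see the error "<message>"

  Examples:
    | email            | password | message                   |
    | jane@example.org | wrong    | Invalid email or password |
    |                  | secret   | Email is required         |
```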
Write Declarative Scenarios, Not Imperative – The declarative style describes behavior at a higher level, which I think improves the readability of the feature by abstracting out the implementation details of the application.
An imperative scenario spells out every UI interaction:

```gherkin
Scenario: User logs in
  Given I am on the homepage
  When I click on the "Login" button
  And I fill in the "Email" field with "firstname.lastname@example.org"
  And I fill in the "Password" field with "secret"
  And I click on "Submit"
  Then I should see "Welcome to the app, John Doe"
```

The same behavior, written declaratively:

```gherkin
Scenario: User logs in
  Given I am on the homepage
  When I log in
  Then I should see a login notification
```

Just avoid unnecessary details in your scenarios.
Given, When, and Then Statements – I’ve often seen people writing Gherkin get confused about where the verification step belongs in the Given, When, Then sequence. Each statement has a purpose.
- Given is the pre-condition to put the system into a known state before the user starts interacting with the application
- When describes the key action the user performs
- Then is observing the expected outcome
Just remember that the ‘Then’ step is an acceptance criterion of the story.
Tagging – Since tags on features are inherited by scenarios, please don’t be redundant by including the same tags on scenarios.
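As a small illustrative example, a tag placed on the feature applies to every scenario beneath it, so only scenario-specific tags need to appear at the scenario level:

```gherkin
@smoke
Feature: Login

  # Already @smoke via the feature-level tag; no need to repeat it
  Scenario: Successful login
    Given I am on the login page
    When I log in with valid credentials
    Then I should see my dashboard

  # Carries @smoke from the feature plus its own @wip tag
  @wip
  Scenario: Password reset
    Given I am on the login page
    When I request a password reset
    Then I should receive a reset email
```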
March 11th, 2016 by Bill McGee
Thanks to everyone who joined us for our recent webinar, “Achieve True Continuous Integration with Sauce Labs and Microsoft Visual Studio Team Services”, featuring Sauce Labs Product Manager Jack Moxon. In his presentation, Jack shows the new Sauce Labs plugin for Visual Studio Team Services (VSTS) and how to launch tests on Sauce Labs as part of a VSTS build, enabling Continuous Integration (CI).
Jack also shows how the plugin allows users to launch Sauce Connect – a tunnel that allows customers to securely test pre-production web and mobile apps in the cloud. The plugin also integrates Sauce Labs test results and detailed reports (like videos, screenshots and logs) back into Visual Studio and TFS to enable collaboration and expedite the debugging process.
The webinar covers:
- How Sauce Labs and Visual Studio Team Services (VSTS) are integrated for Continuous Integration
- Launching automated tests as a part of a VSTS build
- How to test pre-production apps securely behind your firewall
- Sharing detailed test results and reports to collaborate and debug faster
Want to learn more about the Sauce Labs plugin for VSTS? We’ll be exhibiting at Microsoft Build in San Francisco March 30 – April 1, 2016. The conference is sold out but you can add a reminder to your calendar to make sure you don’t miss the live stream.
Access the recording HERE and view the slides below:
March 10th, 2016 by Yaroslav Borets
We have released a brand new desired capability shortcut that will allow you to quickly take advantage of the latest desktop browser versions as soon as they are available on Sauce Labs. We’ve noticed that most of our customers perform the majority of their testing on the latest browser releases and, as such, they have to manually adjust their desired capabilities whenever a vendor releases a new browser build.
As part of this update, you can now specify “latest” as the browser version in your desired capabilities, and Sauce Labs will always serve the latest stable version of the chosen browser. Additionally, users who are interested in testing on browsers that are several versions behind latest can specify “latest-1”, “latest-2”, “latest-3”, or, in general, “latest-n” to obtain previously released browser builds. Currently this functionality extends to all of our desktop browsers, including Chrome, Firefox, Safari, and Internet Explorer.
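A minimal sketch of what this looks like in practice, assuming a Python test suite and commonly used desktop capability names (`browserName`, `version`, `platform`); adjust to your own client library:

```python
# Sketch: using the "latest" version shortcut in desired capabilities.
# Platform and browser names here are illustrative choices.

def browser_caps(browser, version="latest"):
    """Desired capabilities pinned to a rolling version alias."""
    return {
        "browserName": browser,
        "version": version,      # "latest", "latest-1", "latest-2", ...
        "platform": "Windows 10",
    }

current = browser_caps("chrome")                # newest stable Chrome on Sauce
previous = browser_caps("firefox", "latest-1")  # one release behind latest
```

The benefit is that these capabilities never need manual edits when a vendor ships a new browser build; the alias resolves on the Sauce Labs side at run time.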