Posts Tagged ‘testing’

Bleacher Report Uses Appium by Sauce to Achieve “Olympic Quality” Results to Win at QA Testing (VIDEO)

February 28th, 2014 by Amber Kaplan

It’s an Olympic year, and Bleacher Report gets the gold when it comes to QA mobile testing. Winning!

Bleacher Report is a digital media company that delivers engaging content to sports fans all over the world. With more than half of their traffic coming from mobile devices, the team knew they needed to find a way to automatically test their mobile apps and mobile web experience. So they turned to Sauce Labs and Appium. Check out the video below to learn more.

Are you using Appium to test all your things? We’d love to hear what you’re working on. Leave us a comment on this post, tweet at us, or sign up for a free Sauce account to start testing.

Announcing iOS 7 support in the Sauce Labs Cloud

October 31st, 2013 by Ashley Wilson

Today we’re excited to let you know that you can now test your hybrid and native apps with iOS 7 on Sauce. With more than 200M downloads since its release, we know that this is an important platform for our users to test on.

To get started, visit our browsers and platforms page for Appium and copy the DesiredCapabilities provided for whichever version of iOS (or Android) you want to test on. And if you’re new to mobile test automation in general, check out our Getting Started with Appium guide.
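
If you’re curious what that looks like in code, here’s a rough Python sketch. It’s a sketch only: the capability names, app path, and credentials below are illustrative placeholders, so copy the exact DesiredCapabilities from the platforms page rather than from here.

# Rough sketch only: capability names/values are placeholders; use the real ones
# from the browsers and platforms page for the iOS (or Android) version you're targeting.
from selenium import webdriver

desired_caps = {
    "platformName": "iOS",                # illustrative
    "platformVersion": "7.0",             # illustrative
    "deviceName": "iPhone Simulator",     # illustrative
    "app": "sauce-storage:my_app.zip",    # hypothetical app uploaded to Sauce storage
}

driver = webdriver.Remote(
    command_executor="http://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities=desired_caps,
)
# ... exercise your app here ...
driver.quit()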

If you have any issues, give us a shout. Otherwise, happy (iOS 7) testing!

Come Visit Sauce at Apps World 2013

October 11th, 2013 by Bill McGee

Want a great reason to visit London later this month? Our very own Santiago Suarez Ordoñez, Senior Infrastructure Developer, will be sitting on an Apps World 2013 panel on October 23rd titled “Best practices in managing QA testing for the multi-device nightmare”. Apps World 2013 is Europe’s largest multi-platform app event and takes place at Earls Court 2 in central London on the 23rd and 24th of this month.

The panel will be moderated by Martin Wrigley, Executive Director, App Quality Alliance (AQuA) and Santi will be alongside Matthew Brown (Test Manager, Apptivation), Philipp Benkler (Founder & CEO, Testbirds), Paul Rutter (Test Manager – Mobile Platforms, BBC) and Becky Wetherill (Product Director, Borland) – a royal QA roster!

Topics to be discussed include how many devices to test, whether emulators can be an effective substitute for testing on actual devices, how to balance functional, compatibility, and performance testing requirements across devices, issue-resolution strategies post-launch, and more.

Once you’ve tucked the panel under your belt (and all of the juicy take-aways), please be sure to take a quick jaunt to stand 217 and say hello to the Saucers! We’ll be demo’ing Appium, an open source test automation framework that aims to turn multi-device QA testing from a nightmare into a dream. Did we mention there will also be a spiffing robot?

So hop on the tube and come join the Saucers at Apps World 2013. Cheerio!

Break Things Faster with Ruby Parallelization

February 25th, 2013 by Dylan

You’re probably wasting two things when you’re testing. Your time… because the other thing is your extra CPUs. Any time you’re waiting on a relatively slow resource, your CPU is just sitting there, twiddling its silicon thumbs. If you’re only using one CPU core at a time, the other cores are doing much the same. Unfortunately for web developers who do things involving CRUD operations, slow resources include databases. Most unfortunately for you, Dear Reader (and us, Dear… Us), they also include the browsers you’re integrating with for Selenium testing.

[Image: “This is what waiting for browsers and I/O should be measured with.” Source: http://www.flickr.com/photos/calliope/440681335/]

One of the best ways to get more test for your tokens is to run more tests at once. If you’ve got several tests going, even if some are waiting for one slow resource, the others can use the CPU. The more tests you can run at once, the shorter your test cycles will be, especially if you could, say, spin up more than one copy of the slow resource to test with (Psst: I’m talking about Sauce Labs Parallelization).

I’ve been looking for ways to make parallelization easier when testing with Ruby, and I stumbled across the Parallel Tests gem. It’s actively developed, has some nice documentation and integrates with rspec, rspec-rails and test:unit. Their benchmarks show that using the gem, the Rails ActionPack test suite time was cut in half, from 88 seconds to 44, with just 4 test runners. This, conveniently, is the number of parallel tests you can run with a Mild plan.
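
The gem itself is Ruby, but the underlying idea is language-agnostic. As a rough illustration of the concept (not of the gem’s mechanics), here’s a hedged Python sketch, with placeholder credentials and hypothetical URLs, that runs four Sauce browser sessions concurrently so the waiting overlaps:

# Illustration of the parallelization concept, not the parallel_tests gem: run
# several remote Selenium sessions at once so time spent waiting on browsers overlaps.
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

SAUCE = "http://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"  # placeholder

def check_title(url):
    driver = webdriver.Remote(
        command_executor=SAUCE,
        desired_capabilities={"browserName": "firefox", "platform": "Windows"},  # placeholder combo
    )
    try:
        driver.get(url)
        return url, driver.title
    finally:
        driver.quit()

# Hypothetical pages under test.
urls = ["http://example.com/a", "http://example.com/b",
        "http://example.com/c", "http://example.com/d"]

# Four workers, matching the four parallel tests a Mild plan allows.
with ThreadPoolExecutor(max_workers=4) as pool:
    for url, title in pool.map(check_title, urls):
        print(url, "->", title)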

[Image: “Checkmate. Wait. Snap. Game, set, match?” Source: http://www.flickr.com/photos/59937401@N07/5857826966/]

So for many of you, there’s already a possibility of making your tests take half the time. Which means you can run twice as many. Which means a much faster turnaround time for TDD, BDD, and release testing.

I’d say that’s a win.

Announcing “Open Sauce,” free unlimited testing for Open Source projects

December 13th, 2012 by Ashley Wilson

For about a year, we’ve been quietly giving free testing support to some high profile open source projects, like Mozilla and the Selenium Project. As open source advocates and contributors ourselves, we know it’s important to support projects that we benefit from on a regular basis. And what better way to do it than by providing the infrastructure that helps ensure new releases are fully tested?

Given that history, we are very excited to announce today that we’re taking this effort one huge leap forward and providing free “Open Sauce” accounts to any OSS project that could benefit from our automated and manual cross-browser testing services.

If you have an open source project, simply sign up here and enter your project’s repository URL and a description of what it is. You’ll automatically be enrolled in the plan, which provides unlimited testing minutes on up to three parallel VMs and access to all our features, including 96+ OS/browser combos, screenshots, debugging tools, and more.

In exchange, we just ask that you agree you will only use this account for your OSS project(s) and that all of your test results (videos, screenshots, and the Selenium log for Selenium testing) will be publicly accessible (we actually think this is an awesome thing, as it makes it super simple to share the tests with other developers).

For more info, visit our Open Sauce page or sign up for an account. And if you’d like for us to list your project on Open Sauce, let us know.

Happy (open source) testing!

The Eschaton: What The End Game Looks Like For Testing with Selenium

November 13th, 2012 by The Sauce Labs Team

This is the first in a series of posts by QAOnDemand, which offers self-service QA scripting and testing. For more info, visit http://qaondemand.com.

In theological circles, “The Eschaton” is defined as the end of time. In fact, there’s a whole field of study called Eschatology that studies what the end of the world looks like. While I have to believe their office parties are pretty grim, they’re definitely on to something. Visualizing the end result of what you’re building before ever laying down any foundation can lead to better decision-making, which is a key thing to keep in mind when setting up a mature testing environment for the first time.

To help you with this effort, I’ve devised a zombie-readiness kit covering how to set up the five key components of a mature Selenium testing environment:

  • A place to store your tests (source code repository)
  • A place to run your tests (Sauce Labs or a Selenium server)
  • A mechanism to trigger your tests (continuous integration server like Jenkins)
  • A place to log and track defects (a bug tracker like Bugzilla)
  • An army of human testers to test what cannot be (or has not yet been) automated

While you can certainly test software without all five in place, it’s radically more productive with everything integrated and running smoothly. I’ll be diving into each component in more detail, so let’s get started with the guide!

A place to store your tests.

The Z: drive of your network is not a place to store tests. Neither is Dropbox. Your tests belong under source control. All of them. And by all, yes, I mean both manual test scripts and your test automation code.

There are many reasons why bringing your test assets under source control is a good idea, but here are the top 26:

A) Continuous integration servers are designed to pull from repositories. This means that if you put your test code in a well-organized repository, it’s relatively easy to configure your CI server to pull the latest copies from the repo and execute your tests automatically. Plus when a test fails, it’s also easy to take a quick look at your revisions to see if a recent change to the test code could be responsible for the failed test.

B) Products like the Atlassian suite are designed to broadcast repository activity. This is a good thing. Too often, QA gets a siege-like quality where we only come up from the dungeon when there’s a problem or free food. The truth is that by continuously broadcasting QA test results into the main communication streams of your company, you normalize the process of QA for everyone outside the group. QA test results become less of an interrupt and more routine. That’s a good thing and a worthy goal.

C-Z) Revision control. If your test code isn’t under revision control, then you’ve been living in the woods too long. You’s ignorant so listen up! If it’s worth doing, it’s worth keeping track of. The ability to trace changes over time is one of the most underrated tools out there. If something breaks and you’ve got to pop open the code, the first thing to look at is what has changed. Did the code change? Did the test change? With Git, CVS, SVN, Mercurial, whatever, you can easily see the evolution of your test code. It solves so many problems and enables so many good things such as skills development, accountability, and humility.

A place to run your tests.

You’re reading this on Sauce Labs’ blog so this hardly needs mentioning, but there’s a nuance here worth talking about. One of the best qualities of Sauce Labs is the visibility it creates. The value in being able to rerun a test or capture a screenshot cannot be overstated. Whether you’ve provided a manual step-by-step set of instructions or a link to a screencast, the first step in remediating a bug is to reproduce it and observe it in action. So if that “place to run your test” is “Joey’s Laptop,” then you’re going to have a bad time. But if it’s a generally available service that anyone can access, it’s going to be a whole lot more fun.

A mechanism to trigger your tests

We see a lot of test teams “kicking off” tests manually. This is fine; there are lots of cases where you need to do this, but it’s waaaay better when a continuous integration (CI) server does this for you. Getting your CI to manage your test execution is tricky. Now I’d love to tell you there’s a simple script ./make-it-so.py, but alas, there is not. There are different types of builds, different deployment scenarios, and different types of check-ins that should fire off different types of tests.

But the net-net is that in the end, you want your QA process to be seamlessly integrated into the development process. And increasingly, CI drives development. Consider this question: in five years, are continuous integration and automated deployment going to be more or less prevalent? More common or less common? The answer is yes, so why bring up the rear of the parade? Get out in front. The sooner your QA process is wired into your CI, the easier it’ll be in the end.

A place to log and track defects

I’m going to move quickly through this point because most of you probably have this at some level. The one feature I’ve seen in the last few years that’s really been a huge boon to development is tying tickets to checkins, such that you can see the code that is related to the ticket. This makes it infinitely easier to quickly find the business requirements associated with a ticket and cross-reference it to the code itself. If you can tie your test code to tickets in a similar fashion, it’s so much the better because you’ve created visibility for both the created tests and related revisions. I highly recommend you select bug-tracking software that does this. It’s absolutely worth every expense.

An army of human testers to test what can’t be (or hasn’t yet been) automated

In its heart of hearts, quality assurance is a request for an opinion that is inescapably a human observation. “Does it work as you expect?” can sometimes be described in a way that can be automated. Sometimes it can be described in a way that a tester with nominal knowledge of the system can test. Sometimes it takes a domain expert to tell if something works as expected.

The point is, human testing always has been and always will be a part of QA. Anyone who says it can be completely automated is either an academic or a fool. So plan for it and work towards an end game that uses human testers in a productive and efficient way. We think it’s helpful to break up the problem of organizing human-based QA work into three buckets using the following guidance.

  • a) Automate the simple stuff that’s easy to maintain or is tedious. Automate the routine stuff that won’t break often, but when it does, it’s catastrophic. There’s no shortage of people who will tell you that with a good framework and their secret sauce, you can automate everything. You can’t. And more importantly, it’s not worth it. Automation is fundamentally an economic problem, not an engineering problem. You should automate only those checks where the cost of automation is meaningfully less than the cost of just running the check by hand whenever it’s needed. The bottom line? Don’t get dragged into complex automation strategies. Automate the simple routine stuff in a simple and routine way.
  • b1) Outsource the intermediate stuff, such as layout, copywriting, new features, and regression-testing fixes for a well-defined but complex bug. Outsourcing works great where cultural nuance, domain expertise, and a qualified point of view don’t matter all that much. The overhead is much lower than it used to be, and in the sage words of Eric S. Raymond: “Given enough eyeballs, all bugs are shallow.”
  • b2) Outsource the creation of simple automation — you know, like writing your Sauce Labs tests. Just sayin’. When a well-defined bug gets checked, have an outsourced team write an automated test to check it again. Test harnesses with a thousand tests get built one test at a time. ***
  • c) Use your domain experts as big guns. Focus them on the hard stuff – subtle features that require an understanding of how the code works or how a customer sees the world. If your in-house QA engineers are testing to see if your upload feature correctly kicks out an error for oversized files or disallowed formats, then you are wasting valuable expertise. Again, it’s helpful to think about testing as an economic problem and price your top talent. Give your top dogs a fully loaded cost and socialize the notion that it’s not a few hours, it’s a few dollars. Remind people of what they’re asking for in economic terms – that a detailed, cross-browser, multi-platform manual test by your in-house team is easily a $1,000 request. A simple question to posit: “Is that the best way to spend $1,000?”

So, what now?

In conclusion, take time to work through your endgame. Almost all QA work is time-bound, meaning we all work as hard and as fast as we can testing until the clock runs out. If nothing explodes during testing, we ship. It’s not fair, but that’s the way it is. So if you don’t block off some time each cycle to build toward a better way of doing things, then you’re selling yourself and your team short. Hopefully this post gave you a little perspective on what that end-game could look like.

In the next post, we’ll get into specifics of some of the tools we’ve found to be most helpful and more specifics about what works for us.

*** Look, QAOnDemand basically does B1 and B2 for a living. We see a lot of people’s QA efforts. Our services are structured and priced to make it easy for you to say yes to modest outsourcing. We’ll knock out your intermediate testing without crushing you with a big contract or a lot of overhead. Plus we offer a decent free trial with no strings attached, so give it a go. We’ll also bootstrap your Sauce Labs environment for you if you haven’t done so already. It’s much easier to add to a working system once it’s set up correctly. Net-net, a little help in the beginning goes a long way. Ok, ’nuff said.

Python Virtualenv

July 26th, 2012 by jeremy avnet .:. brainsik

A Python virtualenv is a Python interpreter plus a set of installed Python packages. Packages are installed separately from the main system, so you don’t need to use sudo/su or worry about installing things system-wide. Since the interpreter (the python you run) and packages in a virtualenv are separate from those in other virtualenvs, you can switch between different versions of Python and different versions of installed packages with a single command.

Using virtualenv lets you do things like:

  • Replicate your production Python environment in a dev setup so you can be sure you’re writing code and tests using the same package versions your deployed code will use.
  • Create environments with the same set of Python packages but using different versions of the Python interpreter (e.g., Python 2.5, Python 2.7, and PyPy).
  • Set up experimental environments for trying out new Python package versions or new Python software projects.

The easiest way to get going with virtualenv on Mac and Linux is to use the virtualenv-burrito installer. This one-line command installs virtualenv and virtualenvwrapper (a nice way to use virtualenvs):

curl -s https://raw.github.com/brainsik/virtualenv-burrito/master/virtualenv-burrito.sh | $SHELL

Once installed, you can make new virtualenvs with mkvirtualenv <name>, install packages with pip install <package>, and switch between virtualenvs using workon <name>.

Let’s make a virtualenv and run the example Sauce Labs test:

mkvirtualenv saucelabs
pip install selenium

Log in to (or create a free account on) saucelabs.com. Go to the Python getting started page, copy your private curl command, and run it like:

curl -s https://saucelabs.com/example/se2/python/private-?????? | bash
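
If you’d like a feel for what that example does before piping it into bash, here’s a stripped-down sketch. It’s an approximation, not the exact script the curl command fetches, and the browser/platform combo and credentials are placeholders:

# Stripped-down sketch of a Sauce OnDemand Selenium test in Python; the real example
# script you download already has your credentials filled in.
from selenium import webdriver

desired_capabilities = {
    "browserName": "firefox",   # placeholder browser
    "platform": "Windows",      # placeholder platform
}

driver = webdriver.Remote(
    command_executor="http://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities=desired_capabilities,
)
driver.get("http://saucelabs.com/test/guinea-pig")  # a simple public page to check
print(driver.title)
driver.quit()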

To install the same Python package versions in another virtualenv we can use pip freeze to get what’s in our current environment, save it to a file, and use that file as an install list:

workon saucelabs
pip freeze > requirements.txt
mkvirtualenv likesaucelabs
pip install -r requirements.txt

By developing, testing, and running production Python code in virtualenvs created using the same requirements files, you greatly reduce the risk of writing bugs which only show up in one environment but not another. We highly recommend using this tool to help ship code faster.

Sauce OnDemand Now Supports Selenium 2.1.0

August 1st, 2011 by Santiago Suarez Ordoñez

To keep up to date with the releases pushed by the Selenium project, we’ve made Selenium version 2.1.0 fully available in our service.

This new release includes a major fix for an important bug affecting some native clicks on elements. You can check out the official changelog for more information.

Due to our new release process, there will be a testing period before we make this the default version in our service. (Once we’ve decided to do so, we’ll announce it in advance). In the meantime, we advise you to try out your tests in this new version using the following Desired Capabilities/JSON key-value:

"selenium-version": "2.1.0"

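In Python, for example, pinning the version could look something like this (a sketch only; the browser and platform values are placeholders, and your existing capabilities stay as they are):

# Sketch only: add "selenium-version" to the desired capabilities you already send.
from selenium import webdriver

caps = {
    "browserName": "firefox",      # placeholder
    "platform": "Windows",         # placeholder
    "selenium-version": "2.1.0",   # pin Sauce OnDemand to Selenium 2.1.0
}

driver = webdriver.Remote(
    command_executor="http://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities=caps,
)
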
We’d love to hear if you see any issues after moving your tests to Selenium 2.1.0. And stay tuned, as we’ll be announcing 2.2.0 as well as other versions through our blog too!

#SeConf Videos Now Available!

May 2nd, 2011 by Ashley Wilson

In case you missed the awesome Selenium Conference that happened in early April, check out videos from each of the presentations. We’ll be posting a couple at a time, so stay tuned to future posts!


Jason Huggins’ Opening Keynote


Dave Hunt & Andrew Smith: Automating Canvas Applications
Despite recent improvements to automated testing tools, there’s still a large gap when it comes to emerging technologies such as HTML5. Recent developments like the canvas element present an interesting dilemma for traditional automated testing as they expose little or no information to debug tools. In order to move forwards, both developers and testers will need to work together. Using Selenium, Java, and JavaScript we will demonstrate writing automated tests for a popular canvas game.


Dima Kovalenko: Selenium and Cucumber
Wouldn’t it be nice to have the BAs write out the acceptance criteria in plain English, and then have those criteria run as tests? Join us for a beginner-to-intermediate walkthrough of Cucumber and Selenium. Learn how to write tests that are easy to understand and run. There will be plenty of examples and sample code to get you going in the right direction.

How to serve PHP/Pear packages with GitHub

April 27th, 2011 by The Sauce Labs Team

PHP packages are distributed through Pear channels. If you want to download a PHP package, it’s as simple as downloading pear and using it: you tell pear which “channel” to download from, then tell it which package to fetch. That’s what you do if you want to download somebody else’s PHP. If you want other people to be able to download yours, you have to make your own pear channel.

We had to do that recently, and it turns out there’s an easy way to do it thanks to GitHub and Fabien Potencier’s Pirum. It’s pretty straightforward. Here come the lists. So many lists! Like 40. Still. Straightforward.

By the way, you’ll need to install pear and git if you haven’t already.



Make a pear channel

  1. Create a new repository on GitHub called pear
    1. Make the project on github
    2. $ mkdir pear
    3. $ cd pear
    4. $ git init
    5. $ git remote add origin git@github.com:[your git username]/pear.git
  2. Install Pirum
    1. $ pear channel-discover pear.pirum-project.org
    2. $ pear install pirum/Pirum-beta
  3. Create a pirum configuration file:
    1. It’s called pirum.xml
    2. It goes in the root of your pear repository
    3. It contains:
      1. <?xml version="1.0" encoding="UTF-8" ?>
        <server>
        <name>[username].github.com/pear</name>
        <summary>[username]'s PEAR Channel Server</summary>
        <alias>[username]</alias>
        <url>http://[username].github.com/pear</url>
        </server>
  4. Run the build command
    1. $ pirum build .
  5. Now add and commit everything
    1. $ git add -A
    2. $ git commit -m "Initial server build. Sauce Labs is awesome"
  6. Rename your master branch to gh-pages and push it to GitHub
    1. $ git branch -m master gh-pages
    2. $ git push origin gh-pages
  7. Your PEAR channel server is now available (after maybe 15 minutes) under [username].github.com/pear. Test it out!
    1. $ pear channel-discover [username].github.com/pear
    2. $ pear channel-info [username]
    3. $ pear list-all -c [username]



There! Now you have a pear channel. Now you need to

Make a PHP package

  1. Go to the directory that contains your PHP files
  2. Create a package.xml file that contains metadata about your package
  3. Check that it’s a valid package
    1. $ pear package-validate
  4. Make the package!
    1. $ pear package (this should create a .tgz file that’s named after the package you detailed in package.xml)



Woo! Now you have a package and a channel. Next step is to


Add the package to the channel

  1. Copy the .tgz file to your pear repository
  2. Navigate to that directory
  3. Add the package to the channel locally
    1. $ pirum add . [filename].tgz
  4. Upload the changes to github! Note that you push to gh-pages and not to master.
    1. $ git add -A
    2. $ git commit -m "Added first version of my pear package. Sauce Labs is awesome"
    3. $ git push origin gh-pages



And you’re done! No more bullets! Or numbered lists. Now the whole world is exposed to your PHP. Hopefully that’s a good thing.


Here’s our Pear channel: https://github.com/saucelabs/pear

Here’s the source we distribute through it: https://github.com/saucelabs/phpunit-selenium-sauceondemand

Special thanks to Jan Sorgalla for showing me by example how to do all this.