[Webinar] Managing Continuous Delivery of Mobile Apps – for the Enterprise

July 23rd, 2015 by Bill McGee

Today, almost all organizations have mobile apps and for some, the mobile app is their only way of interacting with customers. With this increasing emphasis on mobile, the pressure to routinely update mobile apps means embracing Continuous Integration (CI) and Continuous Delivery (CD) methodologies.

Enabling CI/CD in your mobile development process means understanding the different solutions, overcoming unique challenges, and ensuring the right ownership of the processes. In this webinar, join Harshal Vora from InfoStretch and Abhijit Pendyal from Sauce Labs to learn the steps required to enable Continuous Delivery of Mobile Application Platforms.

This webinar will cover:

  • Value of CI/CD in Mobile Development
  • CI/CD Architecture for Mobile Application Platforms
  • CI/CD Case Study – Requirements, Challenges and End Results
  • Demo – Jenkins / Code Update / Build Mobile App / Run automated tests using Sauce Labs

Join us for this presentation next Wednesday, July 29 at 11am PDT/2pm EDT. There will be a Q&A with Harshal and Abhijit afterwards.

Click HERE to register today.

Want to learn more about making Continuous Integration (CI) a part of your mobile development process? Download this free white paper, "Why Continuous Integration Should be Part of Your Mobile Development Process".

Using QA to Enhance Communication

July 21st, 2015 by Ashley Hunsberger

Have you ever worked on a project and found yourself constantly shaking your head? I can say that 99% of the time I experienced frustration, it was largely due to communication issues within a team. I've been on project teams and wondered if anyone there had ever taken a basic communications course and learned concepts like active listening, empathy, and being clear and concise. A team that can communicate will find success, but what about teams that aren't interacting well? Who can help your team get back on track? Believe it or not, your answer is the tester.

The Communication Breakdown

What exactly is keeping your team from communicating effectively? The answer may not be so obvious. In my experience, I've seen a few key contributing factors that include, but are not limited to:

The team does not own quality
Of course life would be easier if we could say, "Well, I did my job! I'm done! Time for the tester!" But how does that foster communication in a team if you just throw something over the proverbial fence? If you set out only to do your job and not to understand anyone else's, are you really being part of a team? You know the saying: "There is no I in TEAM." We usually hear that as kids when we first play sports, but it goes for software development projects, too!

Not seeing the big picture
In an Agile world, it is easy to focus on the small, granular user story level. But when you start to lose the big picture, communicating becomes more and more difficult. There have been projects I've worked on where I just didn't understand the business reasons (and I wasn't the only one). Can you guess how many of those projects succeeded?

Not knowing exactly what to build
So many times, I've seen (and worked on) projects where a designer says "build this." The engineer goes off and starts building, and the tester starts writing tests, presumably for the same feature. But if an engineer has a clarifying question and does not include the tester when asking the designer, or vice versa, then what? It's an easy recipe for getting off track when you work in a silo.

Enter the Tester

Your tester can help alleviate many communication gaps. It just requires a different way of thinking.  Take each of the breakdown factors I’ve mentioned, and let’s look at how this QA team member is uniquely positioned to enhance your team’s communication and collaboration.

Everyone owns quality! Yes, you too, engineers…
Rather than creating a team that thinks QA alone is responsible for the quality of a product, what if everyone had skin in the game? Of course, part of that is understanding each other's roles and the value of testing. A good tester will work with your team to decide what types of testing need to occur, who can take care of them, and how to free those people up to take on testing activities. One way to understand this is to take a look at the Agile Testing Quadrants.

[Image: Agile Testing Quadrants]

If everyone is concerned about quality, and everyone starts talking about quality, and they aren’t just checking out as soon as their task is ‘complete’, you’re already on the path to improved communication.  You take it to a whole new level when you get people INVOLVED in quality.

The 30,000 foot view
I mentioned before that it's very easy to get caught in the weeds looking at an individual user story. A tester should, however, know where that user story falls in the grand scheme of things, understanding the business value a feature brings as well as the impacts it may have on other features. On teams I have worked with in the past, we invested time in making sure any new QA team member learned the whole product, not just the one area they were working on. (Again, the tester should not be the only one with product knowledge.) That knowledge benefits everyone on the team, but your tester can help drive the conversations needed to understand a user story, take a step back to see where it all fits in, and guide testing in directions perhaps not thought of before.

Everyone on the same page
Let's imagine a world where we all go about building and testing the same product, with the same vision. Is that possible when you work isolated from your team? Let's look at a central testing practice for Agile teams called Specification by Example (also known as Acceptance Test-Driven Development, or ATDD). The key idea is that your tester, engineer, and designer all work together BEFORE anything is built to understand the feature, write the tests (your acceptance criteria), and know exactly how the feature will be tested. The feature is done when all tests pass. Manual or automated, this practice gets everyone on the same page. Your tester can help drive these conversations. I have often gone in with a list of tests, but typically just to elicit more questions and answers from the engineers and designers, gain clarity, and walk away with more (or changed) tests, so that we all understand the acceptance criteria because we wrote the tests together.
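
To make this concrete, here is a minimal, hypothetical sketch of what those shared acceptance tests might look like in Python. The discount feature, function name, and pytest-style tests are all invented for illustration; the point is that the examples are agreed on and executable before the feature exists.

# Acceptance tests the tester, engineer, and designer agree on up front.
# The discount rule and function name are hypothetical.

def test_ten_percent_discount_applied_to_orders_over_100():
    assert apply_discount(order_total=120.00) == 108.00  # agreed example

def test_no_discount_for_orders_of_100_or_less():
    assert apply_discount(order_total=100.00) == 100.00  # agreed example

# The engineer then develops until all tests pass; the feature is 'done'
# only when they do.
def apply_discount(order_total):
    return order_total * 0.9 if order_total > 100 else order_total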

What are you waiting for?
I hope these concepts get you going in the right direction. Use your tester! QA is not just there to execute tests; testers are uniquely positioned to help foster communication and collaboration on your team, and to make sure everyone is invested in quality.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger


Recap: What DevOps Is – and Why You Should Care [Webinar]

July 16th, 2015 by Bill McGee

Thanks to everyone who joined us for our recent webinar, "What DevOps Is – and Why You Should Care", featuring DevOps Analyst Chris Riley. In his presentation, Chris discussed the meaning and history of DevOps, gave his perspective on the movement and his ideas about its future, and shared the knowledge he has gathered from tools vendors and practitioners, including:

  • The difference between the practice of DevOps and the movement
  • What the future of DevOps holds
  • The intersection of DevOps and QA

Missed the presentation, want to hear it again, or share with a colleague?

Access the recording HERE and view the slides below.

Want to read more about DevOps and Continuous Integration? Download this free GigaOm white paper by Chris Riley.

Don’t Let Them Leapfrog You

July 15th, 2015 by Ashley Hunsberger

The entry barrier for nearly all markets has been dramatically reduced, as continuous integration and delivery allows very small companies to leapfrog massive institutions. It's no longer just about having that extra feature to win clients; it's also about how you deliver. Let's see why companies that have adopted continuous delivery are leaving their competitors in the dust — and how you can, too.

A Tale of Two Companies

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way.” 
Charles Dickens: A Tale Of Two Cities (1859)

You may ask what on Earth one of my favorite novels has to do with testing and continuous delivery, but stay with me for a minute (or five). No, I won’t go into the French Revolution (though moving to continuous integration may feel like you are sparking a revolution in your development team). Let’s take a look at two businesses.

Meet Company A. They've been around a while. In fact, they were one of the first companies in their market. As with most older companies, Company A began as a waterfall shop and slowly transitioned to Agile, but still shows signs of waterfall (something I like to call Scrummerfall). They still get specs, they develop, and they test, albeit in defined sprint time frames. Tests may be written while development is going on, but are performed after development hands off to QA.

Now, meet Company B, Company A’s competitor. Small and lean, the team has adopted continuous delivery, and acceptance test driven development (ATDD). They write tests first, and develop until all tests pass. They build quality in, rather than check for it later.

And the Winner Is…

Can you guess which company is chipping away at the market? The clear winner here is Company B.  Embracing DevOps has allowed Company B to realize several benefits, including (but not limited to):

Faster time to market – Get a desired product into customers’ hands faster!

Better quality product – Build the quality in! Find the bugs before QA even gets the product, or even more importantly, before customers see it.

Reduced cost of development – The cost to find bugs early (write tests first, build until all tests pass) is MUCH lower than the cost of finding them later in traditional testing cycles. And if a bug is found by the client? The cost skyrockets.  Build the quality in and reduce costs down the line.

Higher customer satisfaction – Continuous integration (CI) yields faster delivery, which means faster feedback. Issues are fixed faster, which means products are in clients’ hands faster. The customer has trust in your company and product.


Source: IBM http://www.ibm.com/developerworks/devops/

A Cultural Shift

Getting your team on the road to continuous delivery is not just a process change. It is a huge cultural shift. It is a completely different way of thinking, and it does not lie only within the development team. How will Sales present the company? How will Marketing develop materials? How will Support handle the change in release cadence? How will product managers adapt to learn from customers? It’s a ripple effect that will impact the entire company, for the better.

The longer you take to embrace DevOps (and continuous delivery), the more your competition has a chance to move in. Take your company to the next level. Be the change!

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

An Efficient Release Made is a Penny Earned

July 9th, 2015 by Greg Sypolt

Introduction

Everyone wants to save money and deliver faster, right? Even so, you would be surprised how many software development organizations still do not practice continuous integration. They still depend heavily on expensive manual testing. Is your software development organization ready to change, to deliver faster with built-in quality and fewer meetings to boost productivity?

Without analysis, it is hard to realize the ROI (return on investment) for CI (continuous integration) and automated tests. Like any type of change, it requires analysis of investment cost, the timeline, and ROI before it’s adopted into your development organization.

Building In Quality

There is one common trend in software development organizations: 'building quality in' at the beginning rather than considering quality at the end of the process.

EVERYONE owns quality — yes, even the product designers and developers. Everyone has visibility into all types of tests upfront (unit, integration, e2e, and manual).

Why should engineers and designers be involved? They help us understand what is possible at the unit, business, and front-end layers. By having everyone involved, the team can ALL agree on the acceptance criteria.

By including the developers and designers in your test strategy, everyone owns quality. You are there to drive that quality — not just be a tester. Engineers are testers. Designers are testers. Everyone can contribute!

Building quality in prevents bugs rather than leading to the discovery of bugs at the end of the development cycle. That, in turn, raises your ROI. Here's how:

  • Built-in quality, reducing the number of bugs — since you are now preventing bugs, not just finding them.
  • Reduced manual testing, because testing has become automated, and most importantly, you’ve developed the right tests.
  • Quicker and more confident releases
  • Stabilized product that may give product management a chance to breathe and redefine the product strategy.
  • It is more fun for developers to work on a system that behaves predictably and doesn't break any time anyone touches it (and if it does break, you know immediately, not a week down the road).

Building quality in can make your system behave predictably. Slow adoption of continuous integration and automated tests makes it hard to realize ROI in this process. The most common (and hidden) cost in manual testing is finding a bug. So how do these effects translate into a dollar value?


Source: IBM Research, presented at AccessU Summit 2015

Here are two examples of the potential ROI improvement from 'building quality in' after adopting CI and automated tests, even if you don't see the savings immediately:

A developer commits bad code that breaks the build, sometimes crippling all manual testing for hours or days. With continuous integration and automated tests, a broken build is caught immediately and does not cripple your QA team.

Manual testing takes several days to complete, including reporting the test results. It is repetitive (and kind of boring) to perform the same tests over and over, and human error creeps in. Automated tests are repeatable and provide results within minutes, instead of making you wait days or weeks.

Wasting Time in Meetings

A simple way to boost productivity is to reduce the number of meetings, and to stop wasting time in the meetings you do hold, by creating simple guidelines.

Let’s acknowledge that meetings do have a place in our development lives. They are great tools for planning, sharing information, and quickly getting everyone on the same page. Then there are those recurring meetings, often scheduled for the sake of having a meeting. At those times, it doesn’t hurt to question why you need to be present.

Example:
You have five employees meeting in a conference room or virtually. Let’s say the average annual salary is $55K and the company’s recurring meeting is once per week for one hour. How much does this meeting cost your company? Approximately $6,500 per year.
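
As a rough sanity check on that number, here is a small sketch of the arithmetic, assuming about 2,080 paid hours per year, roughly 50 meeting weeks, and ignoring benefits and overhead:

# Back-of-the-envelope meeting cost; hours and weeks per year are assumptions
attendees = 5
average_salary = 55000                  # dollars per year
hourly_rate = average_salary / 2080.0   # roughly $26.44 per hour
meetings_per_year = 50                  # one-hour meeting, ~50 working weeks

annual_cost = attendees * hourly_rate * meetings_per_year
print(round(annual_cost))               # ~6,600 dollars, in the ballpark above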

Time spent together is not always time spent getting work done. It is important to create simple guidelines to eliminate wasteful meetings and save the company money. Here are some suggestions to help make meetings more productive:

  • Spend more time on the agenda and allow the attendees time to review.
  • Invite the right people — Don’t waste people’s time.
  • Always start the meeting on time, regardless of who may be late.
  • Avoid one-hour meetings unless they are completely necessary. You can even opt to switch that weekly one-hour meeting to a 10-15 minute stand-up meeting.
  • End the meeting at the agreed-upon time, even if the agenda is not finished.
  • Table any discussion that is not relevant to the agenda.

Conclusion

By following some of these suggested guidelines, reducing the number of meetings, and working smarter, your company will start to see real ROI. An efficient release made is a penny earned.

Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, coaching team members on how to write great automation scripts, and helping the testing community become better testers. Greg has spent most of his career working on software quality, concentrating on web browsers, APIs, and mobile. For the past 5 years he has focused on the creation and deployment of automated test strategies, frameworks, tools, and platforms.

[Webinar] What DevOps Is – and Why You Should Care

July 8th, 2015 by Bill McGee

DevOps has become the newest buzzword. You’ve probably seen or heard the term DevOps once, twice, a thousand times. But even the biggest supporters of DevOps will admit that the concept is creating as much noise and confusion as converts.

The practice of DevOps is not new, yet in the last two years it has seemingly dominated chatter within software development. But what does DevOps really mean? And what is the impact of DevOps on QA teams, if any at all?

Join us as DevOps Analyst Chris Riley shares the meaning and history of DevOps, his perspective on the movement, and his ideas about its future. He will share the knowledge that he has gathered from tools vendors and practitioners, all to help you navigate the sea of DevOps conversations to maximize the movement to your advantage.

This webinar will cover:

  • The difference between the practice of DevOps and the movement
  • What the future of DevOps holds
  • The intersection of DevOps and QA

Join us for this presentation next Tuesday, July 14 at 11am PDT/2pm EDT. There will be a Q&A with Chris afterwards.

Click HERE to register today.

Want to read more about DevOps and Continuous Integration? Download this free GigaOm white paper.

Appium + Sauce Labs Bootcamp: Chapter 3, Working with Hybrid Apps and Mobile Web

July 6th, 2015 by Isaac Murchie

Mobile applications can be purely native, or web applications running in mobile browsers, or a hybrid of the two, with a web application running in a particular view or set of views within a native application. Appium is capable of automating all three types of applications, by providing different “contexts” in which commands will be interpreted.

Contexts

A context specifies how the server interprets commands, and which commands are available to the user. Appium currently supports two contexts: native and webview. Both of these are handled by different parts of the system, and may even proxy commands to another framework (such as webviews on Android, which are actually served by a managed ChromeDriver instance). It is important to know what context you are in, in order to know how you can automate an application.

Native contexts

Native contexts refer to native applications, and to those parts of hybrid apps that are running native views. Commands sent to Appium in the native context execute against the device vendor's automation API, giving access to views and elements through name, accessibility id, etc. This context also allows commands that interact directly with the device, for operations such as changing the Wi-Fi connection or setting the location. These very powerful operations are not available within the context of a webview.

In addition to native and hybrid applications, the native context can be accessed in a mobile web app, in order to use some of the methods only available there. In this case it is important to understand that the commands are not running against the web application in the browser, but rather interacting with the device and the browser itself.
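
For example, here is a minimal sketch, assuming driver is an existing Appium session; the element name and coordinates are hypothetical:

# make sure we are in the native context before using device-level commands
driver.switch_to.context("NATIVE_APP")

# locate a native element by its accessibility id (hypothetical name)
login_button = driver.find_element_by_accessibility_id('Login')

# set the device's simulated GPS position: latitude, longitude, altitude
driver.set_location(49.28, -123.12, 10)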

Webviews

There are two types of webviews. The first is the bulk of a mobile web application. Indeed, all automation of a mobile web application is done within a webview context, though one can switch into the native context in order to take advantage of some of Appium's features for automating the device and handling the application life cycle. The second type of webview is the part of a hybrid application that is inside a UIAWebView (for iOS) or android.webkit.WebView (for Android). In the webview context, the commands that can be used are the standard WebDriver commands, giving access to elements through CSS selectors and other web-specific locators such as link text.

Mobile web is essentially a specialized version of a hybrid application: what would be the native portion of a hybrid application is the browser itself! As you automate your application you can step out into the native context in order to interact with the browser or with the device. When you begin automating a mobile web application, Appium automatically takes you into the webview context. If you have a hybrid application that begins in a webview, you can get the same behavior by setting the autoWebview desired capability to true, which enters the initial webview automatically. Otherwise the automation script will need to enter the webview before interacting with any elements.
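
A minimal sketch of starting such a hybrid-app session follows; the app path, device name, and server URL are assumptions:

# start a session that lands directly in the first webview
from appium import webdriver

desired_caps = {
    'platformName': 'Android',
    'deviceName': 'Android Emulator',   # assumed device
    'app': '/path/to/hybrid-app.apk',   # hypothetical hybrid app
    'autoWebview': True                 # enter the initial webview on startup
}
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
# the session begins in the webview context; no manual switch is needed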

Navigation

To move between contexts there is a method that takes the string name of the context to which you want to switch. The native context is named "NATIVE_APP", while the available webview contexts will have names like WEBVIEW_1 (for iOS) or WEBVIEW_io.appium.android.apis (for Android, where io.appium.android.apis is the running activity). The generic WEBVIEW will choose the first available webview. Switching is not necessary when automating a mobile web browser.

# switch to first available webview
driver.switch_to.context("WEBVIEW_1")

Once in the webview context, Selenium commands to interact with a web application can be used.

driver.find_element_by_css_selector('.some_class')
driver.find_element_by_partial_link_text('Home Page')

The source at this point will be the HTML of the page loaded into the webview, or into the mobile web browser.

To return to the native context (which is not necessary when automating mobile web applications), use the same command that got you into the webview, but ask to switch to the native context.

# switch back to native context
driver.switch_to.context("NATIVE_APP")

Now, in the native context, if you get the source you will get an XML document describing all the elements in the view itself, not the HTML, even if there is HTML being rendered in that view!
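
For example, the same page_source call that returns HTML inside a webview now returns the native hierarchy:

# in the native context, page_source is an XML description of the native views
native_source = driver.page_source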

Querying contexts

It is possible to get a list of the available contexts and choose the one to switch to. This has the added bonus of making your tests capable of handling changes in context naming, and of being the same across platforms. There will always be one (and only one) native context, named NATIVE_APP, and zero or more webview contexts, all of whose names will start with WEBVIEW.

webview = driver.contexts.last
driver.switch_to.context(webview)

Finally, you can retrieve the current context in order to make sure you are in the correct place, and to programmatically switch contexts at the correct time.

current_context = driver.context

# or
current_context = driver.current_context

Multi-tabbed web browsers

If your mobile environment supports tabbed browsing, as mobile Chrome does on Android, the tabs are accessible through the window commands in a webview context, just as in desktop browser automation!

# enter into the webview
webview = driver.contexts.last
driver.switch_to.context(webview)

# cycle through the tabs
for tab in driver.window_handles:
    driver.switch_to.window(tab)

# return to native context
driver.switch_to.context("NATIVE_APP")

Conclusion

The main thing about switching from a native context into a webview is that subsequent commands get proxied to a Selenium WebDriver session automating the browser that backs the webview. This makes it possible to run any WebDriver commands you would like! For instance, in a native context you cannot find an element using a CSS selector, but in a webview context that is perfectly reasonable. The underlying source for the app at that point is the HTML of the web page being displayed!

But Appium has a number of methods that are not available in normal WebDriver. In order to take advantage of these methods, one must be in the native context so that Appium itself handles the request rather than proxying it.
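
For instance, here is a minimal sketch, assuming driver is an existing Appium session, using an Appium-specific command that must be handled natively:

# switch to the native context so Appium itself handles the command,
# rather than proxying it to the webview's WebDriver session
driver.switch_to.context("NATIVE_APP")

# Appium-specific: send the app to the background for one second
driver.background_app(1)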

It worked on my machine – Communication Trap

July 1st, 2015 by Ashley Hunsberger

“I don’t see it on my machine!” said every developer ever. Every QA professional I have talked to in my career has heard this at least once.

But why?

Have we asked what’s in a bug?

The answer can either be what gets your team on the road to efficiency, or it can become a kink in the delivery hose. Let’s discuss how your QA can help the team deliver faster by providing a consistent language to keep everyone on target.

Don’t Let The Bad Bugs Bite…

Over the last decade, I have seen issues with almost no content in them (seriously, some just declared something to the tune of "This feature is… not working"). Then there are tickets that are the gold standard, with all the information you could possibly want (and probably some with more than you need, which turn out to be a few bugs in themselves).

But what happens when you don’t have a common way to report a ticket, and why is it important?

I came across an issue recently that seemed to have some steps to reproduce, but the setup was not included. Try as I might, I could not replicate the bug. The only way I could come close to the reported result did not match the steps provided, and I could only guess that the setup I created was what the reporter had done. I will let you guess how long this issue took. Hint: It wasn't a few hours.

Or perhaps you have an offshore team. I've seen many, many instances where someone reports a bug that just doesn't have enough information in it. If the engineer cannot figure out exactly what the issue is and has to send the ticket back to the reporter, the engineer waits another night while the person on the other side of the world (hopefully) notices the ticket is back in his or her queue for more details. That is another full day the bug exists, delaying when the root cause can be identified and the issue fixed.

Depending on the makeup of your team, and whether your setup is automated or manual, you need to consider how the issue will be verified. The person testing the fix (or writing the automated test to ensure the issue does not occur again) may not be the one who reported it. (Again, more time is spent figuring out how to test whether the fix is correct.)

The bottom line? The back and forth that occurs from a poorly reported bug is costly in terms of time and resources.

Cut The Chit Chat

Having a uniform language and template will help reduce uncertainty across the board and reduce the time a bug spends unresolved. But what should be included in a bug report to cut out this back and forth and keep the team on track? There are several other things you may want to consider adding, but these are some of the top things I like to see from a tester:

  • Summary/Title: This should be succinct yet descriptive. I try to make these sound almost like a user story: <user> <can/cannot> <do x action> in <y feature>. When I sit in a triage meeting, can I tell what the issue is just by reading the summary?
  • Environment: Every now and then we come across bugs that are very specific to the OS, database type, browser, etc. Without listing this information, it's all too easy to say 'Can't reproduce', only to have a client find it in the field.
  • Build: Hopefully you are testing on the latest build, but if for some reason you have servers that are updated at different rates than others, you need to pinpoint when exactly the bug was found.
  • Devices: If you're doing any type of mobile testing, what type of device were you using? What version? If you found a bug on the web app, do you see it on the mobile app too? Which one, Android or iOS?
  • Priority: The priorities are all relatively standard across the field — Critical, High, Medium and Low. Have criteria defined up front so everyone is on the same page as to what constitutes each selection.
  • Steps to reproduce: Not just ‘When I did this, it broke.’  Really break it down, from login and data setup to every click you make.
  • Expected Result vs. Actual Result: What were you expecting, and why?  What happened instead?
  • Requirements and Wireframes: Linking a bug back to the originating artifact points to why testing occurred and why someone wrote it up. Hopefully everyone is on the same page upfront, before development begins, but sometimes things slip through and an engineer has a different understanding of a feature than the tester. Being able to point back to why you think something is a bug is helpful, and gets you all on the same page.

Of course, there are people other than your traditional testers writing bugs, and it is essential to use your QA to drive conformity. Perhaps your UX team is performing audits, or you have bug bashes where people from other departments are invited to test the system and find bugs, or you have someone new to the team that simply needs training. Having a template will ensure clarity and reduce inefficiencies, regardless of who enters the ticket.
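
To make this concrete, here is a minimal, hypothetical example of a filled-in report following the fields above. The product, build number, and steps are invented for illustration:

Summary: Student cannot submit essay assignment in Safari
Environment: macOS 10.10 / Safari 8 / Oracle database
Build: 2015.07.01-1432 (hypothetical)
Devices: N/A (desktop web)
Priority: High (blocks a core workflow, per criteria agreed upfront)
Steps to reproduce:
  1. Log in as a student enrolled in any course
  2. Open an essay assignment and enter text
  3. Click Submit
Expected result: A confirmation appears and the submission is recorded
Actual result: The page spins indefinitely and nothing is recorded
Requirements/Wireframes: Linked to the originating user story (hypothetical ID ABC-123)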

Utilize QA to promote consistency, get bugs out of purgatory, and drive faster delivery.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

Recap: Test Automation Newbie? Robot Framework Will Save the Day! [Webinar]

June 26th, 2015 by Bill McGee

Thanks to everyone who joined us for our recent webinar, "Test Automation Newbie? Robot Framework Will Save the Day", featuring Bryan Lamb (Founder, RobotFrameworkTutorial.com) and Chris Broesamle (Solutions Engineer, Sauce Labs). The webinar demonstrated how you can use Robot Framework, an open source, generic framework, to create continuous automated regression tests for web, batch, API, or database testing. Topics covered included:

  • Where to find Robot Framework
  • How to install it
  • How to create automated test cases using plain English keywords
  • How to integrate automated tests into a Jenkins build
  • How to run test cases locally or on the Sauce Labs platform

Missed the presentation, want to hear it again, or share with a colleague?

Access the recording HERE and view the slides below.

Want to read more about using automated testing to get more out of your CI/CD workflow? Download this free white paper.

Guest post: Proving that an application is as broken as intended

June 25th, 2015 by Björn Kimminich

Typically you want to use end-to-end (e2e) tests to prove that everything works as intended in a realistic environment. In the Juice Shop application that idea is turned on its head: here, the main purpose of the e2e test suite is to prove that the application is as broken as intended!

Juice Shop: Broken beyond hope – but on purpose!

"WTF?" you might ask, and rightfully so. Juice Shop is a special kind of application. It is an intentionally insecure JavaScript web application designed to be used during security training, classes, workshops or awareness demos. It contains over 25 vulnerabilities that an aspiring hacker can exploit in order to fulfill challenges that are tracked on a scoreboard.

The job of the e2e test suite is twofold:

  1. It ensures that the overall functionality of the application (e.g., logging in, placing products in the basket, submitting an order) is working. This is the above-mentioned typical use case for e2e tests.
  2. It performs attacks on the application that should solve all the existing challenges. This includes SQL Injection, Cross-Site Scripting (XSS) attacks, business logic error exploits and many more.


When does Juice Shop pass its e2e test suite? When it is working fine for the average nice user and all challenges are solvable, so an attacker can get a 100% on the scoreboard!

[Image: Juice Shop logo]

Application Architecture

Juice Shop is created entirely in JavaScript, with a single-page application frontend (using AngularJS with Bootstrap) and a RESTful backend using Express on top of Node.js.


The underlying database is a simple file-based SQLite with Sequelize as an OR-mapper and sequelize-restful to generate the simple (but not necessarily secure) parts of the API dynamically.

Test Stages

There are three different types of tests to make sure Juice Shop is not released in an unintendedly broken state:

  1. Unit tests make sure that the frontend services and controllers work as they should. The AngularJS services/controllers are tested with Karma and Jasmine.
  2. API tests verify the RESTful backend is behaving properly when running as a real server. These tests are done with Karma and frisby.js for orchestrating the API calls.
  3. The e2e test suite performs typical use cases and all kinds of attacks via browser-automation using Protractor and Jasmine.


If all stages pass and the application survives a quick monkey test by yours truly, it will be released on GitHub and SourceForge.

Why Sauce Labs?

There are two reasons to run Juice Shop tests on Sauce Labs:

  1. Seeing the frontend unit tests pass on a laptop already gives a good feeling for an upcoming release. But there they run only on PhantomJS, not in a real browser. Seeing them pass on various browsers increases confidence in the release.
  2. The e2e tests must be executed before shipping a release. To make sure they are not skipped due to laziness or overconfidence ("Oh, it's such a small fix, what could it possibly break?" – sound familiar?), the e2e suite must be integrated into the CI pipeline.


Having laid out the context, the rest of the article will explain how both of these goals were achieved by integrating with Sauce Labs.

Execution via Travis-CI

Juice Shop builds on Travis-CI, which Sauce Labs integrates with nicely out of the box. The following snippet from the .travis.yml shows the necessary configuration and the two commands being called to execute the unit and e2e tests.

addons:
  sauce_connect: true
after_success:
- karma start karma.conf-ci.js
- node test/e2eTests.js
env:
  global:
  - secure: <your encrypted SAUCE_USERNAME>
  - secure: <your encrypted SAUCE_ACCESS_KEY>

Frontend Unit Tests

The karma.conf-ci.js contains the configuration for the frontend unit tests. Juice Shop uses six different OS/Browser configurations:

var customLaunchers = {
    sl_chrome: {
        base: 'SauceLabs',
        browserName: 'chrome',
        platform : 'Linux',
        version: '37'
    },
    sl_firefox: {
        base: 'SauceLabs',
        browserName: 'firefox',
        platform: 'Linux',
        version: '33'
    },
    sl_ie_11: {
        base: 'SauceLabs',
        browserName: 'internet explorer',
        platform: 'Windows 8.1',
        version: '11'
    },
    sl_ie_10: {
        base: 'SauceLabs',
        browserName: 'internet explorer',
        platform: 'Windows 8',
        version: '10'
    },
    sl_ie_9: {
        base: 'SauceLabs',
        browserName: 'internet explorer',
        platform: 'Windows 7',
        version: '9'
    },
    sl_safari: {
        base: 'SauceLabs',
        browserName: 'safari',
        platform: 'OS X 10.9',
        version: '7'
    }
};


In order to associate the test executions with the Travis-CI build that triggered them, some extra configuration is necessary:

    sauceLabs: {
        testName: 'Juice-Shop Unit Tests (Karma)',
        username: process.env.SAUCE_USERNAME,
        accessKey: process.env.SAUCE_ACCESS_KEY,
        connectOptions: {
            tunnelIdentifier: process.env.TRAVIS_JOB_NUMBER,
            port: 4446
        },
        build: process.env.TRAVIS_BUILD_NUMBER,
        tags: [process.env.TRAVIS_BRANCH, process.env.TRAVIS_BUILD_NUMBER, 'unit'],
        recordScreenshots: false
    },
    reporters: ['dots', 'saucelabs']


Thanks to the existing karma-sauce-launcher module, the tests are executed and their results are reported back to Sauce Labs out of the box. Nice. The e2e suite was a tougher nut to crack.

End-to-end Tests

For the Protractor e2e tests there are no separate configuration files for local and CI use, just one protractor.conf.js with some extra settings when running on Travis-CI to pass the necessary data to Sauce Labs:

if (process.env.TRAVIS_BUILD_NUMBER) {
    exports.config.seleniumAddress = 'http://localhost:4445/wd/hub';
    exports.config.capabilities = {
        'name': 'Juice-Shop e2e Tests (Protractor)',
        'browserName': 'chrome',
        'platform': 'Windows 7',
        'screen-resolution': '1920x1200',
        'username': process.env.SAUCE_USERNAME,
        'accessKey': process.env.SAUCE_ACCESS_KEY,
        'tunnel-identifier': process.env.TRAVIS_JOB_NUMBER,
        'build': process.env.TRAVIS_BUILD_NUMBER,
        'tags': [process.env.TRAVIS_BRANCH, process.env.TRAVIS_BUILD_NUMBER, 'e2e']
    };
}


The e2e tests are launched via e2eTests.js which spawns a separate process for Protractor after launching the Juice Shop server:

var spawn = require('win-spawn'),
    SauceLabs = require('saucelabs'),
    colors = require('colors/safe'),
    server = require('./../server.js');

server.start({ port: 3000 }, function () {
    var protractor = spawn('protractor', [ 'protractor.conf.js' ]);

    function logToConsole(data) {
        console.log(String(data));
    }

    protractor.stdout.on('data', logToConsole);
    protractor.stderr.on('data', logToConsole);

    protractor.on('exit', function (exitCode) {
        console.log('Protractor exited with code ' + exitCode + ' (' + (exitCode === 0 ? colors.green('SUCCESS') : colors.red('FAILED')) + ')');
        if (process.env.TRAVIS_BUILD_NUMBER && process.env.SAUCE_USERNAME && process.env.SAUCE_ACCESS_KEY) {
            setSaucelabJobResult(exitCode);
        } else {
            server.close(exitCode);
        }
    });
});


The interesting part regarding Sauce Labs is the call to setSaucelabJobResult(exitCode), made when the test is run on Travis-CI with Sauce Labs credentials, which are passed in by the extra config in protractor.conf.js.

This function passes the test result from Protractor on to the Sauce Labs REST API:

function setSaucelabJobResult(exitCode) {
    var sauceLabs = new SauceLabs({ username: process.env.SAUCE_USERNAME, password: process.env.SAUCE_ACCESS_KEY });
    sauceLabs.getJobs(function (err, jobs) {
        for (var j in jobs) {
            if (jobs.hasOwnProperty(j)) {
                sauceLabs.showJob(jobs[j].id, function (err, job) {
                    var tags = job.tags;
                    if (tags.indexOf(process.env.TRAVIS_BUILD_NUMBER) > -1 && tags.indexOf('e2e') > -1) {
                        sauceLabs.updateJob(job.id, { passed : exitCode === 0 }, function(err, res) {
                            console.log('Marked job ' + job.id + ' for build #' + process.env.TRAVIS_BUILD_NUMBER + ' as ' + (exitCode === 0 ? colors.green('PASSED') : colors.red('FAILED')) + '.');
                            server.close(exitCode);
                        });
                    }
                });
            }
        }
    });
}


This was necessary because there was no launcher available at the time that would do this out-of-the-box.

Determining Solved Challenges

How does Protractor get its test result in the first place? It must be able to determine whether all challenges were solved, and it cannot access the database directly to do that. But it can access the score board in the application:

[Image: Score board screenshot]

As solved challenges are highlighted green instead of red, a simple generic function was used to assert this:

protractor.expect = {
    challengeSolved: function (context) {
        describe("(shared)", function () {

            beforeEach(function () {
                browser.get('/#/score-board');
            });

            it("challenge '" + context.challenge + "' should be solved on score board", function () {
                expect(element(by.id(context.challenge + '.solved')).getAttribute('class')).not.toMatch('ng-hide');
                expect(element(by.id(context.challenge + '.notSolved')).getAttribute('class')).toMatch('ng-hide');
            });

        });
    }
}


When watching the e2e suite run, you will see Protractor constantly visit the score board to check each challenge. This is quite interesting to watch, as the progress bar on top moves closer to 100% with every test. But be warned: If you plan on trying to hack away at Juice Shop to solve all the challenges yourself, you will find the following screencast to be quite a spoiler! ;-)

Bjoern Kimminich is responsible for IT architecture and application security at Kuehne + Nagel and, as a side job, gives lectures on Software Engineering at the private university Nordakademie. When not working on his own Juice Shop, Bjoern thinks up Code Katas and regularly speaks at conferences and meetups on topics like application security and software craftsmanship. Twitter: @bkimminich