To Validate or Verify?

July 30th, 2015 by Ashley Hunsberger

You say to-MAY-toe, I say to-MAH-toe.

I hear the questions daily: “Did you validate the system? Did you verify the feature?” The words validate and verify are used interchangeably, but what do they really mean? Is there a difference? In the world of software development and quality assurance, yes… and you need to do both. It is even more important for the tester to understand what each term means and entails, and how some definitions may change in a world where waterfall is out and continuous delivery is king.

What’s the difference?

The English definitions of validate and verify are pretty close; in fact, they are listed as synonyms of each other.

Validate – to recognize, establish, or illustrate the worthiness or legitimacy of (Def 2b, Merriam-Webster Dictionary Online http://www.merriam-webster.com/dictionary/validate)

Verify – to prove, show, find out, or state that (something) is true or correct (Merriam-Webster Dictionary Online http://www.merriam-webster.com/dictionary/verify)

In other words, is something right?

But how do we use them in software development? As I started thinking about this (and slowly started to get more and more irked by people using the words interchangeably), I realized that we may see a shift in how most people view verification and validation as more development teams adopt continuous delivery practices.

There are a lot of sites that indicate that verifying software asks, “Did we build the software right?”, while validating software asks, “Did we build the right software?” (an example can be found here: http://softwaretestingfundamentals.com/verification-vs-validation/). Those are great questions that definitely still stand going from a waterfall to a continuous world. However, I’ve been reading other posts that indicate verification does not actually include testing the code (examples can be found here: http://testingbasicinterviewquestions.blogspot.com/2012/01/difference-between-verification-and.html and here: http://www.softwaretestingclass.com/difference-between-verification-and-validation/), and I just cannot get behind this. I’m not saying that the authors are necessarily wrong. Some of the posts are pretty old (in software, things age in dog years), and posts like these were more than likely written for teams using a waterfall methodology, or trying to obtain certifications like CMMI or adhere to standards like IEEE that have certain things spelled out. But as we make the shift to continuous delivery, I think we need to change our perceptions of what ‘artifacts’ and verification really mean.

My opinion (perhaps mine alone) is that in continuous integration and delivery, where testing is brought up front and not held off until the end, verification may also come in the form of testing each user story. Since development is not complete until all of those tests pass, I am answering the question, “Did I build the software right?” Therefore, my testing is verification. This contradicts posts claiming that verification is done without using the software and consists purely of reviews of various artifacts. Others may disagree, but I think that testing the software is an important part of verification. How else do you know it was built correctly if you are not testing?

From my point of view, part of validation comes from the acceptance criteria the team defines upfront, diving into the question of whether or not the software is useful for the client, and what is considered acceptable. Yes, there is a lot of validation activity that still occurs mostly at the end (such as usability tests, customer or beta testing, etc.), but I think if you are using Acceptance Test-Driven Development, you are identifying early on that what you are building is what is best for the client (whether through prototyping, creating wireframes, writing acceptance tests, and so on) and answering for yourself, “Am I building the right software?” Validation can go beyond just the acceptance criteria (though hopefully the upfront discussions of the user stories, business reasons, and acceptance criteria help you avoid discovering a validation problem very late in the game). Some things you just do not know until they are built, so you may not be able to fully validate something until an audit, an end-to-end workflow, or exploratory testing occurs to really identify how usable it is.

Image Source: http://www.easterbrook.ca/steve/2010/11/the-difference-between-verification-and-validation/


Why do I need to do both?

It is important to both verify and validate your product. Just because you perform one does not mean that the other can be ignored. For example, I can verify that all tests have passed and that technically the software is built correctly.  But that does not tell me that I have actually developed something usable and meaningful for the client. In other words, what has been verified is not validated.

Does the vocabulary really matter? In all likelihood, people are still going to use the words interchangeably. But now you know the difference.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

 


Updated Jenkins CI Plugin for Sauce

July 29th, 2015 by Ken Drachnik

Because enterprises are under extreme internal and external pressure to deliver better software, faster, development teams are transitioning to new software delivery processes such as Continuous Integration (CI) and Continuous Delivery (CD). Both practices have proven their value in accelerating software production beyond traditional approaches, yet achieving continuous testing to support these processes is often the most difficult hurdle. Our updated Jenkins Plugin offers developers the combined power of the Sauce platform with Jenkins CI, the world’s leading CI tool, to further simplify and accelerate the development process.

The latest release of the Jenkins CI Plugin offers a set of rich features to help developers better utilize their CI systems to increase developer productivity so they can release better software, faster.

Here’s a summary of what’s new:

Updated UI
The latest release of the Sauce OnDemand Jenkins plugin includes an updated UI with features to make it easier to run tests faster in parallel and understand the results.

Updated Browser Selection Tool
This makes it easier to speed testing by running tests in parallel. The new UI makes it very simple to select multiple platforms and browsers at one time.

Enhanced Reporting
Reporting enhancements include test job details, so users get more detailed information about their test results on the Jenkins build page. Users can now view a detailed list of Sauce jobs in Jenkins by name, OS, browser and version, and pass/fail status, with links to the test video and log.

Latest Version of Sauce Connect
Includes the latest version of Sauce Connect v4.3.9 (Sauce Labs’ secure tunnel) that gives users the latest security enhancements when running tests on applications located behind their firewall.

Automated Support Log Generation
Users can now create a zip file that contains the Sauce Connect log and Jenkins build output that makes support and debugging tests easier.

Updated Jenkins Build Messaging
Includes log information to indicate the status of the job such as when it has started, stopped and finished processing. This gives you more detailed information about the status of the job so you can debug more effectively.

The updated plug-in is available today. Download the new plugin from the Jenkins Plugin Marketplace, and read the docs to get started.

Windows 10 Support

July 28th, 2015 by Ken Drachnik

Windows 10 is scheduled to launch tomorrow, so today Sauce Labs is pleased to announce support for manual and automated Windows 10 testing. Get your Win10 tests set up with our Automated Test Configurator.

We often release new OS / browser combinations ahead of their scheduled launch so you can begin testing to make sure your apps work on the shiny new systems ahead of time. Unfortunately, this release was a bit different as we did not get access to the bits we needed until later in the cycle.

Happy Testing!

[Webinar] Managing Continuous Delivery of Mobile Apps – for the Enterprise

July 23rd, 2015 by Bill McGee

Today, almost all organizations have mobile apps and for some, the mobile app is their only way of interacting with customers. With this increasing emphasis on mobile, the pressure to routinely update mobile apps means embracing Continuous Integration (CI) and Continuous Delivery (CD) methodologies.

Enabling CI / CD in your mobile development process means understanding the different solutions, overcoming unique challenges and ensuring the right ownership of the processes. In this webinar, join Harshal Vora from InfoStretch and Abhijit Pendyal from Sauce Labs to learn the steps required to enable Continuous Delivery of Mobile Application Platforms.

This webinar will cover:

  • Value of CI/CD in Mobile Development
  • CI/CD Architecture for Mobile Application Platforms
  • CI/CD Case Study – Requirements, Challenges and End Results
  • Demo – Jenkins / Code Update / Build Mobile App / Run automated tests using Sauce Labs

Join us for this presentation next Wednesday, July 29 at 11am PDT/2pm EDT. There will be a Q&A with Harshal and Abhijit afterwards.

Click HERE to register today.

Want to learn more about making Continuous Integration (CI) a part of your mobile development process? Download this free white paper, “Why Continuous Integration Should be Part of Your Mobile Development Process”.

Using QA to Enhance Communication

July 21st, 2015 by Ashley Hunsberger

Have you ever worked on a project and found yourself constantly shaking your head? I can say that 99% of the time that I experienced frustration, it was largely due to communication issues within a team. I’ve personally been on project teams and wondered if anyone there had ever taken a basic communications course and learned concepts like active listening, empathy, and being clear and concise. A team that can communicate will find success, but what about those who are not interacting well? Who can help your team get back on track? Believe it or not, your answer is the tester.

The Communication Breakdown

What exactly is keeping your team from communicating effectively? The answer may not be so obvious. In my experience, I’ve seen a few key contributors that include, but are not limited to:

The team does not own quality
Of course life would be easier if we could say “Well, I did my job! I’m done! Time for the tester!” But how does that foster communication in a team if you just throw something over the proverbial fence? If you set out only to do your job and not to understand anyone else’s, are you really being part of a team? You know the saying — “There is no I in TEAM.” We usually hear that as kids when we first play team sports, but it goes for software development projects, too!

Not seeing the big picture
In an Agile world, it is easy to start looking only at the small, granular user story level. But when you start to lose the big picture, the ability to communicate becomes more and more difficult. There have been projects I’ve worked on where I just didn’t understand the business reasons (and I wasn’t the only one). Can you guess how many of those projects succeeded?

Not knowing exactly what to build
So many times, I’ve seen (and worked on) projects where a designer says “build this.” The engineer goes off and starts building, and the tester starts writing tests — presumably for the same feature. But if an engineer has a clarifying question and does not include the tester when asking the designer, or vice versa, then what? It’s an easy recipe for getting off track when you work in a silo.

Enter the Tester

Your tester can help alleviate many communication gaps. It just requires a different way of thinking.  Take each of the breakdown factors I’ve mentioned, and let’s look at how this QA team member is uniquely positioned to enhance your team’s communication and collaboration.

Everyone owns quality! Yes, you too, engineers…
Rather than creating a team that thinks QA alone is responsible for the quality of a product, what if everyone had skin in the game? Of course, part of that is understanding each other’s roles and what the value of testing is. A good tester will work with your team to decide what types of testing need to occur, who can take care of them, and free up those people to take on testing activities. One way to understand this is to take a look at the Agile Testing Quadrant.

[Image: the Agile Testing Quadrants]

If everyone is concerned about quality, and everyone starts talking about quality, and they aren’t just checking out as soon as their task is ‘complete’, you’re already on the path to improved communication.  You take it to a whole new level when you get people INVOLVED in quality.

The 30,000 foot view
I mentioned before that it’s very easy to get caught in the weeds looking at an individual user story. A tester should, however, know where that user story falls in the grand scheme of things, understanding the business value a feature brings as well as the impacts it may have on other features. On teams I have worked with in the past, we invested time in making sure any new QA team member learned the product, not just the one area they were working on (and not just so they could be the tester with the product knowledge). Product knowledge benefits anyone on the team, but your tester can help drive the conversations needed to understand the user story, take a step back to see where it all fits in, and guide testing in a direction perhaps not thought of before.

Everyone on the same page
Let’s imagine a world where we all go about building and testing the same product, with the same vision. Is that possible when you work isolated from your team? Let’s look at a central testing practice for Agile teams called Specification by Example (also known as Acceptance Test-Driven Development, or ATDD). The key idea here is that your tester, engineer, and designer all work together BEFORE anything is built to understand the feature, write the tests (your acceptance criteria), and know exactly how the feature will be tested. It is done when all tests pass. Manual or automated, this practice gets everyone on the same page. Your tester can help drive these conversations. I have often gone in with a list of tests, but typically just to elicit more questions and answers from the engineers and designers, get clarity, and walk away with more (or changed) tests, ensuring that we all understand the acceptance criteria now that we have written the tests together.
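As a small illustration of how this plays out in code (the feature and its acceptance criterion here are invented for this example, not taken from any real product), the tests are agreed on before anything is built, and the story is done when they pass:

```python
# Specification-by-example sketch: the acceptance tests below are written
# jointly by tester, engineer, and designer BEFORE the feature is built.
# The password rule is a hypothetical example of an agreed criterion.

def is_valid_password(password):
    # Implementation written afterwards, until the agreed tests pass:
    # "at least 8 characters, containing at least one digit".
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

# Acceptance criteria, agreed up front by the whole team:
assert is_valid_password('sauce1234')         # 8+ chars with a digit: accepted
assert not is_valid_password('short1')        # too short: rejected
assert not is_valid_password('nodigitshere')  # no digit: rejected
```

Whether the checks end up manual or automated, the point is that everyone saw and agreed to them before the first line of implementation was written.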

What are you waiting for?
I hope these concepts get you going in the right direction. Use your tester! QA is not just there to execute a test, but is uniquely structured to help foster communication and collaboration on your team, and make sure everyone is invested in quality.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

 

Recap: What DevOps Is – and Why You Should Care [Webinar]

July 16th, 2015 by Bill McGee

Thanks to everyone who joined us for our recent webinar, “What DevOps Is – and Why You Should Care“, featuring DevOps Analyst Chris Riley. In his presentation, Chris discussed the meaning and history of DevOps and gave his perspective on the movement, and his ideas about its future. He also shared the knowledge that he has gathered from tools vendors and practitioners, as well as:

  • The difference between the practice of DevOps and the movement
  • What the future of DevOps holds
  • The intersection of DevOps and QA

Missed the presentation, want to hear it again, or share with a colleague?

Access the recording HERE and view the slides below.

Want to read more about DevOps and Continuous Integration? Download this free GigaOm white paper by Chris Riley.

Don’t Let Them Leapfrog You

July 15th, 2015 by Ashley Hunsberger

The entry barrier for nearly all markets has been dramatically reduced, as continuous integration and delivery allows very small companies to leapfrog massive institutions. It’s no longer just about having that extra feature to get clients, it’s also about how you deliver. Let’s see why companies that have adopted continuous delivery are leaving their competitors in the dust — and how you can, too.

A Tale of Two Companies

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way.” 
Charles Dickens: A Tale Of Two Cities (1859)

You may ask what on Earth one of my favorite novels has to do with testing and continuous delivery, but stay with me for a minute (or five). No, I won’t go into the French Revolution (though moving to continuous integration may feel like you are sparking a revolution in your development team). Let’s take a look at two businesses.

Meet Company A. They’ve been around awhile. In fact, they were one of the first companies in their market. As with most older companies, the company began as a waterfall shop and slowly transitioned to Agile, but is still showing signs of waterfall (something I like to call Scrummerfall).  They still get specs, they develop, and they test,  albeit in defined sprint time frames. Tests may be written while development is going on, but are performed after development hands off to QA.

Now, meet Company B, Company A’s competitor. Small and lean, the team has adopted continuous delivery, and acceptance test driven development (ATDD). They write tests first, and develop until all tests pass. They build quality in, rather than check for it later.

And the Winner Is…

Can you guess which company is chipping away at the market? The clear winner here is Company B.  Embracing DevOps has allowed Company B to realize several benefits, including (but not limited to):

Faster time to market – Get a desired product into customers’ hands faster!

Better quality product – Build the quality in! Find the bugs before QA even gets the product, or even more importantly, before customers see it.

Reduced cost of development – The cost to find bugs early (write tests first, build until all tests pass) is MUCH lower than the cost of finding them later in traditional testing cycles. And if a bug is found by the client? The cost skyrockets.  Build the quality in and reduce costs down the line.

Higher customer satisfaction – Continuous integration (CI) yields faster delivery, which means faster feedback. Issues are fixed faster, which means products are in clients’ hands faster. The customer has trust in your company and product.


Source: IBM http://www.ibm.com/developerworks/devops/

A Cultural Shift

Getting your team on the road to continuous delivery is not just a process change. It is a huge cultural shift. It is a completely different way of thinking, and it does not lie only within the development team. How will Sales present the company? How will Marketing develop materials? How will Support handle the change in release cadence? How will product managers adapt to learn from customers? It’s a ripple effect that will impact the entire company, for the better.

The longer you take to embrace DevOps (and continuous delivery), the more your competition has a chance to move in. Take your company to the next level. Be the change!

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

An Efficient Release Made is a Penny Earned

July 9th, 2015 by Greg Sypolt

Introduction

Everyone wants to save money and deliver faster, right? Even so, you would be surprised how many software development organizations still do not practice continuous integration. They still depend heavily on expensive manual testing. Is your software development organization ready to change, to deliver faster with built-in quality and fewer meetings to boost productivity?

Without analysis, it is hard to realize the ROI (return on investment) for CI (continuous integration) and automated tests. Like any type of change, it requires analysis of investment cost, the timeline, and ROI before it’s adopted into your development organization.

Building In Quality

There is one common trend in software development organizations: ‘building quality in’ at the beginning rather than considering quality at the end of the process.

EVERYONE owns quality — yes, even the product designers and developers. Everyone has visibility into all types of tests upfront (unit, integration, e2e, and manual).

Why should engineers and designers be involved? They help us understand what is possible at the unit, business, and front-end layers. By having everyone involved, the team can ALL agree on the acceptance criteria.

By including the developers and designers in your test strategy, everyone owns quality. You are there to drive that quality — not just be a tester. Engineers are testers. Designers are testers. Everyone can contribute!

Building quality in will prevent bugs rather than lead to the discovery of bugs at the end of the development cycle. The effect of building quality in therefore raises your ROI, and here’s how:

  • Built-in quality reduces the number of bugs, since you are now preventing bugs, not just finding them.
  • Reduced manual testing, because testing has become automated and, most importantly, you’ve developed the right tests.
  • Quicker, more confident releases.
  • A stabilized product that may give product management a chance to breathe and redefine the product strategy.
  • More fun for developers, who get to work on a system that behaves predictably and doesn’t break every time someone touches it (and if it does break, you know immediately, not a week down the road).

Building quality in can make your system behave predictably. Slow adoption of continuous integration and automated tests makes it hard to realize ROI in this process. The most common (and hidden) cost in manual testing is finding a bug. So how do these effects translate into a dollar value?


IBM Research presented at AccessU Summit 2015

Here are two examples of how ‘building quality in’ with CI and automated tests improves ROI, even if the savings are not immediately visible:

A developer commits bad code that breaks the build, sometimes crippling manual testing for hours or days. With continuous integration and automated tests, a broken build is caught immediately instead of crippling your QA team.

Manual testing takes several days to complete, along with reporting the test results. It is repetitive (and kind of boring) to perform the same tests multiple times, and human error creeps in. Automated tests are repeatable and provide results within minutes, instead of days or weeks.

Wasting Time in Meetings

A simple way to boost productivity is to reduce the number of meetings, and to stop wasting time in the meetings you do hold, by creating simple guidelines.

Let’s acknowledge that meetings do have a place in our development lives. They are great tools for planning, sharing information, and quickly getting everyone on the same page. Then there are those recurring meetings, often scheduled for the sake of having a meeting. At those times, it doesn’t hurt to question why you need to be present.

Example:
You have five employees meeting in a conference room or virtually. Let’s say the average annual salary is $55K and the company’s recurring meeting is once per week for one hour. How much does this meeting cost your company? Approximately $6,500 per year.
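The arithmetic behind that estimate can be sketched as follows. This is a back-of-the-envelope calculation assuming a full 2,080-hour work year (52 weeks × 40 hours); factoring in vacation and holidays brings the figure down toward the article's rounder $6,500:

```python
# Back-of-the-envelope cost of a recurring weekly one-hour meeting,
# using the figures from the example above.
attendees = 5
avg_annual_salary = 55_000
hourly_rate = avg_annual_salary / 2080   # ~$26.44/hour, assuming 2,080 hrs/yr
meetings_per_year = 52                   # one hour-long meeting per week

annual_cost = attendees * hourly_rate * meetings_per_year
print(f"${annual_cost:,.0f} per year")   # prints: $6,875 per year
```

Multiply that across every recurring meeting on the calendar and the hidden cost becomes hard to ignore.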

Time spent together is not always time spent getting work done. It is important to create simple guidelines to eliminate wasteful meetings and save the company money. Here are some suggestions to help make meetings more productive:

  • Spend more time on the agenda and allow the attendees time to review.
  • Invite the right people — Don’t waste people’s time.
  • Always start the meeting on time, regardless of who may be late.
  • Avoid one-hour meetings unless they are completely necessary. You can even opt to switch that weekly one-hour meeting to a 10-15 minute stand-up meeting.
  • End the meeting at the agreed-upon time, even if the agenda is not finished.
  • Table any discussion that is not relevant to the agenda.

Conclusion

By following some of these suggested guidelines, reducing the number of meetings, and working smarter, your company will start to see real ROI. An efficient release made is a penny earned.

Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, coaching team members how to write great automation scripts, and helping testing community become better testers. Greg has spent most of his career working on software quality – concentrating on web browsers, APIs, and mobile. For the past 5 years he has focused on the creation and deployment of automated test strategies, frameworks, tools, and platforms.

[Webinar] What DevOps Is – and Why You Should Care

July 8th, 2015 by Bill McGee

DevOps has become the newest buzzword. You’ve probably seen or heard the term DevOps once, twice, a thousand times. But even the biggest supporters of DevOps will admit that the concept is creating as much noise and confusion as converts.

The practice of DevOps is not new, yet in the last two years it has seemingly dominated chatter within software development. But what does DevOps really mean? And what is the impact of DevOps on QA teams, if any at all?

Join us as DevOps Analyst Chris Riley shares the meaning and history of DevOps, his perspective on the movement, and his ideas about its future. He will share the knowledge that he has gathered from tools vendors and practitioners, all to help you navigate the sea of DevOps conversations to maximize the movement to your advantage.

This webinar will cover:

  • The difference between the practice of DevOps and the movement
  • What the future of DevOps holds
  • The intersection of DevOps and QA

Join us for this presentation next Tuesday, July 14 at 11am PDT/2pm EDT. There will be a Q&A with Chris afterwards.

Click HERE to register today.

Want to read more about DevOps and Continuous Integration? Download this free GigaOm white paper.

Appium + Sauce Labs Bootcamp: Chapter 3, Working with Hybrid Apps and Mobile Web

July 6th, 2015 by Isaac Murchie

Mobile applications can be purely native, or web applications running in mobile browsers, or a hybrid of the two, with a web application running in a particular view or set of views within a native application. Appium is capable of automating all three types of applications, by providing different “contexts” in which commands will be interpreted.

Contexts

A context specifies how the server interprets commands, and which commands are available to the user. Appium currently supports two contexts: native and webview. Both of these are handled by different parts of the system, and may even proxy commands to another framework (such as webviews on Android, which are actually served by a managed ChromeDriver instance). It is important to know what context you are in, in order to know how you can automate an application.

Native contexts

Native contexts refer to native applications, and to those parts of hybrid apps that are running native views. Commands sent to Appium in the native context execute against the device vendor’s automation API, giving access to views and elements through name, accessibility id, etc. In this context you can also use commands that interact directly with the device, for operations such as changing the wifi connection or setting the location. These very powerful operations are not available within the context of a webview.

In addition to native and hybrid applications, the native context can be accessed in a mobile web app, in order to have some of the methods only available there. In this case it is important to understand that the commands are not running against the web application running in the browser, but rather are interacting with the device and the browser itself.

Webviews

There are two types of webviews. The first is the bulk of a mobile web application. Indeed, all automation of a mobile web application is done within a webview context, though one can switch into the native context in order to take advantage of some of Appium’s features for automating the device and handling the application life cycle. The second type of webview is that part of a hybrid application that is inside a UIAWebView (for iOS) or android.webkit.WebView (for Android). In the webview context the commands that can be used are the standard WebDriver commands, giving access to elements through css selectors and other web-specific locators such as link text.

Mobile web is essentially a specialized version of a hybrid application. What would be the native portion of a hybrid application is the browser itself! As you automate your application you can step out into the native context in order to interact with the browser or with the device itself. But when you begin automating a mobile web application, Appium automatically takes you into the webview context. If you have a hybrid application that begins in a webview, you can get the same behavior by setting the autoWebview desired capability to true, which enters the initial webview automatically. Otherwise the automation script will need to switch into the webview context before interacting with any elements.
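As a sketch, the desired capabilities for such a hybrid app might look like this. The app path is a placeholder for your own build, and the session-creation call is shown in a comment because it requires a running Appium server:

```python
# Hypothetical desired capabilities for a hybrid Android app that
# starts in a webview; the app path is a placeholder.
caps = {
    'platformName': 'Android',
    'app': '/path/to/hybrid-app.apk',
    'autoWebview': True,  # enter the initial webview automatically
}

# With the Appium Python client, this would start the session already
# in the webview context, with no explicit switch needed:
#   from appium import webdriver
#   driver = webdriver.Remote('http://localhost:4723/wd/hub', caps)
```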

Navigation

To move between contexts there is a method that takes the string name of the context to which you want to switch. The native context will have the name “NATIVE_APP”, while the available webview contexts will have a name like WEBVIEW_1 (for iOS) or WEBVIEW_io.appium.android.apis (for Android, where io.appium.android.apis is the running activity). The generic WEBVIEW will choose the first available webview. (Switching is not necessary when automating a mobile web browser.)

# switch to first available webview
driver.switch_to.context("WEBVIEW_1")

Once in the webview context Selenium commands to interact with a web application can be used.

driver.find_element_by_css_selector('.some_class')
driver.find_element_by_partial_link_text('Home Page')

The source at this point will be the HTML of the page loaded into the webview, or into the mobile web browser.

To return to the native context (which is not necessary for automating mobile web applications), you use the same command as used to get into the webview, but asking to switch to the native context.

# switch back to native context
driver.switch_to.context("NATIVE_APP")

Now, in the native context, if you get the source you will get an XML document describing all the elements in the view itself, not the HTML, even if HTML is being rendered in that view!

Querying contexts

It is possible to get a list of the available contexts, and choose the one to which to switch from those. This has the added bonus of making your tests capable of handling changes in context naming, and of being the same across platforms. There will always be one (and only one) native context, named NATIVE_APP, and zero or more webview contexts, all of which will start with WEBVIEW.

webview = driver.contexts.last
driver.switch_to.context(webview)

Finally, you can retrieve the current context in order to make sure you are in the correct place, and to programmatically switch contexts at the correct time.

current_context = driver.context

# or
current_context = driver.current_context
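Putting these pieces together, a small helper can make the switch-and-restore pattern reusable. This is a sketch that assumes the standard Appium Python client API shown above (driver.contexts, driver.current_context, and driver.switch_to.context):

```python
from contextlib import contextmanager

@contextmanager
def in_webview(driver):
    """Switch into the first available webview context, then restore
    whatever context was active before, even if the body raises."""
    previous = driver.current_context
    webviews = [c for c in driver.contexts if c.startswith('WEBVIEW')]
    if not webviews:
        raise RuntimeError('no webview context available')
    driver.switch_to.context(webviews[0])
    try:
        yield
    finally:
        driver.switch_to.context(previous)

# usage:
# with in_webview(driver):
#     driver.find_element_by_css_selector('.some_class')
```

Restoring the previous context in a finally block means a failing WebDriver command inside the block cannot leave later native-context commands running against the wrong context.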

Multi-tabbed web browsers

If your mobile environment supports tabbed browsing, as mobile Chrome does on Android, the tabs are accessible through the window commands in a webview context, just as in desktop browser automation!

# enter into the webview
webview = driver.contexts.last
driver.switch_to.context(webview)

# cycle through the tabs
for tab in driver.window_handles:
    driver.switch_to.window(tab)

# return to native context
driver.switch_to.context("NATIVE_APP")

Conclusion

The main thing about switching from a native context into a webview is that subsequent commands get proxied to a Selenium WebDriver session automating the browser that backs the webview. This makes it possible to run any WebDriver commands you would like! For instance, in a native context you cannot find an element using a CSS selector, but in a webview context that is perfectly reasonable. The underlying source for the app at that point is the HTML of the web page being displayed!

But Appium also has a number of methods that are not available in normal WebDriver. In order to take advantage of these methods one must be in a native context, so that Appium itself handles the request rather than proxying it.