In our recent webinar, "Selenium Bootcamp," Dave discussed how to build out a well-factored, maintainable, resilient, and parallelized suite of tests that runs locally, on a Continuous Integration system, and in the cloud.
Following the webinar, we received several follow-up questions, and Dave has agreed to respond to eight of them. Below you'll find the third Q&A. Stay tuned next Wednesday for the next question.
3. I would like to see strategies for getting tests to work in multiple browsers. For example, if my test works in Chrome but not Firefox, what do I do?
There are two things you’re likely to run into when running your tests across multiple browsers: speed of execution, and locator limitations.
Speed-of-execution issues occur when things execute too quickly (which indicates that you need to add explicit waits to your test code) or time out (which indicates that the explicit waits you have are too tight and need to be loosened). The best approach to take is an iterative one. Run your tests and find the failures. Take each failed test, adjust your code as needed, and run it against the browsers you care about. Repeat until everything's green.
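To make the explicit-wait idea concrete, here is a minimal sketch of the polling loop that sits behind an explicit wait. In real Selenium code you would reach for `WebDriverWait` with an expected condition rather than rolling your own, but the helper below (the name `wait_until` and its parameters are my own, for illustration) shows the mechanic: keep re-checking a condition until it returns something truthy, and only fail once a timeout you control has elapsed.

```python
import time


def wait_until(condition, timeout=10, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors what an explicit wait does: rather than sleeping a fixed
    amount, we re-check as often as `interval` allows, so the test proceeds
    the moment the condition is satisfied. Loosening a wait that times out
    on a slower browser is just a matter of raising `timeout`.
    """
    end = time.time() + timeout
    while time.time() < end:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")
```

In Selenium terms, the equivalent would be something like `WebDriverWait(driver, 10).until(...)` with an expected condition; the point is that the timeout lives in one adjustable place instead of being a hard-coded sleep scattered through your tests.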
In older browsers (e.g., Internet Explorer 8) you'll be limited in the locators you can use (e.g., CSS3 nth locators like nth-child, nth-of-type, etc.) and will likely run into issues with some dynamic functionality (e.g., hovers). In cases like this, it's simple enough to find an alternative set of locators that work in that browser and update your test to use them only when that browser is in play.
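One way to wire up that per-browser fallback is a small lookup keyed on the browser's name, falling back to a default when no override exists. The helper and the sample locator table below are hypothetical names of my own, not part of Selenium's API, but the pattern is the relevant part: the test asks for a logical element, and the lookup decides whether to hand back the modern CSS locator or an older-browser-safe alternative such as XPath.

```python
def locator_for(browser_name, locators):
    """Return the locator registered for `browser_name` (case-insensitive),
    falling back to the 'default' entry when no override exists.

    `locators` maps browser names to (strategy, value) tuples, so the test
    itself never needs browser-specific branching.
    """
    return locators.get(browser_name.lower(), locators["default"])


# Hypothetical locator table: nth-of-type works in modern browsers,
# while the XPath alternative covers browsers without CSS3 support.
SECOND_RESULT = {
    "default": ("css selector", "#results li:nth-of-type(2)"),
    "internet explorer": ("xpath", "//ul[@id='results']/li[2]"),
}
```

A test would then call something like `driver.find_element(*locator_for(browser, SECOND_RESULT))`, so adding support for another quirky browser means adding one row to the table rather than touching the test.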
-Dave Haeffner, April 9, 2014
Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.