“It Worked on My Machine” – The Communication Trap

July 1st, 2015 by Ashley Hunsberger

“I don’t see it on my machine!” said every developer ever. Every QA professional I have talked to in my career has heard this at least once.

But why?

Have we asked what’s in a bug?

The answer can either be what gets your team on the road to efficiency, or it can become a kink in the delivery hose. Let’s discuss how your QA can help the team deliver faster by providing a consistent language to keep everyone on target.

Don’t Let The Bad Bugs Bite…

Over the last decade, I have seen issues with almost no content in them (seriously, some just declared something to the tune of “This feature is… not working”). Then there are tickets that are the gold standard, with all the information you could possibly want (and some with more than you need, which turn out to be a few bugs in themselves).

But what happens when you don’t have a common way to report a ticket, and why is it important?

I recently came across an issue that seemed to have some steps to reproduce, but the setup was not included. Try as I might, I could not replicate the bug. The only way I could come close to the reported result did not match the steps provided, and I could only guess that the setup I created was what the reporter had done. I will let you guess how long this issue took. Hint: it wasn’t a few hours.

Or perhaps you have an offshore team. I’ve seen many, many instances where someone reports a bug that simply doesn’t have enough information in it. If the engineer cannot figure out exactly what the issue is and has to send it back to the reporter, the engineer waits another night while the person on the other side of the world hopefully notices that the ticket is back in his or her queue for more details. That is another full day the bug exists, delaying when the root cause can be identified and the issue fixed.

Depending on the makeup of your team, and whether you are in an automated or manual setup, you also need to consider how the issue will be verified. The person testing the fix (or writing the automated test to ensure the issue does not recur) may not be the one who reported it. (Again, more time is spent figuring out how to test whether the fix is correct.)

The bottom line? The back and forth that occurs from a poorly reported bug is costly in terms of time and resources.

Cut The Chit Chat

Having a uniform language and template will help reduce uncertainty across the board, and reduce the time a bug spends unresolved. But what should be included in a bug report to cut out this back and forth and keep the team on track? There are several other things you may want to consider adding, but these are some of the top things I like to see from a tester (a sketch of how such fields might be checked follows the list):

  • Summary/Title: This should be succinct yet descriptive. I try to make these sound almost like a user story: <user> <can/cannot> <do x action> in <y feature>. When I sit in a triage meeting, can I tell what the issue is just by reading the summary?
  • Environment: Every now and then we come across bugs that are very specific to the OS, database type, browser, etc. Without this information, it’s all too easy to say “Can’t reproduce,” only to have a client find the bug in the field.
  • Build: Hopefully you are testing on the latest build, but if for some reason you have servers that are updated at different rates than others, you need to pinpoint exactly when the bug was found.
  • Devices: If you’re doing any type of mobile testing, what type of device were you using? What version? If you found a bug on the web app, do you see it in the mobile app too? Which one, Android or iOS?
  • Priority: Priorities are relatively standard across the field — Critical, High, Medium, and Low. Have criteria defined up front so everyone is on the same page as to what constitutes each selection.
  • Steps to reproduce: Not just “When I did this, it broke.” Really break it down, from login and data setup to every click you make.
  • Expected Result vs. Actual Result: What were you expecting, and why? What happened instead?
  • Requirements and Wireframes: These point to why testing occurred, and let a bug be linked back to the originating artifact. Hopefully everyone is on the same page up front, before development begins, but sometimes things slip through, and an engineer may have a different understanding of a feature than the tester does. Being able to point back to why you think an element is a bug is helpful, and gets you all on the same page.
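
To make the template actionable, here is a minimal sketch of how the required fields could be checked programmatically before a ticket is filed. The field names and the validateTicket helper are hypothetical, not tied to any particular tracker:

// Hypothetical ticket validator; field names are illustrative only.
var REQUIRED_FIELDS = ['summary', 'environment', 'build', 'priority',
                       'stepsToReproduce', 'expectedResult', 'actualResult'];

function validateTicket(ticket) {
    var missing = REQUIRED_FIELDS.filter(function (field) {
        return !ticket[field] || String(ticket[field]).trim() === '';
    });
    if (missing.length > 0) {
        throw new Error('Ticket is missing required fields: ' + missing.join(', '));
    }
    return ticket;
}

// Example: a ticket that follows the template above
validateTicket({
    summary: 'Student cannot submit assignment in Grade Center',
    environment: 'Windows 8.1 / IE 11 / SQL Server',
    build: '9.1.1510',
    priority: 'High',
    stepsToReproduce: '1. Log in as a student 2. Open the assignment ...',
    expectedResult: 'Assignment is submitted and a confirmation is shown',
    actualResult: 'Page returns a 500 error'
});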

Of course, there are people other than your traditional testers writing bugs, and it is essential to use your QA to drive conformity. Perhaps your UX team is performing audits, or you have bug bashes where people from other departments are invited to test the system and find bugs, or you have someone new to the team that simply needs training. Having a template will ensure clarity and reduce inefficiencies, regardless of who enters the ticket.

Utilize QA to promote consistency, get bugs out of purgatory, and drive faster delivery.


Guest post: Proving that an application is as broken as intended

June 25th, 2015 by Björn Kimminich
Typically you want to use end-to-end (e2e) tests to prove that everything works as intended in a realistic environment. In the Juice Shop application, that idea is turned on its head: here, the main purpose of the e2e test suite is to prove that the application is as broken as intended!

Juice Shop: Broken beyond hope – but on purpose!

“WTF?” you might ask, and rightfully so. Juice Shop is a special kind of application. It is an intentionally insecure JavaScript web application designed to be used in security trainings, classes, workshops, and awareness demos. It contains over 25 vulnerabilities that an aspiring hacker can exploit in order to fulfill challenges that are tracked on a score board.

The job of the e2e test suite is twofold:

  1. It ensures that the overall functionality of the application (e.g., logging in, placing products in the basket, submitting an order) is working. This is the typical use case for e2e tests mentioned above.
  2. It performs attacks on the application that should solve all the existing challenges, including SQL injections, Cross-Site Scripting (XSS) attacks, business logic exploits, and many more (a sketch follows below).
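
To give a flavor of the second category, here is a sketch of what such an attack spec could look like in Protractor (the element IDs are illustrative rather than the actual Juice Shop selectors; the score board assertion that completes such a test is shown later in this article):

describe('Challenge "Login Admin"', function () {

    beforeEach(function () {
        browser.get('/#/login');
    });

    it('should log in as admin via SQL injection in the email field', function () {
        // Classic tautology-based SQL injection payload
        element(by.id('email')).sendKeys("' or 1=1--");
        element(by.id('password')).sendKeys('anything');
        element(by.id('loginButton')).click();
    });
});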


When does Juice Shop pass its e2e test suite? When it is working fine for the average nice user and all challenges are solvable, so that an attacker can get to 100% on the score board!


Application Architecture

Juice Shop is written entirely in JavaScript, with a single-page application frontend (AngularJS with Bootstrap) and a RESTful backend using Express on top of Node.js.


The underlying database is a simple file-based SQLite, with Sequelize as an OR mapper and sequelize-restful to generate the simple (but not necessarily secure) parts of the API dynamically.
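
As a rough sketch of how those pieces fit together (following the sequelize-restful README; the connection details here are illustrative), the wiring into Express looks like this:

var express = require('express'),
    Sequelize = require('sequelize'),
    restful = require('sequelize-restful');

var app = express();
var sequelize = new Sequelize('database', 'username', 'password', {
    dialect: 'sqlite',
    storage: 'data/juiceshop.sqlite' // illustrative file path
});

// Dynamically exposes generic REST endpoints (e.g. GET/POST /api/<model>)
// for all defined models - convenient, but not hardened by default.
app.use(restful(sequelize, { endpoint: '/api' }));

app.listen(3000);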

Test Stages

There are three different types of tests to make sure Juice Shop is not released in an unintentionally broken state:

  1. Unit tests make sure that the frontend services and controllers work as they should. The AngularJS services/controllers are tested with Karma and Jasmine.
  2. API tests verify that the RESTful backend behaves properly when running as a real server. These tests are done with Karma and frisby.js orchestrating the API calls.
  3. The e2e test suite performs typical use cases and all kinds of attacks via browser-automation using Protractor and Jasmine.


If all stages pass, and the application survives a quick monkey test by yours truly, it will be released on GitHub and SourceForge.

Why Sauce Labs?

There are two reasons to run Juice Shop tests on Sauce Labs:

  1. Seeing the front-end unit tests pass on a laptop already gives a good feeling for an upcoming release. But locally they run only on PhantomJS, not in a real browser. Seeing them pass in various browsers increases confidence in the release.
  2. The e2e tests must be executed before shipping a release. To make sure they are not skipped due to laziness or overconfidence (“Oh, it’s such a small fix, what could it possibly break?” – sound familiar?), the e2e suite must be integrated into the CI pipeline.


Having laid out the context, the rest of the article explains how both of these goals were achieved by integrating with Sauce Labs.

Execution via Travis-CI

Juice Shop builds on Travis-CI, which Sauce Labs integrates with nicely out of the box. The following snippet from the .travis.yml shows the necessary configuration and the two commands being called to execute unit and e2e tests:

addons:
  sauce_connect: true              # open a Sauce Connect tunnel for each build
after_success:
- karma start karma.conf-ci.js     # frontend unit tests
- node test/e2eTests.js            # end-to-end tests
env:
  global:
  - secure: <your encrypted SAUCE_USERNAME>
  - secure: <your encrypted SAUCE_ACCESS_KEY>

Frontend Unit Tests

The karma.conf-ci.js file contains the configuration for the frontend unit tests. Juice Shop uses six different OS/browser configurations:

var customLaunchers = {
    sl_chrome: {
        base: 'SauceLabs',
        browserName: 'chrome',
        platform : 'Linux',
        version: '37'
    },
    sl_firefox: {
        base: 'SauceLabs',
        browserName: 'firefox',
        platform: 'Linux',
        version: '33'
    },
    sl_ie_11: {
        base: 'SauceLabs',
        browserName: 'internet explorer',
        platform: 'Windows 8.1',
        version: '11'
    },
    sl_ie_10: {
        base: 'SauceLabs',
        browserName: 'internet explorer',
        platform: 'Windows 8',
        version: '10'
    },
    sl_ie_9: {
        base: 'SauceLabs',
        browserName: 'internet explorer',
        platform: 'Windows 7',
        version: '9'
    },
    sl_safari: {
        base: 'SauceLabs',
        browserName: 'safari',
        platform: 'OS X 10.9',
        version: '7'
    }
};
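
These launchers are then handed to Karma in the same config file; the following fragment is a sketch along the lines of the karma-sauce-launcher README:

    customLaunchers: customLaunchers,
    // Run every unit test against each of the six OS/browser combinations
    browsers: Object.keys(customLaunchers)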


In order to associate the test executions with the Travis-CI build that triggered them, some extra configuration is necessary:

    sauceLabs: {
        testName: 'Juice-Shop Unit Tests (Karma)',
        username: process.env.SAUCE_USERNAME,
        accessKey: process.env.SAUCE_ACCESS_KEY,
        connectOptions: {
            tunnelIdentifier: process.env.TRAVIS_JOB_NUMBER,
            port: 4446
        },
        build: process.env.TRAVIS_BUILD_NUMBER,
        tags: [process.env.TRAVIS_BRANCH, process.env.TRAVIS_BUILD_NUMBER, 'unit'],
        recordScreenshots: false
    },
    reporters: ['dots', 'saucelabs']


Thanks to the existing karma-sauce-launcher module, the tests are executed and their results are reported back to Sauce Labs out of the box. Nice. The e2e suite was a tougher nut to crack.

End-to-end Tests

For the Protractor e2e tests there are no separate configuration files for local and CI use, just one protractor.conf.js with some extra settings when running on Travis-CI to pass the necessary data to Sauce Labs:

if (process.env.TRAVIS_BUILD_NUMBER) {
    exports.config.seleniumAddress = 'http://localhost:4445/wd/hub';
    exports.config.capabilities = {
        'name': 'Juice-Shop e2e Tests (Protractor)',
        'browserName': 'chrome',
        'platform': 'Windows 7',
        'screen-resolution': '1920x1200',
        'username': process.env.SAUCE_USERNAME,
        'accessKey': process.env.SAUCE_ACCESS_KEY,
        'tunnel-identifier': process.env.TRAVIS_JOB_NUMBER,
        'build': process.env.TRAVIS_BUILD_NUMBER,
        'tags': [process.env.TRAVIS_BRANCH, process.env.TRAVIS_BUILD_NUMBER, 'e2e']
    };
}


The e2e tests are launched via e2eTests.js which spawns a separate process for Protractor after launching the Juice Shop server:

var spawn = require('win-spawn'),
    SauceLabs = require('saucelabs'),
    colors = require('colors/safe'),
    server = require('./../server.js');

server.start({ port: 3000 }, function () {
    // Run Protractor against the now-running Juice Shop server
    var protractor = spawn('protractor', [ 'protractor.conf.js' ]);

    function logToConsole(data) {
        console.log(String(data));
    }

    protractor.stdout.on('data', logToConsole);
    protractor.stderr.on('data', logToConsole);

    protractor.on('exit', function (exitCode) {
        console.log('Protractor exited with code ' + exitCode + ' (' + (exitCode === 0 ? colors.green('SUCCESS') : colors.red('FAILED')) + ')');
        // On CI, report the verdict to Sauce Labs before shutting down
        if (process.env.TRAVIS_BUILD_NUMBER && process.env.SAUCE_USERNAME && process.env.SAUCE_ACCESS_KEY) {
            setSaucelabJobResult(exitCode);
        } else {
            server.close(exitCode);
        }
    });
});


The interesting part regarding Sauce Labs is the call to setSaucelabJobResult(exitCode), which happens when the tests run on Travis-CI with Sauce Labs credentials that are passed in by the extra config part in protractor.conf.js.

This function passes the test result from Protractor on to the Sauce Labs REST API:

function setSaucelabJobResult(exitCode) {
    var sauceLabs = new SauceLabs({ username: process.env.SAUCE_USERNAME, password: process.env.SAUCE_ACCESS_KEY });
    sauceLabs.getJobs(function (err, jobs) {
        for (var j in jobs) {
            if (jobs.hasOwnProperty(j)) {
                sauceLabs.showJob(jobs[j].id, function (err, job) {
                    var tags = job.tags;
                    // Find the job belonging to this Travis build that is tagged as e2e...
                    if (tags.indexOf(process.env.TRAVIS_BUILD_NUMBER) > -1 && tags.indexOf('e2e') > -1) {
                        // ...and mark it passed or failed based on Protractor's exit code.
                        sauceLabs.updateJob(job.id, { passed : exitCode === 0 }, function(err, res) {
                            console.log('Marked job ' + job.id + ' for build #' + process.env.TRAVIS_BUILD_NUMBER + ' as ' + (exitCode === 0 ? colors.green('PASSED') : colors.red('FAILED')) + '.');
                            server.close(exitCode);
                        });
                    }
                });
            }
        }
    });
}


This was necessary because there was no launcher available at the time that would do this out of the box.

Determining Solved Challenges

How does Protractor get its test result in the first place? It must be able to determine whether all challenges were solved, and it cannot access the database directly to do so. But it can access the score board in the application:

[Screenshot: the Juice Shop score board]

As solved challenges are highlighted green instead of red, a simple generic function is used to assert this:

protractor.expect = {
    challengeSolved: function (context) {
        describe("(shared)", function () {

            beforeEach(function () {
                browser.get('/#/score-board');
            });

            it("challenge '" + context.challenge + "' should be solved on score board", function () {
                expect(element(by.id(context.challenge + '.solved')).getAttribute('class')).not.toMatch('ng-hide');
                expect(element(by.id(context.challenge + '.notSolved')).getAttribute('class')).toMatch('ng-hide');
            });

        });
    }
}
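
An individual attack spec can then delegate its final assertion to this helper. The challenge name and URL below are illustrative, not actual Juice Shop identifiers:

describe('Challenge "XSS"', function () {

    it('should reflect the payload in the search results', function () {
        browser.get('/#/search?q=<script>alert("XSS")</script>');
    });

    protractor.expect.challengeSolved({ challenge: 'XSS' });
});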


When watching the e2e suite run, you will see Protractor constantly visit the score board to check each challenge. This is quite interesting to watch, as the progress bar on top moves closer to 100% with every test. But be warned: if you plan on trying to hack away on Juice Shop to solve all the challenges yourself, you will find the following screencast to be quite a spoiler! ;-)

Bjoern Kimminich is responsible for IT architecture and application security at Kuehne + Nagel, and as a side job gives lectures on Software Engineering at the private university Nordakademie. When not working on his own Juice Shop, Bjoern thinks up Code Katas and regularly speaks at conferences and meetups on topics like application security and software craftsmanship. Twitter: @bkimminich

Immutable Infrastructure

June 23rd, 2015 by Greg Sypolt

Server hugging is a disease

In the dark days before immutable servers, people clung to their servers and treated them as untouchable gold. These people still exist, and they hang onto their servers instead of moving into the Cloud. They are server huggers. What does the term “server hugger” mean? It’s the desire to “touch” servers, to “reboot” them on a regular basis, to keep upgrading really old software and hardware, and so on. Before anyone can help cure the problem, server huggers have to admit they have it. They struggle to let go and embrace the Cloud. The thought of trashing servers and treating components as disposable is absolutely frightening to server huggers, especially the (unfounded) fear of losing control. Server virtualization and Cloud computing will make these types extinct in the near future. Let’s start unshackling your servers!

Unshackle your servers

It’s all within reach. Start the transition today! Before the unshackling can begin, you need to build a Cloud architecture strategy for predictability, scalability, and automated recovery. The next step is huge: getting buy-in from the development and operations teams. It starts by presenting your strategy and demonstrating how virtual servers and containers make it possible. Let’s start setting your dedicated servers free!

Start small and expand slowly! Always look for ways to simplify your infrastructure — build, measure, and learn. Some people like to learn by jumping directly into writing their own code, while others may seek experts. Choose your own adventure! You are making giant leaps towards immutable servers. Do not treat virtual machines in a static way.

Make infrastructure part of the application

What is an immutable infrastructure? It is one made up of immutable components that are replaced at every build and deployment, with changes made only by modifying a versioned definition rather than by updating anything in place. So when building infrastructure as part of the application, three words immediately come to mind:

  • Predictable – Promote the exact same artifact that you tested into your production system.
  • Scalable – Meet user demand by letting the application automatically grow or shrink its pool of servers.
  • Recoverable – An Auto Scaling Group (ASG) detects instance termination and automatically brings up a new, identically configured instance (see the sketch after this list). Netflix has taken this a step further with its Chaos Monkey, a service that randomly knocks out production services, forcing developers to build easily recoverable infrastructure.
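
As a rough sketch of that last property (assuming the Node.js aws-sdk; the AMI ID, names, and sizes are illustrative), pairing a versioned, pre-baked image with an ASG might look like this:

var AWS = require('aws-sdk');
var autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

// Every deployment registers a freshly baked image as a new launch
// configuration; nothing is ever patched in place.
autoscaling.createLaunchConfiguration({
    LaunchConfigurationName: 'web-v42',
    ImageId: 'ami-12345678', // immutable, pre-baked image
    InstanceType: 't2.medium'
}, function (err) {
    if (err) throw err;
    // The ASG replaces any terminated instance with an identical copy.
    autoscaling.createAutoScalingGroup({
        AutoScalingGroupName: 'web',
        LaunchConfigurationName: 'web-v42',
        MinSize: 2,
        MaxSize: 10,
        AvailabilityZones: ['us-east-1a', 'us-east-1b']
    }, function (err) {
        if (err) throw err;
        console.log('Fleet is now self-healing and scalable.');
    });
});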

Why is this important? Without this kind of infrastructure, you can spend hours each week manually maintaining and modifying network configurations, and the problems will still continue to grow. It’s just not a smart use of employee resources.

There are many benefits and advantages to moving your infrastructure to the Cloud. Let’s start with the potential cost savings: you pay for what you consume, instead of having to invest heavily in data centers and servers. Next, stop guessing your infrastructure capacity and speed needs. You can access as much or as little as you need, and scale up and down as required in only a few minutes. Another plus: where it matters most, you can easily deploy your application in multiple regions around the world in a few minutes, which means you can provide lower latency and a better experience for your customers. And to ease the management of your Cloud infrastructure, you will benefit from the many available Cloud infrastructure automation tools.

When all is said and done, you can release the entire stack faster to market with greater test coverage since you’ve helped the server huggers move on.

How

The Cloud! Start turning your infrastructure into code. Enable developers to build, test and deploy applications on highly scalable and reliable infrastructure. Think about what you want the servers to run, not how to run them.

When starting the shift to immutable infrastructure, consider these steps:

  • Start small
  • Get buy-in
  • Implement
  • Reinforce best practices and automate the deployment
  • Adopt automation across the deployment process
  • Build on your successes, so you can begin to reap the benefits of immutable servers

The takeaway:

  • Get over static servers, and treat servers as a commodity.
  • Treat all your virtual servers as immutable.
  • The deployment model is to terminate the instance/container and start over from step one: build a new image and throw the old instances away.
  • Script your orchestration, and expand your ability to test more and build more comprehensive grids.
  • When designing your Cloud infrastructure from scratch, it makes sense to start small and expand slowly — don’t try to “boil the ocean.”
  • The Cloud has made immutable infrastructure possible.

By Greg Sypolt

Greg Sypolt is a Senior Engineer at Gannett Digital with 10 years of focus on project quality, results, and customer satisfaction while serving in multiple leadership roles. He is an expert in all areas of testing – including functional, regression, API, and acceptance testing – in both the automated and manual realms. Greg is experienced with test strategy planning, including scope, estimates, and scheduling. He also assumed the role of Automation Architect while helping convert a large-scale, global test team from a manual to an automated focus.

Moving QA Upstream

June 17th, 2015 by Ashley Hunsberger

Every few years, new development and release methods appear. Lately, most of us are looking at continuous integration and continuous delivery. But what about those teams that are trying to transition to faster releases? That’s going to be a huge hurdle if you save your testing for the end, before release (and in my experience, that has almost always resulted in pushing the release date). There is an important shift that needs to be made in order to be successful: bring in QA early.

Moving Upstream – From the Field

I recently participated in a sprint retrospective. During our ‘lessons learned’ meeting, an engineer said, “I really wish we had had these conversations with QA earlier… they really knew the feature.” In my perfect world, the referenced discussion would have happened before any development had begun. Unfortunately, sometimes factors are out of our control (like the number of resources — I have discovered that I cannot, in fact, clone myself). The end result of this meeting was that the engineer realized that some unexpected new work and some revisions to code he had already written would need to be done. Extra scope aside, the engineer brought up a great point: the tester knows the system/feature, and really does know what questions to ask.

When you bring QA in earlier and discuss a feature up front with the engineer and designer — before any work is done — all questions and assumptions are out on the table. This pre-mortem allows the team to size up possible issues, and prepare for feature testing with better expectations. Together, you eliminate the guesswork that comes from working on your own, which will likely be different depending on whether you are a developer or a tester. When QA and devs work together, engineers know precisely what they have to develop in order for the feature to pass and meet the acceptance criteria, and when it is time to audit, designers should not be surprised.

The benefits of moving QA upstream go beyond getting everyone on the same page. You will also reduce the overhead of over-documenting your project, since the acceptance tests become your specifications. There is no more need to tediously update both test and spec if you are defining them in your acceptance criteria as a team. I wish I had a dime for every time I heard, “Was it updated in the spec so I can update the test (or update the code)?” … Let’s just say I’d have a lot of dimes. Often this conversation happens in various emails or chats where someone thought the behavior was discussed, and may or may not have included all necessary team members.

To me, the biggest reward for defining acceptance criteria first is that you are building quality in from the start. Of course you’ll still find bugs — I wouldn’t be where I am today if software came bug-free. But you will certainly prevent bugs, and hopefully the bigger ones. Think about it: if you define acceptance criteria first, you build to the test until all tests pass, and the bugs found during that process are found early, so you can release faster but maintain quality. The cost of finding a bug late is so much higher than finding it early (spending resources on triage, remembering what was built a sprint or so ago when you’ve possibly moved on to another user story, potentially impacting other code with your fix).
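
As a sketch of what “acceptance tests become your specifications” can look like in practice, the criteria the team agrees on can be captured as an executable (and initially failing) spec before development starts. The feature and the passwordReset module below are hypothetical:

var passwordReset = require('../src/passwordReset'); // hypothetical module under test

// Acceptance criteria agreed on by dev, QA, and design, written before the code.
describe('Password reset', function () {

    it('sends a reset email for a registered address', function () {
        var result = passwordReset.request('user@example.com');
        expect(result.emailSent).toBe(true);
    });

    it('does not reveal whether an address is registered', function () {
        var result = passwordReset.request('unknown@example.com');
        expect(result.message).toBe('If that address exists, a reset link is on its way.');
    });
});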

Step 1 – A Strategy

To be truly successful with continuous integration, I think you really do need to look at automation — but the great thing is, moving QA up in your development lifecycle is completely tool and methodology agnostic. What do I mean? It doesn’t matter if you have a manual testing group or a robust automation shop. Getting your QA involved earlier can only help your team. Of course, as you write your acceptance criteria, you may also want to identify how each criterion can be tested (for example, is it a unit, integration, or functional test), but in the grand scheme of things, it doesn’t matter what your testing framework is. Hopefully you quickly identify very little that needs manual testing, freeing you up to explore the system in other ways.

Naturally, there will be some skeptics. So QA also needs to educate the team in the value of this approach, and why it’s so important that everyone view quality as a team effort. Just because we may help drive the discussion and define acceptance criteria, that doesn’t mean the tester owns the quality. Engineers may need to write more tests than they are used to, but it will only help their code as they build until every test passes. With everyone working together, engineers, QA, and designers all take part in defining quality, owning it, and delivering a better product.

Ashley Hunsberger has worked at Blackboard Inc for the past 10 years, dedicated to the client experience. She is a test expert focused on functional, regression, exploratory, and acceptance testing, working in both automation and manual environments. Ashley is experienced in test strategy planning, project management, team leadership, and process improvement across development teams. She resides in Raleigh, NC with her family.


How To Test Responsive Web Apps with Selenium

June 8th, 2015 by Dave Haeffner

The Problem

When testing a web application with a responsive layout you’ll want to verify that it renders the page correctly in the common resolutions your users use. But how do you do it?

Historically this type of verification has been done manually at the end of a development workflow — which tends to lead to delays and visual defects getting released into production.

A Solution

We can easily sidestep these concerns by automating responsive layout testing so we can get feedback fast. This can be done with a Selenium test, Applitools Eyes, and Sauce Labs.

Let’s dig in with an example.

An Example

NOTE: This example is built using Ruby and the RSpec testing framework. To play along, you’ll need Applitools Eyes and Sauce Labs accounts. They both have free trial accounts which you can sign up for here and here (no credit card required).

Let’s test the responsive layout for the login of a website (e.g., the one found on the-internet).

In RSpec, a test file is referred to as a “spec” and its name ends in _spec.rb. So our test file will be login_spec.rb. We’ll start it by requiring our requisite libraries (e.g., selenium-webdriver to drive the browser and eyes_selenium to connect to Applitools Eyes) and specifying some initial configuration values with sensible defaults. (more…)

Test Automation KPIs

June 3rd, 2015 by Greg Sypolt

One of the interesting things about automation is that it frees you up from time-intensive manual testing, allowing you to spend time on strategic elements — because if you do not spend time on strategy, your capabilities as a team will not grow. And part of that growth means focusing on valuable metrics — metrics that will help you learn and improve your processes.

Once you have processes in place, the next crucial step is to invest in automation. Automation helps you work faster, and makes your work consistent, traceable, and shareable, which is also imperative. All this comes only after establishing the right KPIs (key performance indicators).

Automation: Deliver Faster, from Months to Minutes

Ask yourself this question: Without CI (continuous integration), how long would it take your organization to deploy a change that involves just one line of code? For instance, say your organization sets an objective to deploy a change to production within 30 minutes. To achieve this objective, everyone has to agree on the tools and processes needed for an “easy button” approach (aka continuous integration).

Let’s review the roles, team responsibilities, and the CI process. (more…)

Leverage Your QA Team Upfront

May 15th, 2015 by Ashley Hunsberger

All too often, QA is perceived as the bottleneck in getting software out the door. This makes sense when you see QA as just the “bug finder” in a world where you build, test, and release. But how can you leverage your QA team to improve your pipeline to delivery? There are many ways to do this, but let’s look at using defect prevention and analysis to test wisely and improve your pipeline.

Prevention

The scenario: a designer has sent specifications (and if you’re lucky, wireframes) for a new feature. Depending on your development process, you would probably see the following happen:

  1. A developer writes some code
  2. A tester goes off and writes some tests
  3. Once the developer finishes the code, the tester executes the tests, finds bugs, and tells the developer to fix them.

The above scenario seems simple; however, at later stages one often finds that the test cases do not align 100% with the code. To fix this, what if you changed the process to look more like this?

(more…)

Not Just Faster Releases; Better Understanding

May 7th, 2015 by Chris Riley

This is a guest post by Chris Riley, a technologist who has spent 12 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling.

In DevOps there are several processes and tools that go beyond their face value. System Analytics, Application Performance Monitoring, and Continuous Integration (CI) all reach far beyond their core abilities. In particular, CI not only changes the speed and quality of releasing code; it improves communication and finds bugs in the software development process itself.

One of my biggest messages for companies moving to modern software delivery is the idea of being deliberate. Being deliberate means picking a culture, process, and tools that focus on results and how to attain them. Too often tools, especially open source ones, are adopted on a whim without much forethought. This is the converse of being deliberate: allowing the tools to define the process and culture for you.

When organizations follow the deliberate approach, they naturally get to a place where they move faster and what they have built is sustainable. A huge component of getting there is CI. No DevOps shop can survive without a continuous integration process. It allows front-end developers to participate in quality, find bugs before QA does, and test new functionality faster than ever. (more…)

Guest Post: All About Mobile Continuous Integration

April 20th, 2015 by Amber Kaplan

Image credit: Loris Grillet

CI’s not a new thing. Wikipedia says the phrase was first used back in 1994, way before modern mobile apps. Today it’s commonplace in many dev shops for developers to expect that their code is automatically tested when they commit and even automatically deployed to a staging environment.

For mobile developers, though, it’s been a slow road to adopting many of these same practices. In large part, this is because mobile brings with it a whole set of unique challenges that make implementation tough. Nevertheless, the tools have evolved a lot, and mobile dev teams get a lot of value and goodness from having a solid CI system in place. Here are my top 3 reasons for using CI with the mobile products I work on: (more…)

Guest Post: Functional Testing in 2016 – Forecast

April 15th, 2015 by Chris Riley

Massive changes in the development world are good, if extreme, for devs, but quality assurance (QA) teams are impacted as much, if not more. They may be taking on more tasks, looking at new tools, and thinking about new ways to execute their growing test suites. And looking forward, QA in the future looks much different than it does today. Things are moving so fast that the changes – both good and bad – will be even more obvious by next year. Here is what QA looks like in 2016. (more…)