It Takes Time To Go Fast

October 12th, 2010 by Jason Huggins

Since I started my career in software development, I’ve witnessed many changes in how we release software.

My first job was as a technical consultant at a large ERP software vendor, customizing their Student Administration package for a large state university. I worked on several things* on that project, and at one point, I was in charge of change management from the Dev to Test system environments. This was the process:

* On a weekly basis, email all the developers on the team, asking them for their finished code projects.
* Create a spreadsheet listing the complete set of projects to be migrated.
* Get various approvals from the team leads for the final migration list.
* Manually migrate each project from the Dev database to the Test database.
* Finally, let the testers know they’ve got new stuff to test.

Waterfall can best be described as a “walled-garden” methodology. Each function — development, test, and operations — mostly worked in silos. Interactions between teams only happened occasionally — weeks at best, months at worst. And even then, only a person with people skills, like me, handled the interaction between the teams.

This was the peak of waterfall software development. A complete code cycle from initial development to production only happened a few times a year. (“Once per quarter” was a pretty aggressive release goal for ERP projects in those days.) No one thought this was a “bad” process. Tedious, yes, but not terrible.

A few years later, I found myself working for a hip, boutique software consultancy in Chicago. I became the technical lead on an in-house project to replace the company’s time and expense system. This company was really into XP and Agile, so we “ate our own dog food,” and ran the project The Agile Way, the same way we ran our client projects. We had weekly iterations, and pushed to production about once a month. Initial project kick-off to first production users was about 3 months.

The big innovation I noticed with Agile was that development no longer lived in a “walled garden”: the wall that traditionally separated development from testing was gone. In fact, every developer wrote tests. We even wrote cross-browser testing tools to make the task easier for the development team. And we used continuous integration to make sure our tests ran all the time.

We were good-to-above-average for a typical agile project. We only pushed to production once a month, but we were proud of ourselves. During this whole time, no one thought this was a bad process. In fact, general consensus was that it was good, bordering on great — especially compared to the Waterfall horror stories everyone knew of.

More recently, I co-founded a startup focused on solving software testing problems. As a small startup with a new Amazon EC2 account, we had direct, fast access to deploy code to production whenever we wanted.

With no more long cycles waiting for IT to procure machines, and no lengthy approval processes for dealing with production DBAs, I could hear another wall collapsing. We had obliterated the line between dev/test and operations. Some people call this “DevOps.” In my opinion, it should be called “DevTestOps,” because no sane person would push to production without running tests first!

At first, even though we now had little process friction to go from dev to test to production, we still did things the Agile Way. We coded features at a weekly or biweekly scale, and pushed to production once a week or every two weeks. This was still an improvement over my first Agile project, though.

With the walls down between development, test, and operations, we’ve started to optimize for speed. As we strive to improve our process, it helps to see pioneers show the way.

We now release at a much faster pace than a typical Agile project. Our internal goal is to *always* push to production at least once a day. We often do better than that. The faster you go, the more motivated you are to go even faster.

One benefit of fast release cycles is that it frees up time to tear down more walls in the development process. We have metrics that watch user behavior in production — what they do and what they click on — which can inform what we work on next week. And we even have time to talk to the sales team for input on what to work on. Wherever we can, we destroy friction in our development process.

We’ve noticed that as our release cycle speeds up, so does our reliance on fast feedback systems. Our tests and our continuous integration system are the heartbeat of development. If the build is green, we deploy. If not, we wait. When you get used to pushing new code to production several times a day, it’s really painful to go back to the old way of doing things. After every process improvement, I look back at how we used to do things and wonder what took us so long to get here.
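A minimal sketch of that “green build gates the deploy” rule, in Python. The names (`BuildStatus`, `should_deploy`) are illustrative, not our actual tooling:

```python
# Sketch of the gating rule: only a green CI build may ship.
# BuildStatus and should_deploy are made-up names for illustration.
from enum import Enum

class BuildStatus(Enum):
    GREEN = "green"      # all tests passed
    RED = "red"          # at least one test failed
    RUNNING = "running"  # build still in progress

def should_deploy(status: BuildStatus) -> bool:
    """Production deploys are allowed only on a green build."""
    return status is BuildStatus.GREEN
```

A red or still-running build blocks the deploy; there is deliberately no override flag, which is what makes the feedback loop trustworthy.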

* My first assignment was writing the program for the Admissions department that had a simple workflow. Input: GPA and ACT/SAT score. Output: a “personalized” (mail-merged) acceptance or rejection letter. I still remember thinking how a bug in my code would alter the course of someone’s life. If you didn’t get into Boise State University around 1999-2000, sorry about that!

Comments

  1. Daniel says:

    Hi Jason,

    Nice post! I’d be interested to hear what technologies you’re using to deploy your code to production multiple times a day.
    Is deployment also run on pre-production environments such as qa, uat, staging, etc.?
    How long does each deployment take?
    Do deployments involve multiple application tiers? If yes, are they delivered in a single deployment?
    How many people are involved in each deployment?


  2. Scott Sims says:

    Excellent article! I recently attended a user group meeting where feedback loops were the topic of conversation. We discussed technical, product, and customer feedback loops. One of the keys to successfully monitoring feedback loops was metrics tooling. For example, if a change pushed to production showed a decrease in customer click-through, then the interface design change was rolled back.
    In reference to your lean start-up diagram, do you use metrics as a feedback loop for customers’ acceptance of new code? I believe the tool discussed at the meeting was called Vanity. It’s interesting that the growing trend is: let’s get new code in front of our customers as fast as possible so that we can learn more about what they want. I believe the hard part of this is setting up the correct metrics-gathering techniques so that you make decisions on accurate customer data. What do you think?

  3. Lisa Crispin says:

    It’s great to see more and more teams getting to this point. When I started on my current team in 2003, our goal was to have a stable build by the end of the sprint. We couldn’t visualize even having a stable build every day. Now we have many green builds every day and can release whenever we want.

    But from the title of the article, I thought you were going to point out that it takes a big investment to master all the skills and practices that let you ‘go fast’. We didn’t get to the point where we are now overnight: that lean startup mode (tho I don’t see us as a startup, we’re a successful 10 year old business) where there aren’t walls between teams and where the business sees us, the software team, as a contributor in more ways than just the software. Many companies won’t go through the pain of learning things like TDD (or even the non-painful things, like implementing CI) so they can reach the long-range goal.

  4. Jez Humble says:

    Interesting post. It shares a lot of common ground with the presentation I gave at JAOO last week on continuous delivery, and of course the book I co-authored.

    I think that when a bunch of us are all saying the same thing, it means we’re on to something. Exciting times.


  5. Thanks for the comments, everyone!

    Daniel, great questions! Re: “technologies you are using to deploy your code”… This is probably worth a follow-up blog post all by itself. But the short summary is: we use Python and BuildBot to run our builds, then report the status to Campfire. We then run one script that ssh’s into our servers and performs the deploy. Our builds run on machines that closely match our production environment. A build takes about 10 minutes; a production deploy takes a minute or two. Deploys involve multiple servers (tiers, as you call them). Also, only one person is involved in a production deploy — whoever executes the deploy shell script. Generally, anyone’s allowed to deploy if the build is green… but of course, sometimes there are exceptions.
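    In spirit, a deploy script like that is little more than a loop over servers. Here’s a hedged sketch — the host names, remote command, and `dry_run` switch are all hypothetical, not our actual script:

```python
# Hypothetical multi-server deploy script in the spirit described above.
# dry_run=True records what would run instead of actually ssh-ing anywhere.
import subprocess

SERVERS = ["web1.example.com", "web2.example.com", "db1.example.com"]

def deploy(servers, dry_run=False):
    """Run the deploy command on each server, stopping on the first failure."""
    planned = []
    for host in servers:
        cmd = ["ssh", host, "cd /srv/app && git pull && ./restart.sh"]
        planned.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # raises on a non-zero exit
    return planned

plan = deploy(SERVERS, dry_run=True)
```

    The `check=True` is the important design choice: a failed deploy on any tier halts the rollout rather than leaving the fleet half-upgraded.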

    Scott, we rolled our own metrics tools. In general, the metrics we monitor fall into two categories:
    1) We follow the “AARRR” funnel that Dave McClure talks about.
    2) Monitoring usage of specific new features. If no one uses a new feature, then we won’t invest in further dev on it until usage increases… which can sometimes be a bit of a catch-22: would usage go up *if* we invested more in the feature? Not always easy to tell.
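    As a toy illustration of that second category, the “is anyone using this feature?” check boils down to counting events per feature. The event names and threshold here are invented, not our real data:

```python
# Toy sketch of per-feature usage monitoring; event names are invented.
from collections import Counter

event_log = ["screenshot", "screenshot", "video", "screenshot", "video"]
usage = Counter(event_log)

def is_unused(feature, usage, threshold=1):
    """Below the threshold, we hold off on further dev for that feature."""
    return usage.get(feature, 0) < threshold
```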

    Lisa, thanks for the comments. Yes, it does take an investment, but it also takes a general cultural shift in the industry to value being able to deploy to production daily. Some people have been pushing to production every day forever (e.g. the Amazons and Googles of the world)… but it’s only recently that the “rest of us” have caught on. It’s the cultural shift of valuing going faster (and refining what “fast” means) that takes time.

    Jez, great links! I agree, exciting times, indeed. :-)

  6. Great to see how you optimize not only the engineering part of your organization using agile and devops, but also take care of the complete value-creation chain. That’s forgotten by too many people when they start going agile. It’s not about sub-optimizing engineering but about creating more value, faster.
