While many companies and teams see the value of assessing their DevOps capabilities, few understand that an assessment should be a living document; most treat it instead as an audit result.

An audit is a yes/no answer to a compliance question against a checklist of quality attributes. Through an audit you can usually demonstrate “conformance to requirements”, the checklist being the list of those requirements.

An assessment, on the other hand, is a relative measurement against a set of processes and standards that define a capability maturity level, where such standards exist, or against a set of principles and best practices where they do not. The latter is the situation in the DevOps field today: not only is there no overarching agreement on what DevOps is, it is not even clear what it encompasses, beyond a generic set of high-level principles.

Among those principles is, of course, that Agile is the foundation of DevOps. That’s where I usually start my conversations on DevOps; more specifically, with the second value of the Agile Manifesto, “Working software over comprehensive documentation”, and the principles:

  • “Working software is the primary measure of progress.”
  • “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.”

Jim McCarthy’s early insight into what working software is has been proven over and over. He said in ’96 that “you can be so smart just by going ‘How is the state of your daily build’” (nowadays, of course, replaced by “how is the state of your CI (Continuous Integration) build”). Whether “daily build” or “CI build”, the fundamental question being asked is whether you have tested software, so users and testers can “kick its tires”.

That’s where I start all my DevOps assessments: do you have working software to begin with? That is the main foundation, and lately it has been one of the most neglected.

Of course, if you are just doing an audit, it is easy to mislead the auditor into looking at a build system that conveniently labels your “run-on-commit” builds as “Continuous Integration” (vendors love to let developers effortlessly “achieve” important milestones such as CI – “here it is, boss: I’ve been here just a day, and we already have CI, give me a raise”). So auditors will look at the system, see some green status on a mislabeled build and agree: you have CI. Check.

When doing an assessment, though, you look at the principles and at the entire system. You cannot know the real status of your project unless the build both compiles the code and runs tests against it. From there you branch out into other questions on delivery, still following those Agile principles.
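The build-plus-tests distinction above can be sketched as a tiny audit check. This is purely illustrative: the pipeline fields are invented for the example, not taken from any real CI vendor’s API.

```python
# Illustrative sketch: a build only counts as "real CI" when every
# commit triggers a build AND a gating automated test run. The field
# names below are hypothetical, not any vendor's actual schema.

def is_real_ci(pipeline: dict) -> bool:
    """True only when the build compiles AND runs tests on every commit."""
    return (
        pipeline.get("trigger") == "on_commit"
        and pipeline.get("compiles", False)
        and len(pipeline.get("test_stages", [])) > 0
        and pipeline.get("fail_on_test_failure", False)
    )

# A build mislabeled as "Continuous Integration": runs on commit, no tests.
mislabeled = {"trigger": "on_commit", "compiles": True, "test_stages": []}

# A build you could actually assess working software against.
real_ci = {
    "trigger": "on_commit",
    "compiles": True,
    "test_stages": ["unit", "integration"],
    "fail_on_test_failure": True,
}

print(is_real_ci(mislabeled))  # False
print(is_real_ci(real_ci))     # True
```

An auditor sees the green status on the first pipeline; an assessor asks the second set of questions.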

Naturally, today we talk about Continuous Delivery instead of “from a couple of weeks to a couple of months”. I have been using Humble and Farley’s “Continuous Delivery” book as the “bible” for CD since it came out in 2010. Within it there is a very useful “Maturity Model for Configuration and Release Management”, which really is a CD maturity model:

The authors recommend a self-assessment and, most importantly, an iterative improvement cycle following the Deming PDCA (Plan/Do/Check/Act) cycle (I prefer Deming’s later PDSA formulation, with Study in place of Check). That is, it is not supposed to be used as a static “audit once and forget”, but rather as a reference for continuous improvement. Here is an example assessment (in this picture, green means that’s where you are, blue means that’s where you want to go):

While using this, the first thing I realized was that a baseline was missing. After all, one of the main reasons you hire someone external to do an assessment is so that you can be benchmarked against other companies.

By 2009 I had already personally worked with more than 200 teams on improving their software development practices, and had a reference for where most stood in their capabilities. I had also helped develop and deliver an ALM assessment for Microsoft (later made publicly available as the Rangers ALM Assessment), and had been dabbling in SCAMPI assessments (the heavy waterfall slant of the latter kept me from delving further).

In all this work I found some DevOps horror stories, but most teams lived at a “normal” out of which you would want to escape if you are ever to start your ascent to Continuous Delivery heights.

Thus, when doing assessments, I added my own understanding of what “normal” looks like. Along the way I encountered another framework that included a perceived industry norm, plus a suitable target level achievable with a minimum amount of investment: Eric Minick’s Continuous Delivery Maturity Model. What is really useful in Minick’s view is that he was working with hundreds, if not thousands, of teams using UrbanCode’s Deploy, so his industry-norm perspective had more data points than any other at that time. His baselines for the industry norm were the following:





His levels do not map directly to Humble and Farley’s maturity model, but based on his data and my own experience, you can assign similar baseline industry levels to Humble’s diagram.

By 2013, that data was enhanced by Puppet Labs’ 2013 State of DevOps Report, with four thousand data points. The success of this report led to the creation of DORA (DevOps Research and Assessment), which currently develops it in partnership with Puppet and other vendors.

This has become the de facto standard definition of DevOps today. DORA has its own assessment, but again, it is recommended that you work with an assessor who has experience across multiple companies and can see the forest for the trees. Those of us embedded in a single team sit at the other end of the spectrum, and tend to miss the obvious low-effort/big-impact improvement actions.

The DORA reports introduced success factors beyond the CD practices. Based on that and my own experience with ALM assessments, I also assess Culture and Process Agility as an integral part of the DevOps vision, given that DevOps is more a culture than anything else.
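The DORA reports are best known for four key delivery metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. As a rough illustration of what a data-backed assessment measures, here is a sketch that computes those four from sample records; the input data is invented, and a real assessment would pull it from your deployment and incident tooling.

```python
# Hedged sketch of the four key metrics popularized by the DORA
# State of DevOps reports. All records below are invented sample
# data, used only to show how each metric is derived.

from datetime import timedelta

deployments = [
    # (lead time from commit to deploy, did this deploy cause a failure?)
    (timedelta(hours=6), False),
    (timedelta(days=2), True),
    (timedelta(hours=12), False),
    (timedelta(days=1), False),
]
restore_times = [timedelta(hours=3)]  # one incident, restored in 3 hours
days_observed = 28

# Deployment frequency: deploys per day over the observation window.
deploy_frequency = len(deployments) / days_observed

# Lead time for changes: median commit-to-deploy time.
lead_times = sorted(lead for lead, _ in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Change failure rate: fraction of deploys that caused a failure.
change_failure_rate = sum(1 for _, failed in deployments if failed) / len(deployments)

# Time to restore: mean duration from failure to recovery.
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(deploy_frequency, median_lead_time, change_failure_rate, mean_time_to_restore)
```

Culture and process agility do not reduce to arithmetic like this, which is exactly why they need an assessor’s judgment rather than a dashboard.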

Nowadays there are plenty of reports and self-assessment options, most of them a good starting point for understanding your team’s current situation. One free alternative to start with is the Microsoft DevOps Self-Assessment. It is technology-agnostic and will provide you with a baseline, albeit without (presently) clear benchmark reference points.


At the end of the day, an assessment should be less a compliance checklist that stiffens your initiatives with rigid impositions, and more like Benjamin Franklin’s plan for attaining moral perfection: incremental and iterative self-improvement directives for a team:

“And like him who, having a garden to weed, does not attempt to eradicate all the bad herbs at once, which would exceed his reach and his strength, but works on one of the beds at a time, and, having accomplish’d the first, proceeds to a second, so I should have, I hoped, the encouraging pleasure of seeing on my pages the progress I made in virtue, by clearing successively my lines of their spots, till in the end, by a number of courses, I should be happy in viewing a clean book, after a thirteen weeks’ daily examination.”

I wish you the best on your DevOps improvement efforts.