This book is a seminal work . . . not only for the content, but for the methodology and rigor by which the conclusions were derived. Anyone can write an extended opinion piece on their 'best way to build software', but this book lays out conclusions derived based on data gathered from a large sample set of organizations of all shapes and sizes. There is a level of sophistication in the statistical analysis of the data that is rarely seen in popular books within our industry, and to their credit the authors openly share their methods. To top it off, there is a really exciting bottom line here:
Organizations applying the key findings in this book are more successful in the market
Disclaimer: The following are my notes, interpretations, and a few choice quotes. They are a poor proxy for reading the book and evaluating everything inside within the context of your organization. I won't be able to help introducing some of my own bias here, so please go read the book to hear it straight from the horse's mouth and in complete form.
Software Delivery Performance Contributes to Market Success
Measuring things in software is notoriously difficult to do in a broadly applicable way, given the highly contextual nature of every software development project and varying definitions of the term "success". The authors crafted and conducted yearly surveys of professionals in the industry, held in-person interviews, and applied statistical analysis to the data over time to validate assumptions and hypotheses from year to year.
This led to the identification of 4 factors that can be used to measure Software Delivery Performance:
- Lead time
- Deployment frequency
- Mean Time to Restore (MTTR)
- Change fail percentage
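All four metrics can be computed from data most teams already have. Here is a minimal Python sketch, assuming hypothetical deployment and incident logs (the records, timestamps, and definitions below are my own illustration, not the book's survey methodology):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment log: (commit time, deploy time, change succeeded?)
deployments = [
    (datetime(2024, 1, 1, 9, 0),  datetime(2024, 1, 1, 11, 0),  True),
    (datetime(2024, 1, 2, 10, 0), datetime(2024, 1, 2, 10, 30), False),
    (datetime(2024, 1, 3, 8, 0),  datetime(2024, 1, 3, 9, 0),   True),
    (datetime(2024, 1, 4, 14, 0), datetime(2024, 1, 4, 15, 0),  True),
]
# Hypothetical incident log: (outage start, service restored)
incidents = [(datetime(2024, 1, 2, 10, 30), datetime(2024, 1, 2, 11, 15))]

# 1. Lead time: median duration from code commit to running in production
lead_time = median(deploy - commit for commit, deploy, _ in deployments)

# 2. Deployment frequency: deployments per day over the observed window
window = deployments[-1][1] - deployments[0][1]
deploys_per_day = len(deployments) / (window.total_seconds() / 86400)

# 3. MTTR: mean duration from outage start to service restored
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

# 4. Change fail percentage: share of deployments that needed remediation
change_fail_pct = sum(1 for *_, ok in deployments if not ok) / len(deployments)
```

With the toy data above, the median lead time is one hour, MTTR is 45 minutes, and one of four changes failed.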
"The highest performers deploy frequently, and improve, and fix the fastest." - Accelerate
The first part of the book breaks down techniques and practices that are implemented by organizations which perform well in those 4 areas. Afterwards, there is some detailed discussion around the analysis and statistical methods that were used to establish the chains of related practices and their influence on the above factors.
Lead Time
How long does it take to go from code commit to production?
"Shorter product delivery lead times are better since they enable faster feedback on what we are building and allow us to course correct more rapidly. Short lead times are also important when there is a defect or outage and we need to deliver a fix rapidly with high confidence." - Accelerate
The authors of Accelerate measured product lead time as the time it takes to go from code committed to code successfully running in production.
Deployment Frequency
How often are changes deployed?
Higher performers deploy small batch change-sets to production frequently. Batch size is hard to measure in a consistent and meaningful way on a survey, so the authors used deployment frequency as a proxy.
It's essential to deliver small chunks of work to production and end users quickly, rather than batching them together in larger, slower releases.
Increased deployment frequency pushes on the need to automate release and regression testing processes.
For a team that wants to scale up in size, this is even more critical:
"As the number of developers on a team increases, low performers deploy with decreasing frequency, medium performers deploy at a constant frequency, high performers deploy at a significantly increasing frequency" - Accelerate
With decreased lead time and increased deployment frequency, MTTR decreases.
Mean Time to Restore/Repair/Recover (MTTR)
How much time does it take to recover when things break?
Higher performers do it faster.
Decreased lead time, pro-active monitoring, automating repetitive processes like regression testing and releases, and developing a habit and cadence of frequent small releases all contribute to an improved capability to recover faster when something goes awry.
Change Fail Percentage
How often does an attempted change fail, needing rollback or additional fixes?
Higher performers have fewer failed changes.
How to Improve Performance
The book covers a number of interrelated practices which correlate with improved performance in the above 4 areas.
Three which resonated the most with me were:
- Continuous Delivery and Source Control
- Lean Management
- Lean Product Development
Continuous Delivery and Source Control
How frequently can you deploy? Can you deploy every commit (or merged PR)?
Going all in on continuous delivery raises the stakes for an automated, reliable release process, automated regression testing, and a more loosely coupled architecture. Having something pushing on those areas, forcing improvement, creates a virtuous cycle that enables you to deploy even more frequently with confidence.
Continuous delivery also fosters a sense of responsibility, shared ownership, and empowerment within each development team. Teams that can deploy to production at any time take more responsibility for the results. They are free to test, learn, and adjust in small increments rather than having to make huge course corrections.
Part of a rock solid continuous delivery and release process is having a source of truth for exactly what code is deployed and how servers and applications are configured. Without that, you can't automate making changes to deployed applications with confidence or quickly identify and recover from mistakes. Every developer worth their salt knows the value of source control. What about configuration though? I was really excited to read this tidbit about putting server and application configuration in source control:
"...keeping system and application configuration in version control was more highly correlated with software delivery performance than keeping application code in version control. Configuration is normally considered a secondary concern to application code in configuration management, but the research shows this is a misconception." - Accelerate
Waaaaaht? More highly correlated? That's a surprise. I would have guessed an even correlation given my experiences using Infrastructure as Code for provisioning and configuration of applications. Every single time our team has invested in converting a legacy snowflake system to be fully Infrastructure as Code, it has paid dividends. I'm a bit puzzled as to how it rose above storing application code in source control, but maybe that's just my developer bias/experience.
Lean Management
Some key practices from Lean Management contribute to an organization's ability to improve on the 4 factors identified above that impact Software Delivery Performance:
- Limiting work in progress
- Visual Management
- Feedback from production
- Lightweight Change Approvals
Limiting work in progress
Limiting work in progress reduces the cost of context switching, sharpens focus, and galvanizes a team around accomplishable goals they can complete before moving on to the next thing.
Visual Management
Create and maintain visual displays showing key quality and productivity metrics and the current status of work, make those displays available to both engineers and leaders, and align the metrics with operational goals.
Feedback from Production
Timely, actionable feedback from production is critical. Using data from application performance and infrastructure monitoring to make business decisions on a daily basis is the level-up in monitoring capability here.
Lightweight Change Approvals
Who doesn't love the weight of a heavy change control process? Me.
So many gems in the data here:
"Approval only for high risk changes was not correlated with software delivery performance" - Accelerate
Additionally, approval by external bodies outside the team has a negative correlation with software delivery performance.
For organizations that have developed a heavyweight change management process due to regulatory or industry constraints (e.g., PCI), it is exciting to hear that Segregation of Duties can be satisfied with approval by another person on the same team plus a locked-down deployment pipeline.
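That peer-approval-plus-locked-pipeline model is simple enough to encode directly. A hypothetical sketch of such a gate in Python (the function and its shape are my own illustration, not something prescribed by the book):

```python
def approved_for_deploy(author: str, approvers: set[str]) -> bool:
    # Segregation of duties: a change may deploy only when at least one
    # person other than its author has approved it. A locked-down
    # deployment pipeline would call this gate before releasing.
    return bool(approvers - {author})
```

Self-approval fails the check, while a single teammate's approval satisfies it, keeping the process lightweight and inside the team.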
"You can act your way to a better culture by implementing these practices in technology organizations" - Accelerate
One really interesting and unexpected [to me] discovery was that improved tooling and the injection of new practices can actually lead to cultural outcomes. Given how often we see the anti-pattern in software development of trying to solve everything with a software solution, or in methodologies of trying to solve everything with a defined and fixed process that ignores the strengths and weaknesses of individuals, this caught me off guard.
Thinking about it holistically, though, it makes sense. I think it's all about tightening feedback loops to improve the rate at which people in an organization can adapt and evolve their culture. People are quite adept at adapting to new information over time. So to the extent that new practices and tooling tighten a feedback loop, delivering feedback faster and on a more regular, harder-to-ignore basis, it seems inevitable that cultural improvements emerge.
Lean Product Development
Taking a Lean approach to product development also feeds into improving Software Delivery Performance:
- Work in Small batches
- Make Flow of Work Visible
- Gather and Implement Customer Feedback
- Allow for and encourage Team Experimentation
I think team experimentation in particular is underestimated.
"... ability of teams to try out new ideas and create and update specifications during the development process, without requiring the approval of people outside the team is an important factor in predicting organizational performance as measured in terms of profitability, productivity, and market share." - Accelerate
In my own experience, amazing things tend to happen when you give a couple of talented team members some focused, interruption-free time to try new things or come up with alternative solutions and approaches to problems without overspecifying the 'how'.