Hello all! Here with another article. One of my favourite implementations from my agile software delivery consulting days, and one of the biggest drivers of culture change I've seen, is DORA metrics. I've also recently spoken about WIP limits and their general importance, so be sure to check that out as well. Off we go.
What are DORA metrics?
The DevOps Research and Assessment (DORA) metrics are a set of key performance indicators (KPIs) designed to measure the effectiveness of software delivery and operational performance. Developed by experts in the field of DevOps, DORA metrics help organizations evaluate their development practices, optimize their delivery pipeline, and drive continuous improvement. Make sure you check out IT Revolution and the State of DevOps report. A good introductory book on the topic is “Accelerate: The Science of Lean Software and DevOps”. These reports and books are invaluable and essential to really understanding organizational performance from a software delivery perspective. This article will discuss the importance of DORA metrics, why companies might adopt them, and the challenges and considerations associated with their adoption.
DORA Metrics Explained
DORA metrics focus on four primary indicators:
- Deployment Frequency (DF): This metric measures how often a team deploys code to production. A higher deployment frequency typically signifies a more efficient and agile delivery process. The State of DevOps report groups teams into tiers of performers (Low, Medium, High, and Elite). Elite teams push code multiple times per day, whereas low performers push code to production once every six months or less frequently.
- Lead Time for Changes (LT): Lead time refers to the time it takes for a code change to move from commit to deployment in the production environment. Shorter lead times indicate faster feedback loops and more effective development processes. For Jira users this can be difficult to measure, as lead time is a bit tricky, especially when the focus is on software engineering teams. A practical alternative is to measure “cycle time” instead: the time from when an item enters In Progress to when it reaches Done.
- Mean Time to Restore / Resolution (MTTR): This metric tracks the average time it takes to recover from a service outage or incident. A shorter MTTR signifies a more robust and resilient system with efficient incident management practices. For example, say customers cannot access your website, and you know the issue began immediately after you pushed code at 4pm. You restore service within 30 minutes (by 4:30pm), so your MTTR for this incident is 30 minutes. A thorough incident process will also include incident severity levels, financial impact, and more, but MTTR is a great metric overall.
- Change Failure Rate (CFR): CFR measures the percentage of deployed changes that result in service impairment, requiring remediation or rollback. A lower change failure rate indicates higher code quality and better risk management. For example, suppose I were on a low-performing team that pushed code only twice a year: the first deployment failed and the second succeeded. My deployment frequency would be 2 per year, with a 50% change failure rate (1 failure out of 2 deployments).
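To make the four definitions above concrete, here is a minimal sketch of computing them from deployment and incident records. This is illustrative only: the `Deployment` and `Incident` record shapes, field names, and the reporting window are my assumptions, not a standard tool's API, and real pipelines would pull this data from CI/CD and incident-management systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Dict

@dataclass
class Deployment:
    committed_at: datetime   # when the change was committed (hypothetical field)
    deployed_at: datetime    # when the change reached production
    failed: bool             # True if it caused service impairment

@dataclass
class Incident:
    started_at: datetime     # when the outage began
    restored_at: datetime    # when service was restored

def dora_metrics(deployments: List[Deployment],
                 incidents: List[Incident],
                 window_days: int) -> Dict[str, object]:
    """Compute the four DORA metrics over a reporting window of `window_days` days."""
    # Deployment Frequency: deployments per day over the window
    df = len(deployments) / window_days

    # Lead Time for Changes: average commit-to-deploy duration
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    lt = sum(lead_times, timedelta()) / len(lead_times)

    # Change Failure Rate: share of deployments causing impairment
    cfr = sum(1 for d in deployments if d.failed) / len(deployments)

    # Mean Time to Restore: average outage-to-restore duration
    restore_times = [i.restored_at - i.started_at for i in incidents]
    mttr = (sum(restore_times, timedelta()) / len(restore_times)
            if restore_times else timedelta())

    return {
        "deployment_frequency_per_day": df,
        "lead_time_for_changes": lt,
        "change_failure_rate": cfr,
        "mean_time_to_restore": mttr,
    }
```

Feeding in the article's own worked example (two deployments in a year, one failed, plus a 4:00pm–4:30pm outage) yields a change failure rate of 0.5 and an MTTR of 30 minutes.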
Why Companies Adopt DORA Metrics
Companies might adopt DORA metrics for several reasons:
- Benchmarking: DORA metrics enable organizations to compare their software delivery performance against industry standards and identify areas for improvement.
- Continuous Improvement: By tracking DORA metrics, companies can pinpoint inefficiencies in their delivery pipeline, implement changes, and measure the impact of those changes over time.
- Enhancing Collaboration: DORA metrics encourage collaboration between development, operations, and other stakeholders by promoting a shared understanding of software delivery performance.
- Objective Decision-Making: These metrics provide data-driven insights to inform strategic decision-making and prioritize investments in technology, process improvements, and skill development.
- Signals and Trends: These metrics allow you to determine what is going on with your organization and its general health. This is the core of good software delivery.
Challenges and Considerations in Adopting DORA Metrics
- Data Collection and Reliability: Accurate and reliable data collection is crucial for measuring DORA metrics. Organizations need to ensure that they have appropriate tooling and processes in place to gather and analyze data consistently. Data hygiene is incredibly important, and a lot of foundational work must take place before you can measure reliably.
- Cultural Shift: Adopting DORA metrics may require a shift in organizational culture, as it demands cross-functional collaboration, transparency, and a focus on continuous improvement. This transition can be challenging for companies with traditional, siloed structures.
- Misaligned Incentives: There is a risk that team members might prioritize metric improvement over actual business outcomes. It is essential to align incentives with the overall goals of the organization and emphasize the importance of customer value.
- Context Sensitivity: DORA metrics should be interpreted within the context of the organization’s specific goals, technology stack, and industry. Companies should avoid overgeneralizing and understand that a one-size-fits-all approach may not always be appropriate.
DORA metrics offer a valuable framework for organizations to evaluate and improve their software delivery performance. By adopting these metrics, companies can benchmark their processes, drive continuous improvement, enhance collaboration, and make data-driven decisions. However, it is crucial to consider the challenges associated with data collection, cultural shifts, incentive alignment, and context sensitivity to ensure successful adoption and derive the most value from DORA metrics.
Good luck on your journey!