Measuring the success of an agile adoption

Over the past several years, scrum has grown to become the most commonly used product development method at Adobe Systems. Large desktop software products like Premiere Pro and After Effects, platform tools like Adobe AIR, and Software as a Service products like Acrobat Connect and Omniture SiteCatalyst are using scrum to become more effective at delivering the right solutions to customers with higher quality. In 2011, I presented a paper at the HICSS conference that describes some of the things Adobe teams try to measure when they move to scrum. Details after the break…

These days, when someone asks me “how do you measure whether scrum is more effective than whatever the team was doing before?”, my immediate response is usually “how do you measure the effectiveness of your current process?” Typically, there is no answer to that question, which lets us ask how we should measure effectiveness. That is usually a fruitful conversation. It still doesn’t answer the original question, though, and especially for leaders, I think it is a fair question to ask. We are going to invest time and effort in changing, and we ought to be able to show how the change is better than what we were doing before.

One of the real challenges with measuring the impact of a large change in approach is that the things you measured before may be completely irrelevant to the new system. For example, a team using a more traditional plan-based approach might measure things like bugs found between Feature Complete (or Beta) and Release Candidate. They might measure lines of code as a way to gauge the overall size of the project or (ugh) the productivity of any given engineer. If that team then adopts an agile approach like Scrum or Kanban, those metrics suddenly have no real meaning. The team starts tracking velocity or throughput, passing acceptance tests, or defects found early: a completely different set of metrics. That makes it difficult to draw any consistent comparison between the old approach and the new one with simple metrics.
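To make the contrast concrete, here is a minimal sketch of how an agile team might track velocity. This is an illustrative example, not how Adobe teams computed it; the function name and the rolling-average window are my own assumptions. Velocity is simply the story points completed per sprint, and a short rolling average smooths sprint-to-sprint noise:

```python
# Illustrative sketch (assumed names): velocity as story points
# completed per sprint, smoothed with a rolling average.
def rolling_velocity(points_per_sprint, window=3):
    """Return the rolling-average velocity over the last `window` sprints."""
    averages = []
    for i in range(len(points_per_sprint)):
        # Take up to `window` most recent sprints ending at sprint i.
        recent = points_per_sprint[max(0, i - window + 1): i + 1]
        averages.append(sum(recent) / len(recent))
    return averages

# Five sprints of completed story points:
print(rolling_velocity([21, 25, 18, 30, 27]))
```

Note that a number like this only means something relative to the same team's own history, which is exactly why it cannot be compared against the lines-of-code or bug-phase metrics the team collected before the change.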

At Adobe, a few measurements remained constant for many of the teams adopting scrum. The paper describes several categories of such data, like open bug counts mapped over the course of a release cycle, and the total length of time required to prepare a product or project for release once all of the features are coded. We would expect a move to scrum to cause overall bug counts to drop over the course of the cycle, and the length of time required to ship a “feature complete” product to be significantly reduced, if not eliminated. Teams have also measured subjective survey data from team members and customer satisfaction metrics like Net Promoter Score.
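For readers unfamiliar with Net Promoter Score, the arithmetic is simple: respondents rate 0–10, scores of 9–10 count as promoters, 0–6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch (the function name is my own):

```python
# Net Promoter Score: respondents rate 0-10.
# Promoters score 9-10; detractors score 0-6; passives (7-8) are ignored.
# NPS = %promoters - %detractors, ranging from -100 to +100.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# 4 promoters, 2 detractors out of 8 responses -> (4 - 2) / 8 = 25
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25
```

Because NPS is independent of the development process used to build the product, it is one of the few metrics that can be compared directly before and after the adoption.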

I’ve attached the full paper with all of the details. Feel free to comment on what you see or ask any questions about what we’re tracking and why.

Measuring the Impact of Scrum on Product Development at Adobe Systems
