This article describes an activity you can run with your team to find areas for improvement in your agility. As a bonus, the activity offers a chance to deepen the team’s knowledge of the Agile Manifesto. Read it, try it, and please share whether the outcome was worth it!
What you need
The 12 principles of the Agile Manifesto printed out on cards (feel free to use this template: AgilePrinciples.pdf) *
A flip chart or wall; if you can’t stick the cards to a wall, just use the floor
About 30 minutes with your team for this activity
How to play
The goal is to produce a clearly ordered list of the agile principles. The most important part is asking the right question, which we found to be “how great is our need for improvement regarding this principle?”, because it makes people think about their own situation and yields actionable insights.
1. Start with a random principle, discuss what it means and how great your need for improvement may be, and place it somewhere in the middle.
2. Pick the next principle, discuss what it means, and sort it relative to the already-placed principles. Use bubble sort or any other algorithm; I tend to propose a position based on the discussion and move from there by comparison.
3. Repeat step 2 until all cards are sorted.
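The sorting steps above are essentially an insertion sort, with the team discussion acting as the comparison. A minimal sketch, assuming a toy score stands in for the conversation (in the real activity the “comparison” is the team deciding which card needs more attention):

```python
# Insertion sort over principle cards. "needs_more_improvement" stands in
# for the team discussion that compares two principles.
def sort_principles(principles, needs_more_improvement):
    """Return principles ordered from biggest to smallest need for improvement."""
    ordered = []
    for card in principles:  # step 2: pick the next principle
        pos = 0
        # step 3: compare against already-placed cards until it fits
        while pos < len(ordered) and needs_more_improvement(ordered[pos], card):
            pos += 1
        ordered.insert(pos, card)
    return ordered

# Toy stand-in for the discussion: pretend each card carries an agreed score.
scores = {"Working software": 3, "Sustainable pace": 9, "Face-to-face": 6}
result = sort_principles(list(scores), lambda a, b: scores[a] > scores[b])
print(result)  # ['Sustainable pace', 'Face-to-face', 'Working software']
```

The comparison function is deliberately the only "expensive" operation here, which mirrors the activity: the value comes from the conversations, not the final ordering.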
Now that all cards are sorted, consider the card on top: this is the principle with the biggest and most urgent need for improvement, the one you should work on. Prepare to move into “generate insights”:
does everyone still agree?
how do you feel about it?
what are the reasons there is the biggest demand for change here?
should you compare to the second or third most important issue again?
if someone would now rather choose the second position, why?
Put the result up in the team space. This way, everyone can re-check it at any time and start a conversation about it. You can also bring it back into a later retrospective to see what changed after actions for the first insight were taken. As with any retrospective, it is very important to follow up on the agreed actions.
In this activity, you sort the principles along one dimension, the question “how great is our need for improvement regarding this principle?”. You could add a second dimension such as “urgency” or “effort”, so that the result is a map of principles with the most needed and most urgent (or easiest to fix) principles in the top right. But then, how do you estimate effort without first thinking about concrete actions, and why should urgency differ much from need? This modification didn’t help us, but maybe it helps you.
A better modification might be to sort only the top three principles. Compare each newly picked principle with the current top three, and if it does not show a greater need for improvement, don’t spend time deciding whether it belongs at position 10 or 11. You might lose some insight into what people actually think about the lower-ranked principles, and give up part of the educational aspect of the activity, but keeping the focus on the top principles should be fine.
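That variant can be sketched the same way: keep a running top three and compare each new card only against it, skipping all lower positions. Again, the score function is a placeholder for the team discussion:

```python
def top_three(principles, score):
    """Keep only the three principles with the biggest need for improvement."""
    top = []  # ordered, biggest need first, at most 3 entries
    for card in principles:
        pos = 0
        # compare against the current top three only
        while pos < len(top) and score(top[pos]) >= score(card):
            pos += 1
        if pos < 3:  # not worth discussing positions 4..12
            top.insert(pos, card)
            del top[3:]  # drop anything pushed below third place
    return top

# Toy stand-in scores for five cards; a real run uses team discussion instead.
cards = {"A": 2, "B": 7, "C": 5, "D": 9, "E": 1}
print(top_three(list(cards), cards.get))  # ['D', 'B', 'C']
```

Each card needs at most three comparisons instead of up to eleven, which is the whole point of the modification: fewer, more focused discussions.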
No team is perfect; we all know that. And it’s nothing to be ashamed of. We all strive for better agility, because we believe that iterative software delivery in close contact with the customer is a better way of building genuinely valuable software than waterfallishly keeping progress (or lack thereof) secret until the time frame for learning how to do better is long gone. In agile teams we hold retrospectives at regular intervals to identify ways to improve our team performance.
It is important not to run the same retrospective over and over again. It is very valuable to try different formats every now and then, letting the team take different viewpoints and identify different impediments. Diversity is your friend! An excellent resource for new things to do in retrospectives is the Retromat by Corinna Baldauf. It not only provides inspiration for new activities to try, it even lets you design whole retrospective plans and share them with your colleagues.
This article describes the “Find your focus principle” activity. The goal is to identify a team’s weakest spot regarding the principles of the Agile Manifesto. A secondary goal is to make the team aware of these principles. While the 4 values of the Agile Manifesto are quite present and understood by most of the people I have worked with, the 12 principles seem to be a list of things one might consider once in a while. Some people call these principles “the better manifesto” (although they are part of the original one), so take your time to consider them.
This activity covers the “gather data” part of a retrospective and shouldn’t take longer than 30 minutes. You can combine it with any activity for setting the stage, generating insights, deciding what to do, and closing the retrospective. If you need ideas on how to do that, either use the Retromat or read the book on Agile Retrospectives by Esther Derby and Diana Larsen.
* We found it useful to put a tag line on each card that is easier to read from a distance than the full text of the principle. It is important not to confuse the tag line with the principle itself; it just makes the card easier to handle (read and talk about).
No, not people; people aren’t resources, they’re people. I’m talking about our favorite books, websites, and blogs related to agile. Read on, and feel free to recommend additional sources in the comments.
These align nicely with Dan Pink’s autonomy (decentralized control), mastery (technical competence), and purpose (clarity of the mission). Thanks for the excellent analysis! I’ll be sharing this.
The funny thing is, I’ve probably watched the RSAnimate video of Dan Pink(@DanielPink – check it out below – well worth the time) more than 100 times since I often show it in training classes, and I’ve read the book (Drive) twice. I’m a big fan of Dan’s podcast (Office Hours) and his other books, and I still didn’t make that link! Thanks to Rob for connecting the dots for me between Turn the Ship Around (experience) and Drive (behavioral economics/neuroscience).
Here’s the Dan Pink TED Talk animated, just in case you haven’t seen it yet!
I recently finished former U.S. Navy submarine commander David Marquet’s book “Turn the Ship Around”. It is a powerful story of learning what leadership means, and of the struggles Marquet had putting that learning into practice as commander of the Los Angeles-class fast attack submarine USS Santa Fe (SSN 763). Marquet proposes that leadership should be defined as:
“Embedding the capacity for greatness in the people and practices of an organization, and decoupling it from the personality of the leader”.
The paradox is that more traditional leadership creates more unthinking followership; less top-down leadership creates more engaged leadership – at every level of an organization.
Commander Marquet with Stephen Covey aboard the Santa Fe
Leadership and productivity guru Stephen Covey took a tour of Marquet’s submarine in 2000, a couple of years into Marquet’s command, and reported that it was the most empowered organization he’d ever experienced, of any type, and wrote more about it in his book “The 8th Habit”.
The hyper-quick summary of Marquet’s approach involves three pillars: Control, Competence, and Clarity. These form the basis for what he calls “Leader-Leader” behavior, as opposed to the much more common “Leader-Follower” culture found in most organizations.
Marquet talks about shifting the psychological ownership of problems and solutions using a simple change in language. I’ll attempt to illustrate the evolution of leadership behavior through a series of conversations:
Traditional leader-follower pattern:
Captain: “Submerge the ship”
Subordinate: “Submerge the ship, aye”
To push Control down in the organization, Marquet began using the following speech pattern:
Captain: “What do you think we should do?”
Subordinate: “I think we should submerge the ship, sir”
Captain: “Then tell me you intend to do that”
Subordinate: “Captain, I intend to submerge the ship”
Captain: “Very well”
Giving control without an assurance of competence could lead to disaster on a nuclear submarine, and so over time, the pattern evolved to include an assurance of technical Competence, becoming:
Subordinate: “Captain, I intend to submerge the ship.”
Captain: “What do you think I’m concerned about?”
Subordinate: “You’re probably concerned about whether it’s safe to do so”
Captain: “Then convince me it’s safe”
Subordinate: “Captain, I intend to submerge the ship. All crew are below decks, the hatches are shut, the ship is rigged for dive, and we’ve checked the bottom depth.”
Captain: “Very Well”
The final evolution of the language added the third pillar – Clarity of mission, becoming:
Subordinate: “Captain, I intend to submerge the ship. All crew are below decks, the hatches are shut, the ship is rigged for dive, and we’ve checked the bottom depth.”
Captain: “Is it the right thing to do?”
Subordinate: “Yes sir, our mission requires that we submerge now in order to (classified reason (-: ) ”
Captain: “Very Well”
The book is highly engaging and I found it to be a fascinating model of leadership extremely well-tuned to the needs of leading complex organizations in the knowledge work era.
So, what does this have to do with Agile?
Empowerment is a core concept of agility, and specifically of the Scrum framework, but it can be a major challenge to get working well in organizations that lack decentralized control, assurance of competence, and clarity of mission. Marquet’s approach provides a simple pattern to follow in empowering teams.
Interestingly, empowerment is a term that Marquet dislikes, since it implies that individuals can only be “powerful” once power has been granted by a leader. His claim, and one that I agree with, is that all human beings are naturally powerful; they don’t need to be “empowered”. Rather, leaders simply need to remove the cultural norms and processes that are meant to exert control, which cause people to tune out and become disengaged. When the right leadership behaviors are in place, people will naturally bring their whole selves to their jobs. From a lean standpoint, such controls can be viewed as creating waste: people show up and go through the motions rather than devoting their creativity and energy to their jobs, and the lean leader’s job is to remove waste from the system.
Empowered Product Owners & Teams
Scrum is fundamentally based on the idea that a Product Owner is the single accountable person for setting the priorities of the team(s). Leaders can ensure that Product Owners have this accountability by using the “Intend To” language.
Product Owner: VP, I intend to move this new feature to the top of the Product Backlog and deprioritize this other feature that was in our original plan. Customer validation tests indicate that the new feature would increase retention of existing users by around 4%, more than any other feature we’ve tested, aligning with our highest priority goal for this quarter of increasing existing subscriber retention rates. The team has done some high level scoping and forecast that this feature would be completed within two sprints, a similar size to the feature that we’ll be cutting.
VP: Very Well
The leader gets what they really want: an assurance that the Product Owner is aware of the business concerns and has done the due diligence to address them. The Product Owner gets what they want: mentoring in what business leaders are most concerned about (a great career-development aspect of this approach), with the autonomy to meet the business need however they see fit.
Agile Leadership is the missing link for many organizations
Agile has had a major impact on some organizations’ ability to balance delighting customers, keeping people engaged at work, and delivering great business results. It has, however, struggled to make an impact in the many organizations where Cargo Cult Scrum, “Scrum-but”, and other half-hearted implementations of agile are the norm. The difference is in the leadership of these organizations. Where agile is seen as the latest trend, something the developers do, or a band-aid for some specific annoyance, it will have a marginal (if sometimes still positive) result. Where agile is viewed as a mindset for both teams and leaders, it can have a profound impact. Marquet’s book provides some simple rules that leaders can apply to start seeing that bigger impact of agile at the organizational level.
Check out the animated talk on this topic by Marquet:
Scrum teams conduct a Sprint Retrospective at the end of every Sprint to find ways to improve how they work together. It turns out that really effective retrospectives require specific conditions for improvement to happen. All too often, the team’s environment doesn’t support them in expending the effort to make real, substantial improvement. Here’s a simple test for a Scrum Master: “Is my team improving every sprint?” It sounds obvious, but it needs to be answered, and answered honestly. If the answer is “no”, or the answer is unclear, there are some simple things that can be done to help (well, some are not so simple).
Common Retrospective Patterns
First, I want to share some of the anti-patterns I have experienced with teams I’ve worked with:
There is already too much work in the sprint to make improvements.
The team rushes into problem solving right away without much discussion, leading to weak outcomes.
The same impediments are discussed at every retrospective.
The same activities/questions are used for each and every retrospective; they have become boring and repetitive.
The impediments that are discussed are too big for the team to solve.
The team has stopped having retrospectives.
There are many others, but this is a good list of the common retrospective anti-patterns I hear about when I work with teams.
To help alleviate some of these problems, here are 6 actions you can take:
3 actions to allow continuous improvement to happen and
3 actions that help retrospectives work better.
3 Actions to Allow Continuous Improvement to Happen
These 3 are the “not so simple” actions since they require a real change to how work is planned and who plans it. This requires a commitment from leaders to create an environment of continuous improvement.
Allow the team to determine their Sprint capacity.
I know many teams, including their managers, who say that the team determines their own capacity, but when you dig into it, other things are at work undermining the team’s autonomy in determining their capacity. Many teams are hesitant to push back on the Product Owner (PO) who is asking for a particular story in a particular Sprint, even though they don’t believe they can finish it. In fact, any time the team takes on work that they are not convinced they can get to “done” in a Sprint, that is a sign that they are not truly determining their own capacity. The Scrum Master plays a key role in helping the team agree only to the work they can complete and no more.
I often tell teams “you should be happy when Sprint planning is over, convinced you will get everything done by the end of the Sprint.” When I say this, I get puzzled looks or disbelief. If your team isn’t leaving Sprint Planning very excited, convinced they will succeed with their Sprint plan, this is a clear sign that they don’t have control over the amount of work they commit to.
Put all of the team’s work in the Product Backlog and protect the Sprint from interruptions.
I recently surveyed a large group at Adobe that has many teams, distributed around the world. One question I asked was whether they put all the work the team does in the Product Backlog, and the answer was mostly “yes”. I was very encouraged that they were making the team’s work visible. However, the results of the next question were very telling about what was really going on. “Is work added during the Sprint?” The answer was also a resounding “yes”! What was going on? I asked around and discovered that planned work went in the backlog, but many unplanned items were creeping into the Sprint from many directions: sometimes from the PO, sometimes from a manager, and sometimes from the business side. This is in direct contradiction to how Scrum is supposed to work. We make all the work visible by putting it in the Product Backlog, and we let teams focus on their Sprint Goal by not interrupting them. This also undermines item #1 above, since the team has no way to truly determine their capacity if they aren’t the ones pulling the work into the Sprint during Sprint Planning.
In a talk I attended some years back, Jeff Sutherland, the co-creator of Scrum, stated that one of the primary reasons for having uninterrupted periods of work (Sprints) was to stop the thrashing that so often goes on within teams and derails their focus and productivity. Teams that can really focus become much more productive, not to mention happier and more fulfilled in their work. If you want to understand this concept more, check out this TED talk by Mihaly Csikszentmihalyi on Flow: http://www.ted.com/talks/mihaly_csikszentmihalyi_on_flow and also The Progress Principle by Teresa Amabile and Steven Kramer.
The team has permission to add work to the Sprint to make improvements.
I regularly facilitate retrospectives, and when we get to the part where the team decides what to improve in the next Sprint, they often express concern that there is no time because they have too much work. How do they know how much work the next Sprint holds if they haven’t done Sprint Planning yet? This concerns me because in Scrum, self-organizing teams determine what gets into the Sprint Backlog, so they should have direct control over that.
Also, if leadership has not given the team permission to spend time improving, then when will it happen? Later? In the competitive world we live in, there is no “later”. There will always be very urgent and important features to implement. Improvement is an investment you make to go faster later. If you don’t pay it and your competitors do… well, you know what happens.
Here is a simple agreement that teams and leaders should discuss: every Sprint, the team puts one improvement at the top of the Sprint Backlog before any product work is planned. This reinforces its importance and ensures there is time to make it happen.
3 Actions that Help Retrospectives Work Better
All too frequently, I encounter frustration with retrospectives from teams who hold them regularly but have found them unsatisfying. What I often discover, after discussing how they conduct them, is a lack of the experience and skills required to do them well. For my first 3 years of implementing Scrum, I followed the usual retrospective pattern and asked the following 2 questions:
What went well?
What do we want to improve?
After a while, these retrospectives become mind-numbingly boring. The stubborn keep at it and the disenchanted abandon them altogether. I was one of the stubborn ones 😉 Others learn a few basic activities like sailboat or +/delta and then repeat them, over and over again, every retrospective.
I had the wonderful privilege of learning retrospective facilitation from Diana Larsen, one of the co-authors of the Agile Retrospectives book. That class completely changed how I saw retrospectives, and when I put the techniques I learned into practice, I saw much better outcomes.
Here are 3 things you can try to improve your retrospectives:
Follow the retrospective pattern.
I won’t go into too much detail about the pattern Diana Larsen and Esther Derby define for good retrospectives; you can read it yourself in their book. I will focus on the most important step, so you can see why the basic “two-question” approach lacks effectiveness. Here is the basic pattern they define:
Set the Stage
Gather Data
Generate Insights
Decide What To Do
Close the Retrospective
The “two-question” approach mentioned above consists of two questions that belong in the “Generate Insights” portion of the retrospective. Diving straight into them skips right past “Set the Stage” and “Gather Data”.
What exactly is “Gather Data” then? This is where the team answers the “What?” question. In other words, “What happened in the previous Sprint?” Not “What do we think about what happened?” or “What do we want to do about it?”; just the facts, the events, the good, the bad, and the indifferent. Some examples might be: the build broke in the first week, bug counts went up, Sue got sick, Jayanth and Alexei paired on a really difficult story, or whatever. Just record the things that happened.
You should spend the bulk of the time (~40%) on this step. Why so much? In order for a group of people to come to a common understanding of what happened and to make good decisions about what to do next, they need time. Lots of it. Spend the time on this part and the next two stages move more quickly. This also means you need to make sure your retrospectives are long enough: 75-90 minutes is a minimum for good improvements to emerge. Don’t believe me? Try it and find out for yourself.
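As a rough planning aid, the phase weights can be turned into minutes for a given retrospective length. Only the ~40% for Gather Data comes from the text above; the other percentages are my own assumption of a plausible split:

```python
def retro_schedule(total_minutes=90):
    """Split a retrospective into the five phases.

    Only the ~40% on Gather Data is from the article; the remaining
    weights are assumed for illustration. Integer division may leave
    a minute or two of slack for the facilitator.
    """
    weights = {  # percentage of the total timebox per phase
        "Set the Stage": 10,
        "Gather Data": 40,       # the bulk of the time, per the article
        "Generate Insights": 25,
        "Decide What To Do": 15,
        "Close the Retrospective": 10,
    }
    return {phase: total_minutes * pct // 100 for phase, pct in weights.items()}

print(retro_schedule(90))  # Gather Data gets 36 of the 90 minutes
```

Running it for a 75-minute retrospective (the lower end of the recommended range) still leaves 30 minutes for Gather Data, which is a useful check before trimming the meeting any shorter.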
Mix things up.
At Diana Larsen’s urging, I went on to become a Human Systems Dynamics Professional with the Human Systems Dynamics Institute (HSD), led by Glenda Eoyang. The organizational change work they do is rooted in the idea of human systems as complex systems, and they have learned from complexity science how attractors work in such systems. There are a number of attractor types in complex systems; one is the periodic attractor. Scrum is a periodic attractor because of the iterative nature of Sprints: every two weeks (or so), we review, we plan, and we work. There is a danger with periodic attractors, though. In Coping with Chaos, Glenda Eoyang writes that periodic attractors build resistance. When I read that, my head nearly exploded. Does this mean that holding regular retrospectives can cause resistance to change? Then it dawned on me why the Derby and Larsen book recommends changing the retrospective activities on a regular basis: to mix things up a bit. The analogy that came to mind is of someone rubbing the same spot on their arm: after a while, they don’t feel anything. Doing the same thing for every retrospective, over and over again, builds resistance in people. They become immune to taking real action and getting to the heart of their problems. They go through the motions and get out of the meeting as quickly as possible. Groups like these often rate retrospectives as their least favorite Scrum meeting.
(Aside: HSD has an activity called “Adaptive Action” which is virtually identical to retrospectives, following the same basic pattern, albeit with different terms.)
A good Scrum Master will seek out training and other resources for retrospective activities to continuously improve their facilitation. As with all things agile, we continuously seek ways to improve our implementation of agile methods. A good starting place is the Agile Retrospectives book, but there are other resources you can draw on, such as the Retromat, Tasty Cupcakes, the Retrospective Wiki, and many others. I strongly suggest learning the pattern and the purpose of each step first; the other resources can then fill in when you are designing your retrospectives.
Focus on one improvement at a time.
In fast-paced, complex work, there will always be problems uncovered, issues to resolve, and improvements to be made. It is the very nature of the work itself. During retrospectives, teams will regularly identify many issues; in fact, the list can be quite long and daunting, making it seem as though nothing will ever improve. A very common pattern is that teams choose a long list of actions, sometimes 5 or even 10. Sadly, because of time pressure, many of these actions are not taken, often none at all. At the next retrospective, the team finds themselves talking about the same issues, Sprint after Sprint. This can really demoralize a team that is struggling to improve.
The concept of “small batch size”, familiar from Eliyahu M. Goldratt’s Theory of Constraints, teaches us that small batches move faster through a system and keep work in progress from becoming a bottleneck; here, I am referring to the speed of improvement. Small incremental improvements are also easier to focus on.
As an experiment, for your next few retrospectives, choose only one improvement. If you want, you can place all the other actions in an “improvement backlog”, but commit to only one action for the next Sprint. Most importantly, make it actionable by being very explicit about what exactly the team is committing to try. Don’t commit to things like “the team will write more unit tests” or anything equally fuzzy.
Answer these 5 questions to make a weak retrospective outcome stronger:
What? – identify exactly what experiment you are going to run (e.g. Dave and Jo will pair program on the riskiest story in the next Sprint)
Why? – state the hypothesis of why this experiment can be an improvement (e.g. Pair programming will reduce the risk of error)
Who? – what single individual will ensure that this happens (e.g. Jo will own this action)
When? – what date will this be done by (e.g. Jo will invite the team to a meeting at the end of the first week)
How? – how will the team know that this happened (e.g. Jo will demo all the passing tests and discuss other technical details at the meeting)
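One lightweight way to hold an improvement to this shape is to record it as a small structured record with exactly those five fields. This is a sketch, not anything from the book; the field values below reuse the examples from the list above:

```python
from dataclasses import dataclass

@dataclass
class ImprovementExperiment:
    """The single improvement a team commits to for the next Sprint."""
    what: str  # the exact experiment to run
    why: str   # the hypothesis behind it
    who: str   # the single individual who owns it
    when: str  # the date it will be done by
    how: str   # how the team will know it happened

experiment = ImprovementExperiment(
    what="Dave and Jo pair program on the riskiest story in the next Sprint",
    why="Pair programming will reduce the risk of error",
    who="Jo",
    when="End of the first week of the Sprint",
    how="Jo demos all the passing tests to the team at the review meeting",
)
print(experiment.who)  # every experiment has exactly one owner
```

Because the record cannot be created without all five fields, a vague commitment like “write more unit tests” fails the template immediately: it has no who, when, or how.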
Some will recognize that this is very similar to SMART goals. I prefer this set though because they are easier to remember (for me at least) and they make it clear that this is an experiment, i.e. it can fail, too! We learn the most from those experiments.
Choosing only one action increases the chance that it will actually get done, and over time an accumulation of single actions adds up to a lot of improvement. Choosing only one improvement also aligns very well with one of the 5 Scrum values: Focus.
Leaders who make it clear that investing in continuous improvement is extremely important and not optional enable their teams to creatively accelerate their work and increase speed to value. Scrum Masters who invest in improving their retrospective facilitation skills will increase the speed at which the team learns.
Lastly, since you read all the way down to the bottom, here is a funny video from the Netherlands on improving your retrospectives:
So I started thinking about a recent conversation I had with someone regarding discipline in the adoption of Scrum within a large group of 5-7 teams. As we discussed her observations of the teams’ current condition, she pointed out that these teams valued autonomy; one expression of this was that they had freely established their own sprint cadences and were unlikely to give them up. This came up in the context of my suggestion that it would be valuable to establish a single sprint cadence across the group, whether mapped out across two or more weeks, since it would remove all the wasteful activities needed to synchronize across these teams.
Steven Johnson, author of How We Got to Now, was being interviewed by Jon Stewart. He explained that his book is about the history of inventions: a look back at how we arrived at outcomes we see today and often take for granted. He points out that clean water, which for most of us is the simple act of filling a glass at the faucet, is built on top of hundreds of inventions that preceded it. He then pointed out that we wouldn’t be able to TiVo shows like his interview had it not been for the simple invention of standard time.
It turns out that until 1918, every town in America defined its own time standard; there was no concept of standard time across the U.S.A. Would you believe that if it weren’t for the railway system, we probably wouldn’t have seen the need for standard time at all? Even the railroads didn’t adopt a common standard until 1883; before that, every railroad had its own time standard, not to mention every town on the line defining its own to complicate the simple matter.
People regularly lament about meetings. I am guilty of it myself: I used to cynically and sarcastically say, after a long and fruitless meeting, “Meeting rhymes with beating!” Many would nod their heads in agreement. “There are too many meetings” is a common refrain at many companies. Interestingly, I often hear this after groups have adopted Scrum: “Why are there so many meetings in Scrum?” “Isn’t Scrum supposed to be lightweight? Agile?” Let’s quickly review the meetings Scrum defines and then see what happens in practice for many teams, especially those starting out with Scrum.
Sprint Planning Meeting
Purpose: The Development Team commits to work they plan to get to the Definition of Done by the end of the Sprint (the Team’s Sprint Goal).
Outcome: A solid Team commitment to a Sprint Goal represented by their Sprint Backlog (containing all the Product Backlog Items (PBIs) and, possibly, task breakdown).
Duration: 2 hours per week of sprint, often less, as teams mature and improve.
When: At the start of a Sprint (after the Sprint Review and the Sprint Retrospective).
Who: Development Team, Product Owner should be available to clarify PBIs, Scrum Master facilitates.
Daily Scrum
Purpose: A daily meeting for the Development Team to inspect and adapt how best to achieve their Sprint Goal.
Outcome: Impediments get surfaced and improvements for the day’s work are agreed upon.
Duration: 15 minutes per day.
When: Every day of the Sprint.
Who: Development Team, others can attend but only listen, Scrum Master facilitates.
Product Backlog Refinement Meeting
Purpose: The Product Owner helps the Development Team understand the work coming in the next Sprint or two. Sometimes this involves writing acceptance criteria, slicing items smaller, sizing, estimating, or anything else that helps the team prepare for the next Sprint Planning.
Outcome: Upcoming PBIs are ready for the upcoming Sprint Planning Meeting.
Duration: Varies by team but is often 1-2 hours per week of the Sprint.
When: Varies, but preferably not immediately followed by Sprint Planning. Try to allow a day or two in between these meetings.
Who: Development Team and Product Owner, anyone else who can increase the Team’s understanding, Scrum Master facilitates.
Sprint Review
Purpose: The Development Team demonstrates the PBIs they believe have achieved the Definition of Done to the Product Owner and other stakeholders.
Outcome: Feedback for the Development Team on what they built, often resulting in the generation of new PBIs. Also, a broadly understood view of current progress.
Duration: One hour per week of Sprint, but can vary depending on the number of teams demoing and how big the audience is.
When: At the end of the Sprint, before the Sprint Retrospective and the Sprint Planning Meeting.
Who: Development Team, Product Owner, generally, anyone else can attend, especially desired are stakeholders, Scrum Master facilitates.
Sprint Retrospective
Purpose: The Development Team gets together to examine how they worked together during the previous Sprint and what they will try in the next Sprint to improve.
Outcome: One specific experiment the Development Team will put at the top of their next Sprint Backlog.
Duration: One hour per week of Sprint.
When: At the end of the Sprint, after the Sprint Review and before the Sprint Planning Meeting.
Who: Development Team, Scrum Master facilitates.
There are many variations on the above descriptions, but I think of these as a good starting point. Even if I customize the process, I don’t lose sight of the purpose, the “why”, of each meeting.
The Meetings of a Typical 2-Week Sprint
Special thanks to ICAgile and Ahmed Sidky for the inspiration for these two graphs.
If you add up all the meeting time versus the time for building the product or service (or “doing agile” versus “being agile”), you get a ratio of building time to meeting time larger than 6:1. That looks very lightweight to me. Where, then, does the complaint about too many meetings come from?
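As a quick sanity check, the stated durations can be added up for a two-week Sprint of ten 8-hour working days. The exact per-meeting numbers below are my own assumptions at the leaner end of the ranges listed above:

```python
# Meeting hours for a 2-week Sprint, using the leaner end of the
# durations listed above (these exact numbers are assumptions).
meetings = {
    "Sprint Planning": 3.0,       # "2 hours per week of sprint, often less"
    "Daily Scrum": 10 * 0.25,     # 15 minutes x 10 working days
    "Backlog Refinement": 2.0,    # "1-2 hours per week"
    "Sprint Review": 2.0,         # one hour per week of Sprint
    "Sprint Retrospective": 2.0,  # one hour per week of Sprint
}

total_hours = 10 * 8  # ten 8-hour working days
meeting_hours = sum(meetings.values())
building_hours = total_hours - meeting_hours

print(f"{meeting_hours}h in meetings, {building_hours}h building")
print(f"ratio ~ {building_hours / meeting_hours:.1f}:1")  # ~6.0:1
```

With heavier assumptions (full 2-hour-per-week planning and refinement) the ratio drops closer to 5:1, so the exact figure depends on how mature the team's meetings are; either way, meetings are a small fraction of the Sprint.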
Common Scrum Meeting Anti-Patterns
There are many symptoms of dysfunctional Scrum meetings. Here are just a few.
Symptom: Planning takes too long
When Teams start out, this meeting can seem to take forever, wearing the Team down until they agree to any amount of work. My first Sprint Planning meeting dragged on for three days!
Cause: Poorly refined Product Backlog
A Product Backlog that is not well-refined will not be well-understood by the Team. They will need time to gain enough understanding to make a solid commitment to the Sprint Goal, and this refinement ends up taking most, if not all, of the planning time. Time usually runs out and a rushed Sprint Goal is created. So much work gets packed into the Sprint that there is no time during the Sprint to refine the Backlog for the next one, perpetuating the problem. Sprint Goals in this circumstance are rarely reached.
Cure: Weekly Product Backlog Refinement Meetings
Make a serious commitment to a timeboxed, Product Backlog Refinement meeting every week. Make sure the right people are in the room. Ensure that the goal of this meeting is to be ready for the next Sprint Planning.
Symptom: The Daily Scrum lasts longer than 15 minutes
Cause: Poor meeting facilitation and/or weak team agreements
This meeting is truly for the Team to synchronize and coordinate their efforts for the day so that the Team can be as effective as possible. But, other items leak in like reporting status to a leader, deep-diving on problem solving, staff meeting agendas, etc.
Instead of trying to describe what needs to happen, let Jeff Sutherland, the co-founder of Scrum, describe the Daily Scrum in a two-minute video:
Notice he mentions the type of agreements Teams can have, e.g. “We can talk about one issue for no more than 60 seconds…” When Teams hold each other accountable to their own agreements, Daily Scrums become much more effective and dynamic, and much waste is eliminated.
Symptom: Team demos unfinished, non-shippable, product increments
The Team shows everything they have worked on, whether it meets the Definition of Done or not.
Cause: Not focusing on the Agile principle of “working software as the primary measure of progress”
Many traditional project management views about reporting progress are rooted in the idea that you can estimate the percentage of completion of work. Unfortunately, reporting that something is 50% finished lacks the information needed for decision making. It makes it difficult to answer the “Are we ready to ship?” question. Also, people want to be recognized for the effort spent on the work to date, so they hesitate to leave anything out.
Cure: Only demo “potentially releasable product increments”
By only demoing the PBIs that meet the Definition of Done, it is very clear to everyone in attendance what real progress has been made. A decision to ship is now much easier to make. This gives the organization real business agility. Teams improve from rarely completing anything in their Sprint Goal to completing nearly everything after making this change.
Symptom: Too many improvements to make
When presented with the real challenges facing the Team, they will regularly want to solve many problems right away. They often will commit to too many improvements.
Cause: Not strictly prioritizing improvements
One of the most difficult activities is deciding what is not going to get done. Being crystal clear on what is most important is difficult when faced with so many choices.
Cure: Choose one actionable improvement per Sprint
By only choosing one experiment to try in the next Sprint and making it specific and actionable, we increase the odds of actually making improvements. It gives Teams something to reflect on during the next Retrospective. Over time, repeatedly making small improvements will add up to many significant improvements for the Team willing to learn and experiment.
Good meeting facilitation skills are critical for keeping meetings focused on their purpose. Training and supporting Scrum Masters to develop these skills has invaluable benefit to the Team and the organization as a whole.
While there are many other dysfunctions, I’ve highlighted the ones I hear about most often. If there is enough interest, I’ll add more.
When it comes to estimation in software development, the difficult questions to answer are “When can we go to market?” and “When can we release the product or service?” These are difficult questions, from the business perspective as well as the development perspective.
There has been a lot of conversation in the agile community around User Stories and the use of Story Points to size product backlog items. What is often lost when teams adopt this technique is the value of team heuristics, that is, team experience, and the importance of relative sizing. Put together and followed with some discipline, these give the team the ability to forecast: over discrete periods of time (sprints), to express an ever more accurate answer to the questions:
How much by this date?
All this by when?
If no one is asking these types of questions at the end of each sprint, then you may be in danger of falling into the cargo-cult Scrum trap, as in following the ritual of sizing product backlog items for ritual’s sake.
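Both questions can be answered from the team’s own historical velocity. The sketch below is a minimal illustration; the velocities and backlog size are invented numbers, not data from any real team.

```python
# Minimal velocity-based forecast. All numbers are illustrative.
import math

past_velocities = [21, 18, 24, 20]  # story points completed per sprint
remaining_points = 120              # points left in the product backlog

avg_velocity = sum(past_velocities) / len(past_velocities)

# "All this by when?" -- sprints needed to burn down the backlog
sprints_needed = math.ceil(remaining_points / avg_velocity)

# "How much by this date?" -- points likely done in, say, 3 sprints
points_in_3_sprints = 3 * avg_velocity

print(f"~{sprints_needed} sprints to finish everything")
print(f"~{points_in_3_sprints:.0f} points done after 3 sprints")
```

The forecast only gets more accurate as the team’s velocity history grows, which is exactly the discipline the sizing ritual is supposed to serve.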
So what does heuristics really mean?
In a nutshell, heuristics refers to experience-based techniques for problem solving, learning, and discovery: a good-enough approach to finding an answer where the perfect answer is not knowable or would be too expensive to learn.
So, if you are familiar with how most sciences work, you have probably encountered the use of heuristics before. For instance, rather than predicting exact ratios of chemicals for a reaction, chemists use heuristics as guidelines for predicting how various chemicals will react. Similarly, in engineering, aircraft designers use coefficients of lift and drag as heuristics to point them in the right direction, with the design then refined empirically based on experimental evidence.
In essence, to answer either of the questions stated above, there is a heuristic alternative to the careful reasoning that occurs in the long cycles of a phased approach of gathering requirements, analysis, design, development, and test. This heuristic alternative works well for estimation, in spite of sometimes serious errors. The key is to ensure the errors are bounded across short time intervals, with frequent pauses to assess the outcomes and inform what to do next. Of course, Scrum provides a natural cycle of pauses at the end of each timebox, where the team can take stock of the outcomes and compare the steps they actually took against the steps they assumed they would take: essentially, the Sprint Review and the end-of-sprint team Retrospective.
First Order Estimation
So what is first order estimation? It is something we all do naturally and effortlessly, whether driving a car, riding a bike, or walking in a busy mall; otherwise we would surely be running into each other most of the time. When is the last time you got out a tape measure when changing lanes or overtaking on a freeway, to ensure you had the most accurate measurement of distance or speed?
In essence, the first order estimation I refer to is a relative measure, in this case of work items (user stories) relative to one another. To me it is similar to the coefficients of lift and drag considered in aircraft or even car design. It’s just that in this case the estimates can be thought of as the team’s coefficients: they correspond to the team that is going to do the work, based on their perspective on the product backlog items, as opposed to anyone else’s.
There are teams that only look at their team coefficients, user story points, when planning a sprint. This may well be all that is needed for planning, but in truth that is only the case for teams that have been together for a long time, have experience working with each other, and have established team heuristics, their rules of thumb.
Teams that are new, or new to working as a Scrum team, often miss the need to assess team capacity at a more granular level of planning, one that breaks a product backlog item (user story) into the actual tasks that need to be performed to get to done, before making team commitments. So it is best for a team to be mindful and disciplined about developing team heuristics, and to continue relying on the wisdom of the crowd to decipher difficult problems, including estimation.
To do this, start with a stable team, one that takes a disciplined approach to its Scrum ritual meetings during the sprint. This includes refining backlog items and/or sizing them using story points, as well as planning by defining tasks along with task-level time estimates, as this is what shapes the team’s commitment to the plan that appears in the form of a sprint backlog.
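The task-level capacity check described above can be sketched as simple arithmetic. Everything in this snippet is an illustrative assumption (team size, focus factor, task estimates), not a prescription.

```python
# Rough capacity sanity check before committing to a Sprint Goal.
# All numbers are illustrative assumptions.
team_size = 5
sprint_days = 10
focus_hours_per_day = 5.5  # assumed focused hours left after meetings etc.

capacity_hours = team_size * sprint_days * focus_hours_per_day

# Task-level estimates (hours) from breaking candidate stories into tasks
task_estimates = [8, 4, 6, 12, 3, 5, 16, 6, 4, 10,
                  8, 6, 12, 4, 20, 6, 8, 5, 4, 9]
planned_hours = sum(task_estimates)

print(f"capacity: {capacity_hours:.0f}h, planned: {planned_hours}h")
if planned_hours > capacity_hours:
    print("over capacity -- drop or split stories before committing")
else:
    print("within capacity")
```

The point is not precision; it is that a new team surfaces hidden work by enumerating tasks, rather than committing on story points alone.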
Daniel Kahneman, in his book Thinking, Fast and Slow, points to how judgement happens. While it is a complex function of the brain, what he describes is what psychologists have to offer based on their observations, supported by what neurologists tell us about the functioning parts of the brain. In essence, we have evolved with two modes of thinking, called System 1 and System 2, terms coined by psychologists Keith Stanovich and Richard West.
System 1, evolutionarily the oldest, the part responsible for fight or flight and ingrained in our survival instinct, operates continuously and generates assessments of various aspects of the situation with little or no effort. These basic assessments play a vital role in intuitive judgement, as they routinely substitute for the more difficult question being asked of System 2; this is the essence of heuristics.
System 2 thinking, by contrast, considers, only if it must, the difficult-to-answer questions that System 1 doesn’t readily offer an answer to, and it triggers many other computations, including basic assessments, acting as a “mental shotgun” according to Kahneman. He points out that the word heuristics comes from the same root as the word eureka, and is technically defined as a simple procedure that helps find adequate, though imperfect, answers to difficult questions.
Recently I’ve been hearing people use the term technical debt to describe all sorts of things that are related to system improvement. However, used properly, technical debt is not a catch-all phrase for system improvement work, but a subset of that work.
As Ward Cunningham, who coined the metaphor, put it:
“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.”
To better understand technical debt, let’s explore the analogy to financial debt.
Four out of five people buying a home in the United States in 2013 took out a mortgage to do so. For the most common type of mortgage (30 year fixed with 20% down) on the average home price in the U.S. (~$200k), the total cost of the loan is 175% of the cost of just buying it outright. That’s right, for a $200,000 house, you’d pay about $350,000 over the life of the loan, or $150,000 just in interest. Why would anyone do that? Of course the answer is simple – it might take you years to accumulate the cash before you could own a home, so people make a decision to accumulate debt in order to get the home sooner, knowing full well that they’ll need to pay more in the long run for that early entry. They are trading an advantage (early entry) for a disadvantage (higher total cost).
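The figures above check out with the standard fixed-rate amortization formula. The 5% annual rate below is my own assumption, chosen as a plausible 2013-era 30-year rate that reproduces the article’s round numbers.

```python
# Verifying the mortgage figures with the standard fixed-rate
# amortization formula. The 5% annual rate is an assumption.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

home_price = 200_000
down = 0.20 * home_price   # $40,000 down payment
loan = home_price - down   # $160,000 financed

pay = monthly_payment(loan, 0.05, 30)
total_cost = down + pay * 360  # down payment + all monthly payments
interest = pay * 360 - loan

print(f"monthly payment: ${pay:,.0f}")
print(f"total cost: ${total_cost:,.0f} ({total_cost / home_price:.0%} of price)")
print(f"interest paid: ${interest:,.0f}")
```

At 5%, the monthly payment is about $859, the total outlay about $349,000 (roughly 175% of the purchase price), and the interest about $149,000, matching the figures in the text.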
Technical debt, then, is the conscious choice to get to market faster by skipping some steps required for long term code sustainability. Just like financial debt, we know that in the long term it is more expensive (we’ll need to pay interest on top of the principal), but we do it because the advantage of getting to market sooner outweighs that cost. And just like financial debt, we need to create a budget for paying it back down.
Are All Bugs Technical Debt?
Let’s use three example defects to answer this question.
Defect A is an error that was reported by a customer a few months after a release. It happens when a specific workflow is followed that the team didn’t anticipate, and causes the program to freeze. The team was using good automated testing, but it didn’t catch this one due to the unusual customer workflow.
Defect B is an error that was discovered in a regression test, and was determined to not be a “release-blocking” bug – in other words, we knew about it but decided to release anyway. Now customers are complaining about the bug and the team decides to go ahead and fix it.
Defect C is a display problem that happens on a new version of a browser that was released after our software was released. It didn’t occur in previous versions of the browser.
In this example, only Defect B is technical debt; Defects A and C are not, because the business never made a conscious choice to ship with those defects. In any moderately full-featured software product, the level of complexity involved results in some defects making it through to customers. Some of those may have been known prior to shipping and others weren’t. If we didn’t consciously choose to release with the bug, or weren’t consciously skipping defect prevention steps in order to get out the door sooner, these are just defects, not technical debt.
Is Refactoring Technical Debt Reduction?
As in the case of defects, it all depends on whether we were making conscious trade-offs on architectural and design approaches in order to release sooner. If we know that there is an area of the code that is a mine field, and no one wants to touch it (except for that one coder that’s an expert), but we chose not to refactor due to the amount of time & effort involved, then we have technical debt. If, on the other hand, new code has been added, the system is becoming more complex, and we just need to do some refactoring as part of the standard craft of software development, that type of refactoring is not reducing technical debt, it is simply the good practice of continuous system improvement. Think of it as entropy reduction, not technical debt reduction.
Lack of robust test automation is probably one of the most common instances of technical debt. In the craft of software development, using automated tests is equivalent to a surgeon counting the sponges prior to a surgery to make sure we don’t leave anything in the body of the patient. If we’re not doing it, we are really being irresponsible. Now, I’m not stating that we need 100% code coverage for unit tests, or 100% functional coverage for automated regression tests – there is a point of diminishing returns. Again, it comes down to a case of intention – do we want much better coverage but just don’t have time to do it? If we don’t have time, that is just another way of saying it is not as high a priority as shipping the feature. In other words, we are consciously choosing to get to market faster by skipping automation that the team thinks would be helpful. In such a case, we are creating technical debt.
Technical Debt reduction is one important category of system improvement. To lump all system improvement under the banner of technical debt does us a disservice because it seems to imply that any problems or inefficiencies in the code were just a conscious choice by the team. That is not the case. Sometimes we make decisions to ship a less than optimal product in order to get earlier feedback or a market advantage. Even if we aren’t doing that, there will be ongoing improvement required. Separating out Technical Debt as a specific category helps us acknowledge the prioritization decisions we’re making regarding quality vs. speed, and watering that term down by lumping it together with everything else muddies the waters and can lead to disengagement by the team that didn’t “get it right the first time”, an impossible task in a complex domain.
It is critical that Technical Debt be paid down as soon as possible. It follows the rules of compound interest: the longer we wait to pay it off, the more it accrues, and if it is never paid off, it eventually leads to a bankrupt code base that simply needs to be abandoned and rewritten from scratch, an extremely costly result. Any conscious choice to accumulate technical debt needs to come with a payment plan to avoid these risks. Lumping ongoing system improvement into the same category makes that payment plan much more difficult to create.
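The compound-interest dynamic can be made concrete with a toy model. The numbers below are purely illustrative assumptions (there is no empirical 8%-per-sprint figure); the point is only the shape of the curve.

```python
# Illustrative (not empirical) model of how unpaid technical debt
# compounds: each sprint, workarounds layered on not-quite-right code
# add a percentage of "interest" to the cost of the eventual cleanup.
initial_debt = 10.0   # days of cleanup work if paid off immediately
interest_rate = 0.08  # assumed 8% growth in cleanup cost per sprint

for sprint in (1, 5, 10, 20):
    cost = initial_debt * (1 + interest_rate) ** sprint
    print(f"after sprint {sprint:2d}: ~{cost:.1f} days to pay it off")
```

Under these assumptions, a 10-day cleanup left unpaid for 20 sprints balloons to more than 45 days of work, which is the “bankrupt code base” trajectory in miniature.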