Positive Incline
Mike Burrows (@asplake): moving on up, positively

August 4, 2011

Kanban in its portfolio context (idea for LESS 2011)

Much as I love hands-on development, I can’t help but bring a manager’s perspective to Kanban.  When I see situations suffering with the all-too-common “massive WIP” problem (usually coupled with slow delivery and shared bottlenecks), my attention turns very quickly from team level to the management support systems that are failing to bring high level control to the overall burden of work that teams are expected to deal with.

Hence a growing interest in the field of portfolio management.  It’s not that large organisations don’t have the systems (I’ve had to provide data to enough of them!); it’s just that they’re so focussed on accounting for the past that they have little influence on future delivery.

Here are just some of the things that I believe a good portfolio management system should offer, and some pointers to how they might be turned into levers for improvement.

Point-in-time financial measures

1. “Inventory”: Money spent on work that has not yet been released.  Accountants might call this “Work in progress” (WIP) instead, but to the Kanban community that term usually means the number of items under development.

2. “Required” (my word, perhaps there’s an accepted term): Further money to be spent before work currently in progress will generate value to the business.  Inventory + Required make up the total expected cost of building and releasing something.

These are point-in-time measures that can be trended over time and projected into the future.
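To make the definitions concrete, here is a minimal sketch in Python; the project names, figures and field names are all invented for illustration:

```python
# Invented figures, purely illustrative.
# "spent"          = money already sunk into work that has not yet been released (inventory)
# "expected_total" = expected total cost of building and releasing that work

projects = [
    {"name": "A", "spent": 250_000, "expected_total": 400_000},
    {"name": "B", "spent": 90_000,  "expected_total": 120_000},
]

inventory = sum(p["spent"] for p in projects)                        # 340,000
required  = sum(p["expected_total"] - p["spent"] for p in projects)  # 180,000

print(f"Inventory: {inventory}")
print(f"Required:  {required}")
print(f"Inventory + Required = total expected cost to release: {inventory + required}")
```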

Central to Lean and Kanban is the belief that managing down inventory (whether measured in items or in money terms) goes hand-in-hand with improving flow, and from improved flow we should expect to see growth in business capability and value.

But money spent stays spent. The only way to reduce inventory in the short term is to make releases, which means spending yet more money! The key to reducing inventory in the longer term is to plan (or replan) to release more incrementally, reflected in reductions in the “Required” measure.  Implicitly or explicitly, to achieve this across the board requires changes in policy (e.g. limits, risk appetite) &/or practice (e.g. how work is structured).

Rate-based financial measures

3. “Burn Rate”: How much money spent per month on the project or portfolio in question.

4. “Throughput” (again, there may be a better word for it): Completed work released (out of inventory) per month.

Like the point-in-time measures, these can be trended over time and projected into the future.

A gap between current burn rates and projected burn rates could be a sign of trouble, though the direction of the gap is crucial.  If to meet our commitments we must spend at a rate significantly greater than our current capability (i.e. because we can’t ramp up quickly enough), we have an overcommitment problem.  If the gap is in the other direction, it means we’ve kept our options open, a good thing so long as the result isn’t needless starvation caused by a lack of preparation.

Assuming that all projects survive until completion, the long-term averages of throughput and burn rate will be equal.  High month-by-month variability of throughput could however be seen as an indication of a lack of flow.  It could even be an impediment in itself if (say) a business function is faced with a short-term rate of change that it is unable or unwilling to sustain.
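Again a minimal sketch with invented monthly figures, just to show how the rate-based measures relate to each other:

```python
# Invented figures (in, say, £k per month), purely illustrative.
monthly_spend    = [100, 110, 105, 120, 115, 108]   # burn rate: money spent each month
monthly_released = [0,   0,   180, 0,   250, 90]    # throughput: cost of work released each month

avg_burn       = sum(monthly_spend) / len(monthly_spend)
avg_throughput = sum(monthly_released) / len(monthly_released)

print(f"Average burn rate:  {avg_burn:.0f}k/month")
print(f"Average throughput: {avg_throughput:.0f}k/month")

# Over the long run (all projects surviving to completion) the two averages converge;
# lumpy month-to-month throughput like the series above is one symptom of poor flow.
```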

Non-financial measures

5. Headcount: Self-explanatory, often tracked alongside financial measures.  Ramping headcount up or down can be painful (and I say that from the heart!).

6. Work items: Features etc as managed in Kanban systems.  Slightly problematic though – whilst it clearly makes sense to track features at project level, can we be sure at portfolio level that one project’s work items (let alone their states) are comparable with another’s?  Although they don’t flow very fast, projects can of course be treated as work items too (and worth limiting in number as well as financially).

7. Lead times: How long projects take, based on actuals from completed projects, planned dates, or estimates based on budgets and burn rates.  It hardly needs to be said that shorter is generally better.

Reporting dimensions

8. Initiatives: The “why” behind the work; used as a reporting dimension it shows how effort aligns to strategy. Too many of these may indicate a lack of focus or alignment.

9. Organisation/sponsor/funding source/customer/market segment: Dimensions describing by whom and for whom work is done.

10. Classes of service: see some of my previous articles.  These help us check that we’re investing sustainably and that our development systems are robust.

Where next

I’m considering attending the LESS 2011 conference in Stockholm in late October.  What would absolutely make me go is the thought of exploring the boundaries between Kanban and surrounding systems (portfolio management and other existing support systems that might be turned to create “pull” for positive change) with like-minded people active collectively across these diverse areas (quoting the conference website):

  • Lean and Agile Product Development
  • Complexity and Systems Thinking
  • Beyond Budgeting
  • Transforming Organizations

But I can’t make this happen on my own.  Who else would be up for it?

Or you may have a portfolio problem (perhaps a “massive WIP” problem) of your own.  Get in touch!

July 28, 2011

A funny thing happened to my ROI

Filed under: Kanban, lean, Portfolio, Project Management — Mike @ 4:56 pm

Over time, exciting things happen to ROI when you remember to ignore sunk costs.

Consider three projects with lead times of 12, 6 and 3 months respectively, each having an ROI of 33% (no IRR calculation here, just a simple payback of 1.33 for every 1 invested).  Comparable, right?

For the sake of simplicity, let’s assume (i) that burn rates are constant on each project, and (ii) that any uncertainty in the ROI figure derives entirely from the payback element and not from the cost part.

What do these projects look like after a month of progress?

The 12 month project is now an 11 month project.  For a further investment of just 11 months’ worth of work we will get the original payback (1.33 times the 12 month cost), giving a return of about 45%.  Pretty decent!  Or not…

Our 6 month project will return 60% on its remaining 5 months work, and with only 2 months left to run, our 3 month project has an ROI of very nearly 100%.  We will double the money we’re about to spend!

Amazingly, even if the 3 month project had started with an ROI of 0, after a little less than a month it would still look better than the 12 month project.  This is not to say that we would want to start a project with an ROI of 0 (though we might), but low worst-case estimates should be much less of a concern on projects of short duration.
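For anyone who wants to check the arithmetic, a quick sketch of the simple payback model described above (Python; the 1.33 payback ratio and the month counts come from the example, everything else is mine):

```python
# Each project pays back 1.33 for every 1 invested; burn rate is constant;
# money already spent is sunk and therefore ignored.

def roi_after(total_months, months_elapsed, payback_ratio=1.33):
    remaining_cost = (total_months - months_elapsed) / total_months  # fraction of original cost still to spend
    return payback_ratio / remaining_cost - 1

for total in (12, 6, 3):
    print(f"{total}-month project after 1 month: ROI = {roi_after(total, 1):.1%}")

# 12-month project after 1 month: ROI = 45.1%
# 6-month project after 1 month:  ROI = 59.6%
# 3-month project after 1 month:  ROI = 99.5%
```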

Corollaries

  1. If you’re into ROI comparisons, don’t make the mistake of comparing the historical ROIs of current projects.
  2. If adding a new project to your portfolio will delay projects already running, the economic impact may be very much worse than you realise.  Same goes at task level of course.  Limit your work in progress!
  3. The economics of splitting out partial, earlier deliveries from your long project may be very much better than you think, even if the highest value parts of the project are still some way off.

I could go on.  Short is good here, people!

July 12, 2011

Intangibles, value and risk (or: Portfolio thinking)

Filed under: Kanban, lean, Portfolio, Project Management — Mike @ 2:58 pm

A bit technical this one, bringing together some loose strands after the excellent Kanban Leadership Retreat in Reykjavik, Iceland (#klris):

Strand 1: This begins with Patrick Steyaert and me in conversation in the #klris hotel bar (the bar conversations alone were worth the plane fare!). I was talking about the way business valuations (as performed by financial analysts and as observed in the markets) depend on capability. Patrick uttered the one word “Intangible”, in reference both to the Kanban class of service (a heading for improvement work, experiments etc) and to the part of a company’s valuation that derives from brand and capability.  Aha! Now I’m completely reconciled to the David Anderson terminology (whether or not he consciously intended this interpretation), thank you Patrick!

Strand 2: Maarten Volders repeatedly encourages me to bring Kano analysis and other models of product development risk into the way I teach classes of service. This idea resurfaced in a small session in Reykjavik on Kanban and Complex Systems (or “Mega projects and all the crap that goes with them”) led by Rich Turner. Perhaps you wouldn’t apply Kano analysis to (say) a defence project, but it is certainly true that the risks of these large projects are multi-dimensional. Maarten, you are right of course, thank you for forcing me to make the connection!

Strand 3: The timely publication of Alistair Cockburn’s post Agile in Tables, and in particular the first figure (under “Risk-Value-Tail table”). I will continue to draw a more S-shaped curve than Alistair’s, but I like (and will use) his names for the curve’s three regions:

  1. Pay to learn
  2. Build business value
  3. Shine & gloss (aka the tail)

I like too Alistair’s risk categories:

  • Business risk: Are we building the right thing?
  • Social risk: Can these people build it?
  • Technical risk: Will all the parts of our idea work together?
  • Cost/schedule risk: Do we understand the size and difficulty?

To bring these strands together:

Alistair’s “Shine & gloss” might seem hard to justify in pure dollar terms, but try saying that to Apple! And history seems to be more on the side of the Toyota way than the Motorola way [1] when it comes to improvement.  More broadly, the mindset of striving to minimise all work outside the “Build business value” category seems at best simplistic, at worst blinkered both to risk and to the bigger economic picture.  Rather than minimise or ignore these other aspects, it seems much more useful to make choices and policies explicit and invest carefully across classes.  A portfolio-based approach if you like.


[1] “No Six Sigma project is approved unless the bottom-line impact has been clearly identified and defined” – Pros and cons of Six Sigma: an academic perspective /via Wikipedia

May 27, 2011

Ask the wrong question…

Filed under: Kanban, lean, Portfolio, Project Management — Mike @ 10:45 am

In what order should projects be tackled in order to minimise WIP[1]?

I was thinking about the tendency of project portfolios to contain a very wide range of project sizes. There is for example a 2-orders-of-magnitude difference between a 1-person-month project and a 10-person-year project, and this is by no means extreme.

Let’s look at a concrete scenario. Imagine a project portfolio consisting of a single 1-person-year project and twelve 1-person-month projects, a relatively mild example. Attempting to do everything at once would lead to a worst-case peak WIP of 24 person-months. Let’s not do that!

Some more sensible approaches might be to:

  1. Do the large project before the smaller projects
  2. Do the smaller projects before the large project
  3. Run the two halves of the portfolio in parallel

But however you cut it, you have a WIP of 12 (or 13 if you’re careless) person-months near the end of the big project; the smaller projects are almost irrelevant. The best you can do is control when and for how long the WIP remains high.

Yes, I really did ask myself this question. D’oh!

A better question

How can we rework the portfolio in order to avoid (or get us out of) a high WIP situation?

Put this way, the answer is more obvious. The maximum WIP is seen at the end of the large project. What if we could make it smaller? Splitting it into two equal phases with a meaningful interim deliverable (no cheating!) would halve the maximum at a stroke.
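For the curious, here is a small sketch that confirms the arithmetic; the numbers come straight from the scenario above, the function and variable names are my own:

```python
# Each project is (start_month, duration_months, monthly_burn).  "WIP" here is the money
# (person-months) sunk into projects that have started but not yet been released.

def peak_wip(schedule):
    horizon = max(start + duration for start, duration, _ in schedule)
    peak = 0
    for month in range(horizon):
        wip = sum((month - start + 1) * burn
                  for start, duration, burn in schedule
                  if start <= month < start + duration)   # spend so far on in-flight projects
        peak = max(peak, wip)
    return peak

def small_projects(offset):
    """Twelve 1-person-month projects run one at a time, starting at `offset`."""
    return [(offset + m, 1, 1) for m in range(12)]

big_project = [(0, 12, 1)]              # the 1-person-year project
split_big   = [(0, 6, 1), (6, 6, 1)]    # the same project split into two released phases

print(peak_wip(big_project + small_projects(12)))   # 12: big project first, then the smalls
print(peak_wip(big_project + small_projects(0)))    # 13: smalls running alongside the big one
print(peak_wip(split_big + small_projects(12)))     # 6: the interim release halves the peak
```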

Lesson

You cannot easily manage your way out of a high WIP situation; you need to pre-empt it, attacking it at source: here, the wide variation in project sizes.


[1] Premise: at the level of the project portfolio, the money spent on unfinished projects is a good analogue to the number of unfinished features at team level. In fact, by accountants and Kanban practitioners respectively, both are often called “work in progress” (WIP). Accountants of the Lean variety will agree with the Kanban practitioner that it is not a good idea to treat WIP as a valuable asset. I am considering the merits of tracking portfolio WIP and wondering what we would do with the information.

April 29, 2011

Lines not boxes

Richard Veryard’s recent post on Emergent Architecture reminds me of the architectural meme “lines not boxes”. It’s a powerful approach that I followed explicitly in much of my time in enterprise architecture and web-centric development (“trust in open protocols and formats rather than closed technologies”) and I believe that it has value as a metaphor for process and organisational design too.

People aren’t boxes

Traditionally, we organise people by assigning them roles defined in terms of skills and tasks. Whilst some people seem to need the certainty that goes with this, it’s a practice I have actively resisted, whether as team member or manager. Putting people into boxes constrains opportunity, responsibility and creativity.

It seems to me more humane and more supportive of learning and growth if instead we make visible what needs to be done, define what good results look like, maintain the minimum set of policies needed to ensure reliability, then create the space in which people can perform. And it can work for whole teams, where responsibility and creativity become manifest in self-organisation.

“Done” is only the start

Where there are different teams supporting different parts of a process, an over-emphasis on “what done looks like” has the effect of holding work back even when unfinished work could have considerable value downstream. In our “lines not boxes” metaphor, this is like defining the interchange formats to be used between systems but neglecting the communications protocols that carry them. An extreme example is the stage-gated waterfall approach to projects, where documents need not only to be completed but also reviewed and signed off before they may be acted upon in later project phases.

Under time pressure and faced with document-centric hurdles, smart teams learn to reach out and collaborate outside of the formal process. Smart organisations encourage this – making collaborative problem solving part of the process, building on successes rather than merely defending uneconomically against every eventuality (not to mention protecting every rear end). Once this is allowed to happen, it is my experience that artefacts start to get delivered in negotiated chunks and lead times take a significant turn for the better.

This is good news indeed: organisations build structures and introduce process overheads as they grow and rarely do they encourage flow. It is a relief to discover that bottom-up, flow-based approaches such as Kanban can prove effective even in the face of functional silos, not only helping teams to work more effectively within their functions but highlighting where a small investment in collaboration between silos will reap big dividends.

February 28, 2011

Positive Incline Limited

Filed under: Kanban, lean, Project Management, Work — Mike @ 9:38 pm

Not just a blog: from March 1st, Positive Incline is a multi-client management consultancy :-)

My specialties:

  • IT/product development process design, organisation, operation and improvement
  • Kanban, Lean, Agile development
  • The design and implementation of business process and quality improvement initiatives

My biases will be familiar to regular readers:

  • Speed, flow
  • Transparency, visualisation
  • Observation, evidence, feedback loops
  • Continuous learning & relentless improvement
  • Customer focus (getting right behind the sources of demand)
  • Sound project/process economics

I come from a development background (I still love programming) and have been a development manager, product manager and leader of change initiatives.

My job is to help you understand your organisation’s systems more deeply (not just conceptually, but how they really behave in practice) and to help find both improvements and ways to keep them improving for the longer term.  Look elsewhere if you want an off-the-shelf solution laden with jargon or lots of acronyms!

February 26, 2011

Kanban prioritisation and scheduling with classes of service

Filed under: Kanban, lean, Project Management — Mike @ 6:50 pm

A couple of weekends ago I had the pleasure and privilege of attending one of David Anderson’s Kanban Leadership Workshops.  Spending a long weekend with a group of motivated people openly sharing their wide range of experiences was a joy!  It helped me to crystallise further a few thoughts that I have touched on before, and I write them down while they are still fresh in my mind.

I came away from that weekend more determined than ever to dispel the notion that the conventional project should be the default approach to managing and controlling software development work.  My aim here is to show that a Kanban-based service delivery approach can accommodate the project plan when needed but has the flexibility to manage work that isn’t necessarily schedule-driven (most work, really) in a much more effective and transparent way. Moreover, it can work without sophisticated tooling and will serve very well as a mental model, guiding improvement and risk-management activities and informing those key interactions that take place at the sources of demand into your system.

And this is where we start.

On priorities and prioritisation

It is vital to have priorities, and just a few of them.  Priorities (the themes or imperatives of your business or product) must drive a short list of items to be worked on soon; these are more likely to be found in the heads of your key customers than by grooming your backlog.

Getting to this “sufficiently short”® list in a robust and timely fashion is a high value (and ongoing) activity.  Perhaps inevitably, it is also a rather context-specific one, so for the purposes of this article I will assume only that you have at least embarked on establishing such a process.

This is not to say that there is no place for lists, plans (conventional project plans, roadmaps, capacity plans etc), and analyses (of requirements, risks, markets, competitors etc), but that these are merely inputs into an agreed method for making a distilled set of prioritised work items available to the development team as and when needed.

On scheduling

Now inside the system[1], scheduling is the process – and it’s an ongoing and dynamic thing – of producing economically optimal results from the sequencing of work items.  It’s a big responsibility, so let’s try to do it in a robust and transparent way.

Thanks to Don Reinertsen[2], I see in this context “economically optimal” and “minimised cost of delay (CoD)” as equivalent for practical purposes, and this has become my robustness test.

The transparency test is a little harder though, because delay costs are not always easy to pin down.  Where they are quantifiable, they’re not just numbers, they’re functions of time; moreover they are often estimated more easily in “frustration points” or “difficult meetings” than in dollars!  Fortunately, we can easily recognise different types of work items whose CoD functions tend to behave similarly.  This turns out to be a very useful thing to be able to do, because the different types place different requirements on the system.

A rough guide to work item types

In increasing order of urgency, work items tend to fall into one of these four types:

1. Important[3] but not immediately urgent[4] – the “slow burn” work as I like to call it.  Since the cost of a short-term delay is low for these items, it can be hard to resist the temptation to put them off, by failing to allocate effort to them or by shying away from making the case for them so that they never even appear on the priority list.  But that would be foolish.

Our aim in managing these slow burn items is typically to achieve a combination of the following:

  • Avoiding a future (but somewhat far off) pile-up of now-urgent work that would compromise our ability to deliver timely business value, potentially at a critical time.  Medium-term capacity improvements and supplier-necessitated platform refreshes are good examples of IT-driven work that carries this kind of risk.
  • Creating future development capacity, reducing future costs or creating future options.  In other words, work that increases capability – whether people-, process- or platform-centric, local to the team/product or broader in scope, mainstream or experimental.

Given that timeliness is not a primary concern, it suffices to ensure that we are delivering these work items at a suitable rate.  This rate allows for the aggregate risk associated with the work items in question (e.g. to prevent a future problem), the need to protect and promote the long-term health of the system, and the need to hold a reserve of short-term slack in the system.

2. Increasingly urgent – the bread-and-butter work items that for many teams dominate the working week.  Their common characteristic is a cost of delay – whether measured in dollars, reputation or frustration – that increases with each passing day.  Typically their delay costs don’t merely increase linearly with time but have a nasty habit of accelerating up a steepening curve as the effects of compounding, lost opportunity, dependencies, competition, psychology and politics kick in.

From a system perspective, we look to deliver these items with short and reasonably predictable lead times.

3. Deadline-driven work items, where a sudden and material impact to the organisation will be the result should we fail to deliver on time.  Unfortunately, many organisations overuse this category, either as a tool to “encourage” urgent work or unthinkingly as the default delivery model.  Used more sparingly, coupled with a good understanding of capability, good upstream relationships and adequate notice, we would hope to deliver these on time with a minimum of drama.

The performance requirement placed on the system here is simple: don’t be late!

4. Expedited work items whose immediate value to the business clearly trumps other considerations, justifying the exclusion or detriment of other urgent or deadline-driven work.  Customers might raise new items with this priority, or they could be generated internally (in response to an outage or a newly-discovered system vulnerability, say).  Through risk management, we may sometimes choose to escalate existing in-progress items to an expedited status.

Here the performance requirement placed on the system is stark: expedite it (i.e. just get it done), on the basis of critical need!

Managing work with Kanban’s classes of service

Kanban’s classes of service combine scheduling policies with capability measures.  Seen as a systematic response to the varying needs of the different types of work items we have discussed, they are actually very simple to understand and operate in practice.

Prerequisites:

  • WIP under control (this is Kanban, right!)
  • A “suitably short” list of work items to select from, i.e. the input queue

All we need to do is apply some simple rules when selecting work items from the input queue or from the internal queues between activities (again, this is Kanban: all else being equal we much prefer to finish something than to start something new):

  1. We pull an expedited item if there is one (assuming that we haven’t “dropped everything” already!)
  2. We take a deadline-driven item if failing to do so would increase unacceptably our risk of not delivering on time
  3. After that, we balance the rate of delivery of the “slow burn” items with the need for speed on the increasingly urgent items

Striking a balance is never easy under pressure; explicit policies to protect non-urgent work (medium term delivery rates &/or effort budgets) can be very helpful.
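To show how little machinery this needs, here is a rough sketch of such a pull policy in code.  The class-of-service labels, the two-day risk threshold and the “slow burn behind target” flag are illustrative assumptions of mine, not part of the Kanban method itself:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    name: str
    cos: str                        # "expedite" | "fixed_date" | "standard" | "intangible"
    deadline: Optional[date] = None
    remaining_days: int = 0         # rough estimate of work left (used in the deadline check)

def next_item(queue, today, slow_burn_behind_target):
    # 1. An expedited item trumps everything else.
    for item in queue:
        if item.cos == "expedite":
            return item
    # 2. A deadline-driven item whose schedule risk has become unacceptable.
    for item in queue:
        if item.cos == "fixed_date" and item.deadline is not None:
            buffer_days = (item.deadline - today).days - item.remaining_days
            if buffer_days <= 2:    # illustrative risk threshold, not a recommendation
                return item
    # 3. Otherwise balance the "slow burn" delivery rate against urgent work.
    wanted = "intangible" if slow_burn_behind_target else "standard"
    for item in queue:
        if item.cos == wanted:
            return item
    return queue[0] if queue else None
```

In practice the “behind target” test would come from the kind of delivery-rate or effort-budget policies mentioned above.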

The table below summarises the needs of our work item types and how they map to the recommended classes of service in Kanban:

Work item type                              Managed for   Class of service[5]   Typical Service Level / Capability Measures
1. Important but not urgent (“slow burn”)   Rate          Intangible            Items per month; effort budget
2. Increasingly urgent                      Speed         Standard              80th or 95th lead time percentile
3. Deadline-driven                          Timeliness    Fixed date            On-time delivery
4. Expedited                                Criticality   Expedited             As soon as possible

Why it works (1)

In reality, work items don’t sit forever in fixed categories but move inexorably along a cost of delay curve that sits in a space rather like the one shown in the figure below.  The shape of the CoD curve is unique to each item – driven by a combination of the time-sensitivity and business impact of the item – but the curves do share some common patterns. “Slow burn” follows “ignore” and migrates into “increasingly urgent” and perhaps “deadline-driven” as time passes and the cost of delay rises, perhaps steeply.  Urgent and late items may be “expedited”.  Failure may lead to a “world of pain”, or to “diminishing benefit” and then “missed opportunity”, depending on whether the CoD function continues to grow over the longer term or declines.

Cost of delay map

Our scheduling policy is designed to complete (at a minimum) enough non-urgent work to prevent pent-up demand later becoming an impediment to urgent work, to deliver deadline-driven work on time, and to accommodate a certain amount of expedited work without too significant an impact on other priorities.  The remainder (a significant proportion if the demand is well-balanced) goes to urgent but not necessarily deadline-driven work.  It is here where we expect much of the immediate business value to be delivered.

Why it works (2)

Kanban encourages us to make explicit both the scheduling process and its governing policies.  It promotes upstream collaboration, not just on individual work items but on the overall mix of demand.  A balanced mix provides a degree of slack where it is most needed, so that short-term shocks can be absorbed and predictability maintained, facilitating medium-to-long-range planning.  Performance transparency enables service levels and schedule risks to be analysed historically and managed proactively.

Hints & tips

  • Don’t let anyone persuade you that the slow burn items of the kind I describe don’t have significant business value.  Quoting (roughly) Ronald J Baker, “the value of an organisation or product is no less than its ability to create future value”.  That said, a day’s slippage here or there isn’t a big concern, and (as described) they may be de-prioritised temporarily to create the slack needed to allow more urgent items to flow.
  • You might like to leave some of your slow burn budget to be spent at the discretion of team members (but do discuss the risks involved).
  • Sequence the “increasingly urgent” items according to cost of delay (or value, however measured) in relation to remaining lead time (delays included) rather than development cost (see the sketch after this list).  For a given set of prioritised items the overall cost expended will be much the same regardless of how they’re sequenced, so optimising for it is pointless. Much better to pursue fast feedback on high value items.
  • You can move items between categories as their risk profiles change.
  • Less WIP and shorter lead times (which we know go hand-in-hand) mean fewer opportunities for scheduling conflict.  So much easier!
  • And finally: I only hinted at the gold that is to be found upstream of the development process.  Go find it!
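As promised above, a minimal sketch of that sequencing rule, sometimes known as CD3 (cost of delay divided by duration); the items and numbers are invented, and in practice the cost-of-delay figures would be rough estimates at best:

```python
# Invented items: (name, cost_of_delay_per_week, remaining_lead_time_weeks).
items = [
    ("reporting rewrite", 5_000, 8),
    ("checkout tweak",    3_000, 1),
    ("compliance change", 8_000, 4),
]

# Sequence by cost of delay divided by remaining lead time, not by development cost:
# a high-CoD item that is quick to finish hurts most if it is made to wait.
for name, cod, weeks in sorted(items, key=lambda i: i[1] / i[2], reverse=True):
    print(f"{name:18}  CoD {cod:>5}/week  remaining {weeks}w  ratio {cod / weeks:6.0f}")

# checkout tweak      CoD  3000/week  remaining 1w  ratio   3000
# compliance change   CoD  8000/week  remaining 4w  ratio   2000
# reporting rewrite   CoD  5000/week  remaining 8w  ratio    625
```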

Acknowledgements

A big thank you to David (@agilemanager) and his class of Feb ’11.  My stated personal goal for that weekend was to explore and express service-oriented alternatives to project-based thinking.

And to @benjaminm, @AGILEMinds and @blubberplinth for their encouragement and thoughtful input as this article took shape.


[1] The system is not limited to the development team – the scope of the system and its scheduling policies can grow to influence and encompass upstream activities too

[2] I reviewed Don’s book here

[3] Let’s not talk about unimportant work!

[4] Nod to Stephen R Covey and his 7 Habits here, hat tip @blubberplinth for the reminder

[5] I’m using here the names of the classes of service as described in David’s 2010 book, reviewed here

December 3, 2010

The A3 challenge

My previous post has been sitting there at the top of my front page for too long now, and it doesn’t reach my usual levels of positivity!  So let’s change that with a quick challenge:

  1. Can you relate the features in your development backlog (or at least the prioritised and in-progress portion thereof) to an identifiable business initiative?
  2. If – for real or in your imagination – you justified (compellingly), scoped and planned each of these business initiatives in just two sides of A4, how many of the features in your backlog would deserve a mention?

I take it for granted that you organise most of your development work by feature.  Large initiatives are allowed sub-initiatives, each also described (compellingly still)  in no more than two sides of A4 (real or imaginary).

The “A3” of this post’s title refers to an A3 report, presented on a sheet of A3 paper (the size of two sheets of A4).  I wholeheartedly recommend John Shook’s Managing to Learn if this concept is new to you.

June 21, 2010

Learning together: Kanban and the Twelve Principles of Agile Software

This post is a spin-off from the recent Scrum/Kanban debate.  Not wanting to let a situation go to waste, it seems a good time to affirm shared values, which I do here via the Twelve Principles behind the Agile Manifesto.  I’m grateful to Joshua Bloom for his excellent input.

Commentary on the twelve principles of agile software

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Kanban: Check. We pull business value through the system, creating flow.  It should be recognized however that sometimes we create value by means other than delivering software (sometimes even by not delivering software!).  Furthermore, the act of improving the system generates value as it increases the capability of the wider organisation to generate value.

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

Kanban: Check. We actively limit work-in-progress (WIP), facilitating late prioritisation and minimising the impact of change on lead times.  We actively work to clarify the customer’s priorities so that the team can manage risk by properly sequencing work.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Kanban: Check. We make deliveries at intervals consistent with customer need and transaction cost.  We seek to minimise transaction costs attributable to the software development process, thereby making shorter delivery intervals economically optimal.  Highly advanced teams look towards continual deployment concepts to limit the inventory of complete yet not deployed software. We believe the best requirements come from software already deployed being exercised by customers/users. Achieving flow to the end-user generates higher value faster.

Business people and developers must work together daily throughout the project.

Kanban: Check. The development team and customer must learn together, in relation to both the problem domain and the delivery process.  The visual element of Kanban promotes transparency and creates triggers for customer interaction.

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

Kanban: Check. Build processes that respect individuals; empower them to learn and to improve the system.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Kanban: Check. Visualization and models allow face-to-face conversations to scale effectively. Limiting WIP prompts teams to have conversations DURING difficulties.

Working software is the primary measure of progress.

Kanban: Delivered business value is the primary measure of progress.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

Kanban:  We work within and share responsibility for the capability of our system; sustained long-run capability cannot be built at the short-term expense of the individual.

Continuous attention to technical excellence and good design enhances agility.

Kanban: Check.  And we look to increase capacity by identifying and reducing the failure demand that results from inattention to quality.

Simplicity – the art of maximizing the amount of work not done – is essential.

Kanban: Check.  Furthermore we actively manage work-in-progress, minimizing work not finished.

The best architectures, requirements, and designs emerge from self-organizing teams.

Kanban: Check, and process too. Leaders (inside and outside the team) must foster emergence, not squash it.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Kanban: Check. Continually even.

Conclusions

So there you have it: no fundamental conflict, but a couple of clarifications and some changes of emphasis too.  It has to be said that these small differences do add up to a shift of mindset, but it is one that much of the Agile community has taken on board as a result of the increasing influence of Lean.

If I were to pick out a key thought it would be this:

The development team and customer must learn together, in relation to both the problem domain and the delivery process.

This lesson would be recognised by much of the Agile community I’m sure.

June 16, 2010

Book review: “Kanban”, by David J Anderson

Filed under: Books, Kanban, lean, Project Management — Mike @ 5:07 pm

A big thumbs up for @agilemanager‘s book!  If Don Reinertsen’s “Principles of Product Development Flow” (which I raved about here) provides the foundations, this is the practical, experience-filled go-to book.  It feels very authentic, full of relevant examples and managing to be both measured and positive at the same time.  It will be the definitive Kanban book for a long time to come I’m sure; I sincerely hope that it goes on to achieve the status of “Agile classic” too.

Key chapters for me:

Chapter 3, A recipe for success.  The recipe has been developed significantly since this blog post (which is still worth a read).  I’ve followed much of the recipe myself (some of it pre-Kanban), and it rings true to my own experience.  But full credit to David – if there exists a more effective version of the recipe than his, I would very much like to see it!

Chapter 11, Establishing service-level agreements.  I had heard about this and the related concept of “classes of service” before but it’s all a lot clearer now since reading the book.  It’s a thought-provoking chapter but I had the nagging feeling that it was just a stepping stone to something else (a prioritisation and scheduling mechanism based on risk-adjusted cost of delay) and David kinda confirms this.  That’s not to play down the importance of this chapter though – it definitely adds real sophistication to the generally-accepted core of Kanban, and if a practical guide can lead on to new thinking, that’s a thoroughly good thing.

Chapter 14, Operations review.  This sounds mundane, but it illustrates how a single Kanban implementation can seed something much bigger right across a business unit, breaking out of the confines of IT.  A very timely read!  Taken with the surrounding chapters we have here as good a guide to scaling agile development as you’re likely to read, and it reaches much further than that.

The final few chapters (Part four, Making improvements) are also worth mentioning together.  They take us back to Kanban’s roots in TOC and Lean (particularly Lean product development à la Reinertsen) and the influence of Deming, all in the process of giving yet more great advice.  Nicely done!
