13 Dec 13

How To Make Meaningful Estimates For Software Products, Part 3

Iron Triangle – solid stuff. By DaveBleasdale, CC 2.0 licensed

If you take the Iron Triangle to heart – quality, scope, time, choose two – then there’s no way to deliver high-quality software features on a timeline you accurately estimated beforehand. That’s what the Iron Triangle means. So something has to slip – time, quality, or scope. What usually happens is that two things slip initially: scope and quality. And when the organization realizes that quality is sucking and that scope is not good enough, then time slips too. Unfortunately, because of the earlier rush to get something incomplete and not very good out the door, you’re way behind the 8-ball by this point, and you end up releasing something that’s not very good, doesn’t meet the desired scope, and is late anyway.

Many people have told me that they could get software products out on time with good quality, but I think those people were doing something slightly different (see the possibilities below). The nature of software is that it’s unpredictable. The nature of estimates is that they are always too low. Combine those two things and there’s no way to get the scope you expect out at the time you predict – unless you do one of three things:

  • “Sandbag” your schedule. This means putting enough slack in the schedule that the unexpected difficulties that arise when you work on new and innovative things don’t cause the schedule to overrun. This is very difficult to do, because the management pressure against it is intense, but it can be done. By the way, this is also what professional project managers do, even for predictable things like building houses and bridges. There’s always a buffer (and those projects are still often late). It’s not usually what people mean, though, when they say they always release their software on time.
  • Take the Iron Triangle to heart, and let scope slip. What this means in practice is usually what’s called a “release train.” You schedule releases every so often, like every three months, or every month, and whatever is done with high quality in time for the upcoming release gets to go on the train, and anything that’s not done simply waits for another train. At its most refined, this becomes a continuous deployment model, augmented by Kanban, perhaps. Each feature is worked on until it’s finished, and when it’s finished, it’s released.
  • Deliver features incrementally. This is a variation on the release train model, where instead of waiting to release a feature until it’s finished, you decompose the feature into smaller parts and release them onto the trains as they are finished, based on timeboxes of development effort. There’s still a decision on whether the increment that’s been finished can be released, based on two criteria: 1) does it do anything at all useful? 2) Is it high enough quality? There are often situations where a partial feature can still provide value, even if it’s not everything you want from a feature. Often these incremental features are called “beta” or “early release” to distinguish them for the customer.
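The gating decision behind the release-train and incremental models above can be sketched in a few lines. This is a minimal illustration, not anything from the post – the feature fields and names are made up:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    done: bool          # implementation complete for this increment
    quality_ok: bool    # meets the quality bar (tests, review, etc.)
    useful_alone: bool  # does the increment do something useful by itself?

def board_the_train(backlog):
    """Pick what ships on the next scheduled release.

    A feature (or feature increment) boards the train only if it is
    finished, high enough quality, and useful on its own; everything
    else simply waits for a later train. Time and quality stay fixed;
    scope is what floats.
    """
    return [f.name for f in backlog if f.done and f.quality_ok and f.useful_alone]

backlog = [
    Feature("search", done=True, quality_ok=True, useful_alone=True),
    Feature("export", done=True, quality_ok=False, useful_alone=True),
    Feature("sync", done=False, quality_ok=False, useful_alone=False),
]
print(board_the_train(backlog))  # only "search" makes this train
```

The point of the sketch is that nothing in it predicts *when* “export” or “sync” will ship – they just catch a later train once all three checks pass.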

The reality is that even though you can’t, by definition, give an accurate estimate for anything interesting, it is possible to finish interesting things and release them. The main problem is that you can’t really say when they will be finished. And so as a product person, because you are expected to provide a roadmap, you need other ways to talk about it than simply saying “We’re releasing feature A at this time, and feature B at this later time.”

I encourage people who think they can release a software product with a set scope and a set level of quality by a set time to try a) to convince me that they did it, and b) to teach me how to do it, because I’d like to know. I’ve never seen it happen in my experience, either with my own products or with the products of other teams and companies. I’ve only heard about it, from friends of friends. It sounds like an urban legend to me. 


20 Nov 13

How To Make Meaningful Estimates For Software Products, Part 2

Last week I left you hanging. In How To Make Meaningful Estimates For Software Products I basically said that estimates don’t work for software projects. That’s still true. But the fact that estimates don’t work doesn’t mean that features don’t get completed and delivered. They do get finished – you just can’t predict with accuracy when that will happen. So the question arises: can we get to any degree of predictability, despite that?

There are a few different ways you can approach predictability, and you can use them in combination:

  • Don’t ship until it’s finished – that is, make the prediction about the quality and the scope, but don’t predict the time
  • Ship on a regular basis, including only what’s finished – that is predict the time and the quality, but not the scope
  • Ship partial features – predict the time and the quality, and accept partial scope
  • Ship tiny features – only ship features you can estimate reliably, which means (remember this from last week’s post) they are not interesting

And there are some mitigations for the fact that estimates don’t work. The most obvious is the one that Steve Johnson (@sjohnson717) mentioned in a comment on last week’s post:

  • Estimate by comparison – “this feature seems about as big as that feature, which took us four weeks to implement”

The smaller the feature, the better this works, of course, because uncertainty grows with the value of the feature. 
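Estimate-by-comparison can be sketched as a tiny reference-class lookup. The feature names and durations here are invented for illustration; the idea is just to answer with a range drawn from actuals rather than a single guessed number:

```python
def estimate_by_comparison(history, similar_to):
    """Estimate a new feature by analogy with features already shipped.

    `history` maps past feature name -> actual weeks it took.
    `similar_to` lists the past features the new one "seems about as
    big as". Returns a range instead of a point estimate, since a
    single number hides the uncertainty.
    """
    actuals = [history[name] for name in similar_to]
    return {
        "low": min(actuals),
        "high": max(actuals),
        "typical": sum(actuals) / len(actuals),
    }

history = {"tagging": 4, "csv export": 3, "saved filters": 6}
print(estimate_by_comparison(history, ["tagging", "csv export"]))
# {'low': 3, 'high': 4, 'typical': 3.5}
```

Note that the quality of the answer depends entirely on how honest the “seems about as big as” judgment is – which is exactly why this works better for small features.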

I have more thoughts on estimates and product planning predictability in my next post.


13 Nov 13

How To Make Meaningful Estimates For Software Products

A few thoughts on estimating. I had a conversation with someone yesterday who asked me how I worked with the engineers on estimates. My answer shocked him, I think. I wanted to expand here on what was a throwaway conversation:

  • My favorite story about estimates is about the Sydney Opera House, as told by Nassim Nicholas Taleb in The Black Swan. First, you should know that construction is incredibly well understood – for some types of projects, builders can repeatedly complete them within 5% of the estimated time.

    The Sydney Opera House, started in 1959, was scheduled to be completed in 1963 for $7M (Aus). Actual construction took nearly four times as long as the original estimate – it finished in 1973, ten years late – and it cost nearly 15 times the original budget, at $104M (Aus). And of course, the finished Opera House was only a third of the original project. If builders can be that far off, simply because something has never been done before, why should we think we can estimate software, where by definition every project is something that’s never been done before?

  • There is a fundamental disconnect between estimates and interesting things. Interesting things are unpredictable. User stories are estimatable, therefore not interesting. 
  • Estimates are not a normal distribution. They are a really skewed distribution, where the likely actual value is way the heck out there beyond the value you think it should be. (And very occasionally, extremely rarely, things go a lot faster than you expect.)
  • I prefer timeboxes, and for interesting things, we get done what we get done in the timebox. The art of product management is figuring out what to do in the timebox. Note: this works much better in software than in construction. Buildings have to obey the laws of physics, but software doesn’t. There is no such thing as a Minimum Viable Product in construction – you can’t build a fancy roof until you build the structure to support it. But you can do that in software. There’s a lot of software out there that is essentially fancy roofs floating in the air.
  • Think about failure, which is so important in innovation. Failure is of course immune to estimates, by definition.

    For example, let’s assume I can get a decent estimate for doing something interesting (which we know I can’t, but hang on). Then we do it. It only takes twice as long as we estimated! (That’s a great result.) Unfortunately, given reality, it’s wrong, and has to be done again. It was a failure, but it was a productive failure. We learned a lot. We didn’t get the feature to market when we expected to, but if we’d put that version into the market, it would have been bad in oh so many ways.

    So we start doing it again, and mostly we have to start from scratch, but we did learn some things in version 1. We also realize we can get a little bit of version 2 out to early adopters. It’s definitely not a full feature – they have to do manual work to get the value, but they are willing because it’s so useful to them. And we learn some stuff, and we end up building version 3, instead of version 2, because we got some great feedback that makes it even better. Versions 1 and 2 are sunk costs, and they are PAINFUL, but because we did them, we have version 3, and it’s beautiful. And it only took us four times as long to get the feature out as originally estimated, which is actually a pretty good result. 
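The skewed-distribution point above shows up in a quick simulation – assuming, purely for illustration, that actual task durations are lognormally distributed around a gut-feel estimate:

```python
import random

random.seed(7)

gut_estimate = 10  # days you would naively quote
# Lognormal outcomes: most land near the gut feel, a fat right tail
# runs far past it, and a rare few finish faster than expected.
outcomes = [gut_estimate * random.lognormvariate(0.0, 0.8)
            for _ in range(10_000)]

median = sorted(outcomes)[len(outcomes) // 2]
mean = sum(outcomes) / len(outcomes)
blowouts = sum(o > 2 * gut_estimate for o in outcomes) / len(outcomes)

print(f"median {median:.1f} days, mean {mean:.1f} days, "
      f"{blowouts:.0%} of tasks take over twice the estimate")
```

The mean lands well above the median: the long right tail drags the *expected* cost past what the typical case suggests, which is why quoting the gut-feel number as a commitment goes wrong so consistently.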

The title of this post might have been a little misleading. I suspect I may have created a firestorm. I can’t wait to hear what you think!