22 Jan 15

A VALUABLE Rubric For Product Requirements – Podcast Episode #5

In this podcast I talk about a problem that afflicts many product companies – poor communication between product management and developers. And I describe an approach that can help improve the communication, and improve everyone’s motivation – a new rubric for writing good requirements which I call VALUABLE. That’s an acronym for: Valuable, Aligned, Loved, Understood, Acceptance tests, Bounded, Leverages, and Expected Usage. It will all become clear when you listen to the podcast – or download the infographic.

The infographic I mention in the podcast is here – please feel free to download it, print it out, and put it up on your wall, or do whatever else you like with it.

In the podcast I mentioned a number of earlier posts, a book, and some useful posts on other people's blogs.

Links

If you like this podcast, please subscribe via iTunes (search for “responsibility authority” to find the listing) or your favorite subscription method via this feed. And please consider rating and reviewing the podcast on iTunes – the feedback is very helpful to me.


09 Aug 13

Requirements, Cynefin, Complexity, and the Lean Startup

A few days ago I suggested that product managers could get a lot of insight from the Cynefin model, especially its Complex region. Liz Keogh put this well:

When you start writing tests, or having discussions, and the requirements begin changing underneath you because of what you discover as a result, that’s complex. You can look back at what you end up with and understand that it’s much better, but you can’t come up with it to start with, nor can you define what “better” will look like and try to reach it. It emerges as you work.

The fundamental challenge of product management is that your first solution to a problem sucks. And it can suck in a lot of different ways. The problem might not actually be a problem. Your solution might make the problem worse. Your solution might make that problem better, but cause another, worse problem. Your solution might be boring. It might break. It might give the wrong answer. It might give the right answer, but too late. There are many different modes of failure.

A strange attractor – complexity in action. Image by Josephiah CC 2.0 license.

In short, you start by thinking you know something, but the only way to tell if you really know it is to do a test. The test will likely tell you that you don’t know anything. So then you respond by creating a new test.

Getting back to the Cynefin model, the key words in the Complex region are Probe, Sense, Respond. And is it just me, or does this process also sound just like the Lean Startup’s Build, Measure, Learn cycle? Roughly speaking, you get these equivalences (sketched as a simple loop in the code below):

  • Create and run test = Probe = Build
  • Understand results = Sense = Measure
  • Change assumptions = Respond = Learn
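To make the cycle concrete, here is a minimal sketch, in Python, of Probe/Build, Sense/Measure, Respond/Learn written as an experiment loop. Every name in it (Assumption, run_probe, interpret, revise, explore) is hypothetical, invented purely for illustration; none of this comes from Cynefin or the Lean Startup literature. It only shows the shape of the iteration: test, see what you learned, revise what you believe, repeat.

```python
# A minimal, hypothetical sketch of the Probe/Build -> Sense/Measure -> Respond/Learn
# cycle as an experiment loop. None of these names come from Cynefin or the Lean
# Startup; they are invented purely to show the shape of the iteration.

from dataclasses import dataclass


@dataclass
class Assumption:
    statement: str     # what we currently believe about the problem
    confidence: float  # how sure we are, from 0.0 to 1.0


def run_probe(assumption: Assumption) -> dict:
    """Probe / Build: run the cheapest test that could falsify the assumption.

    In practice this is a prototype, an A/B test, a customer interview, etc.
    Here it just returns a canned result for illustration.
    """
    return {"supported": False, "surprise": "users wanted something else entirely"}


def interpret(result: dict) -> bool:
    """Sense / Measure: did the evidence support what we believed?"""
    return result["supported"]


def revise(assumption: Assumption, result: dict) -> Assumption:
    """Respond / Learn: change the assumption based on what emerged from the test."""
    return Assumption(
        statement=f"Revised after learning: {result['surprise']}",
        confidence=0.5,
    )


def explore(assumption: Assumption, max_cycles: int = 5) -> Assumption:
    """Iterate Probe -> Sense -> Respond until the evidence stops surprising us,
    or the budget runs out. What you end up with emerges from the loop; you
    could not have written it down at the start."""
    for _ in range(max_cycles):
        result = run_probe(assumption)   # Probe / Build
        if interpret(result):            # Sense / Measure
            break                        # evidence supports the assumption; stop here
        assumption = revise(assumption, result)  # Respond / Learn
    return assumption
```

Note that the loop revises the assumption, not just the test: each pass changes what you thought you already knew, which is exactly the behaviour described in the key points below.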

The key points:

  • A key characteristic of complexity (in the Cynefin sense) is that you don’t know much. There are no guideposts, and what you think you know is likely to be wrong.
  • You need a methodology that lets you explore the territory.
  • There is a danger of finding local maxima: because there’s no map, you don’t know whether the hill you’re climbing leads to a mountain, or even whether there are any mountains.
  • Everything you find out will change what you thought you already knew.

Does this sound like the real experience of product management and product design to you? If not, let me know.

Next: What do you need from a tool to handle this situation?