Monday, July 30, 2007

Metrics: avoiding pitfalls

Karl E. Wiegers, who wrote a great book I've used called "Peer Reviews in Software", published these ten traps to avoid when implementing a metrics program. Anyone who has implemented a metrics program has likely experienced every one of these. It's a handy reference to put on your wall. Most are self-explanatory and intuitive, once you see them.

The one I'm not quite sure I'm on board with is Trap #7: Using Metrics to Motivate, Rather than to Understand. Since an objective of measurement is to make data-driven decisions, measurement is indeed providing motivation to act. If the metrics show you spend a lot on rework, or that rework is trending up, then you and your team will be motivated to do something to improve. Conversely, if the metrics are trending down at a rate that is acceptable to you, you won't be motivated to do more. Maybe I am being picky here, in that you clearly don't want to punish with metrics (cf. Trap #6), and maybe "motivation" is along the same line. I'd be more in line with the trap if the word "Manipulate" replaced "Motivate", but maybe that's just plain obvious.

If we must have ten traps, I would replace #7 with "Automate everything" - a bold statement, but once a metric is automated there is little to no long-term cost, and it can be gathered even after it stops being useful - in case a change causes it to be meaningful once again.

Monday, July 23, 2007

There is no magic.

That's the conclusion of Terry Coatta's writeup on the demise of CORBA:

http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=491

We think of SOA as being a paradigm shift; it's actually an improvement which provides greater flexibility than CORBA and others, but the old rules still apply. Distributed computing is still hard, in general.

Sunday, July 15, 2007

Deliberate Practice

If you Google those two words, you'll find the term ascribed to Dr. K. Anders Ericsson. His research indicates that expert competence in an area is learned, not gifted, and that it requires special attention to what a person doesn't yet do well. He quotes Sam Snead:
It is only human nature to practice what you can already do well, since it's a hell of a lot less work and a hell of a lot more fun.
He says that practicing what you do well doesn't lead to extraordinary performance; this makes sense intuitively. He must be a golfer, since he gives lots of golf analogies, and these do work for me. Apparently I might be a better golfer if, instead of getting a single shot at each opportunity, I were able to try multiple shots and compare the results.

My problem with his advice is that in business, as in golf, there are very few "do-overs". In particular, if you are a leader and you mess up, not only do you forgo the chance to fix that mistake, but you may give up other opportunities as a consequence. That is often true with your customers, your board of directors, your boss, and your employees.

It's great to know that research indicates that you can learn to become great with deliberate practice, but it's disturbing to note that in so doing, you may not be able to practice any longer. His recent article in HBR talks about simulations, such as reviewing past histories of medical information with proven outcomes, and attempting to reach the correct conclusion based on the evidence. Unfortunately I can't think of an analogy for business decisions.



Monday, July 09, 2007

Disruptive Change

Clayton M. Christensen, author of The Innovator's Dilemma, along with Michael Overdorf, wrote a paper in 2000 about why companies succeed or fail at disruptive innovation. They argue that established companies, by definition, can't do disruption - the processes and values of the organization keep them on a track of evolutionary changes, which Christensen calls sustaining innovation. The idea is that the very things that make a company successful are at odds with creating a disruptive innovation. The combination of values and processes determines what decisions will be made in the organization, at every level. They point out that a key measure of good management is whether distributed decisions can be made consistently with the strategic direction and business model of the company. That behavior is what makes up the value system -- the answer to why we do it this way; the processes are how we do it. To be disruptively innovative, the why and the how have to be free to move.

They show a simple 2x2 table, with value change and process change as the axes, which indicates the ways of separating innovative work so that it is most likely to succeed. Where there is little change to either, go ahead and keep the formal team together. For all other combinations, they recommend spinning the team out for success, so that it can break free from those constraints.

Sunday, July 01, 2007

Another dimension in innovation measurement

Morten T. Hansen and Julian Birkinshaw describe yet another method of analyzing and improving a company's ability to innovate. They describe an "Innovation Value Chain", which is evaluated on its ability to Create, Convert, and Diffuse innovation. In other words, we often think only about the ideas, but many great ideas go un-monetized because they get lost in the shuffle, are not properly funded, or are not properly socialized.

  • Create means to generate ideas.
  • Convert means to focus on the right subset of ideas to convert into practical products/services.
  • Diffuse means to line up the rest of the organization and customers to adopt the new product or service.

They have a fairly simple tool to gauge an enterprise across the three blocks of the chain, and give some reasonable examples and remedies for what to do about your "weak link" in the chain. They make a compelling case that a company will tend to underinvest in the weak link while continuing to invest in areas where it is strong, and that this can ironically crush the innovative spirit.