Monday, January 05, 2009

Earned Value Management has been around for years as a tool for improving insight into the status of technology projects, especially for DoD, NASA, DoE, DoT, and other large-scale federal efforts.

Unfortunately, its entire premise, namely that the work done has “value” equal to its budgeted cost, is (quite obviously, it would seem) fundamentally flawed.

There is no useful correlation between cost and value.

Further, for these terms to have any sensible meaning, they must be interpreted in the eyes of the customer. This is the entity paying the bills (incurring the cost), and who only receives value from the investment when the resulting solution begins to deliver tangible business benefits: increased revenue, market share, productivity, quality, or responsiveness, or lower costs, delays, and waste. The fact that x% of the budget has been consumed is a useful cost accounting metric (burn rates, variances, etc.), but says nothing about any value received by the customer. In fact, often no real tangible business value can legitimately be booked until huge chunks (or even all) of the budget have been spent.

This fatal disconnect from reality has been highlighted in an earlier posting, "Earned Value" has nothing to do with value, nor with "earning" anything except more cost, and a related posting, Value, not Cost, Accounting: The Only True Window of Progress.

When you examine EVM, you see that its principal input is the %-complete judgment: an often arbitrary and certainly highly subjective assessment of how much “real work” has been done to date compared with how much was “planned” to be complete by now. In the topsy-turvy EVM world where cost = value, this judgment is used as the proxy for progress.

For those of you who feel that EVM is a helpful tool, our solution is simple: define %-complete as the percentage of the requirements that have been validated, accepted by the customer, and delivered to production.

This is simple. Everything else stays the same. All the existing EVM formulae remain unchanged. The only change is that the proxy for value and progress is not how much of the budget has been spent, but how many of the requirements have been delivered.
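To make this concrete, here is a minimal sketch, in Python with invented names and numbers, of the standard EVM formulae (EV = %-complete x BAC, CV = EV - AC, SV = EV - PV, CPI = EV/AC, SPI = EV/PV) with the %-complete input computed from requirements rather than from budget consumed:

    # Minimal sketch: standard EVM formulae, with %-complete redefined as the
    # share of requirements validated, accepted, and delivered. All names and
    # numbers are illustrative, not taken from any particular EVM tool.
    def evm_metrics(delivered_reqs, total_reqs, bac, planned_value, actual_cost):
        pct_complete = delivered_reqs / total_reqs       # requirements-based
        ev = pct_complete * bac                          # EV  = %-complete x BAC
        return {
            "pct_complete": pct_complete,
            "earned_value": ev,
            "cost_variance": ev - actual_cost,           # CV  = EV - AC
            "schedule_variance": ev - planned_value,     # SV  = EV - PV
            "cpi": ev / actual_cost,                     # CPI = EV / AC
            "spi": ev / planned_value,                   # SPI = EV / PV
        }

    # Example: 40 of 200 requirements delivered on a $1,000,000 budget.
    print(evm_metrics(40, 200, 1_000_000, 250_000, 300_000))

Nothing downstream changes; only the meaning of the %-complete input does.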

Of course, this does not address the fundamental flaw. But this simple change not only radically simplifies a key EVM black box (the %-complete judgment), it also begins to subtly shift project planning and management toward thinking in terms of exactly how we deliver requirements to our customers more quickly, more frequently, and much earlier in the life-cycle.

In other words, how we can actually earn real value.

POSTED BY WAYNE SMITH AT 08:34 PM  |  25 COMMENTS

Thursday, September 11, 2008

One of the things you can say about technology projects of all kinds is that the industry has overwhelmed us with a plethora of systems, technologies, and spreadsheets that inundate us with massive amounts of data about our projects. Unfortunately, even with all this data we seem no closer to answering very simple questions about these projects and their true progress.

True progress must be a measure that gives the customer (i.e., the buyer and user of the solution being delivered) a meaningful picture of when the benefits offered by this investment will begin to be realized. To the customer, the project is simply an investment vehicle. The result of that investment is a solution that, when implemented, will generate benefits for the organization. These benefits can be anything that the customer believes will help optimize the organization’s performance and assist it in more effectively achieving its goals: lower costs, less waste, faster responsiveness, simpler operation, increased market share, fewer outages, and so on.

The point is that from the customer’s point of view, the only measure of progress that matters is ROI. In other words, when will our investment start paying off? When will we see the business benefits of the solution we are investing in?

This is what we mean by value.

The vast majority of project management systems, books, and classes never dwell on this shortcoming, but instead impress on us the importance of the vast analytical array of data they can collect. Instead of giving insight into what customers really want to know, they shower us with “%-completes”, “actual vs. plan”, and other cost accounting analyses. These are easy, look impressive, and have a lot of analytical sizzle, but are essentially meaningless when it comes to understanding, much less predicting, when the customer will begin to see real business value.

The reason for this is simple. Cost accounting is not value accounting.

Further, while we have for decades tried to forge a useful link between cost and value, we have nevertheless been left with very unsatisfying results. In fact, it is not too difficult to conclude that knowledge of costs and burn rates, however deep, brings us no closer to understanding when the system will be delivered. That is, when we will actually realize the promised business value. Such knowledge is, if anything, highly misleading: one has only to look at the typical phenomenon of a task taking three times as long to finish the last 20% as it did the first 80%.

Fortunately, the answer is really quite simple. The elusive value metric we are seeking is right in front of us. It is requirements. More accurately, validated and delivered requirements.

See Golden Triangle diagram.

To see how this works, the diagram illustrates what one could call the golden triangle of value. This golden triangle holds for all technology projects of every type and size, and is completely independent of methodology, software engineering approach, or project management style.

What the golden triangle says is that the way to manage value delivery to your customer is to aggressively manage three artifacts (the requirements, the test cases that cover them, and the evolving solution) and their connections.

We start with the premise that customers are seeking not just solutions, but quality solutions. That is, solutions that perform as they expect, all the time, every time. We know from the quality industry that quality is not a subjective sense of “relative goodness” or some arbitrary opinion, but is simply conformance to the requirements the customer has laid down for that solution. The more effectively the solution meets those requirements, the higher the quality of the solution.

Period.

So, as we see in the diagram: if we can say with precision that the requirements fully define the solution we are seeking, and that we have test cases that cover those requirements, then, when we execute all those test cases against the evolving solution without generating any failures, we can say with confidence that the solution meets the requirements … that we have a quality solution.
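As an illustration, here is a small, purely hypothetical sketch of the triangle’s bookkeeping in Python; the requirement and test identifiers are invented. A requirement counts as validated only when every test case covering it passes:

    # Hypothetical golden-triangle bookkeeping: requirements, the test cases
    # that cover them, and the latest results of executing those tests
    # against the evolving solution. All identifiers are invented.
    requirements = {"R1", "R2", "R3"}
    coverage = {"R1": ["T1"], "R2": ["T2", "T3"], "R3": ["T4"]}  # req -> tests
    results = {"T1": "pass", "T2": "pass", "T3": "pass", "T4": "fail"}

    uncovered = requirements - coverage.keys()   # requirements with no tests
    validated = {r for r in requirements
                 if r in coverage
                 and all(results.get(t) == "pass" for t in coverage[r])}

    print("uncovered:", sorted(uncovered))       # -> []
    print("validated:", sorted(validated))       # -> ['R1', 'R2']; R3's T4 failed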

Accordingly, the value accounting metrics become:

  • Productivity, the number of validated and delivered requirements per labor hour (or labor dollar)
  • Cycle Time, the number of calendar days necessary to validate and deliver a requirement
  • Earned Value, the ratio of the number of requirements that have been validated and delivered to the total number of requirements in the solution
Naturally, there are more value metrics than these three, but they provide the foundation.
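As a rough sketch of what this looks like in practice, with invented data and field names, the three metrics might be computed from a simple delivery log:

    from datetime import date

    # Hypothetical delivery log: one record per validated, delivered requirement.
    delivered = [
        {"req": "R1", "started": date(2008, 6, 2),  "delivered": date(2008, 6, 20)},
        {"req": "R2", "started": date(2008, 6, 9),  "delivered": date(2008, 7, 3)},
    ]
    total_requirements = 40
    labor_hours_to_date = 640

    productivity = len(delivered) / labor_hours_to_date        # reqs per labor hour
    cycle_time = sum((d["delivered"] - d["started"]).days
                     for d in delivered) / len(delivered)      # avg calendar days
    earned_value = len(delivered) / total_requirements         # delivered / total

    print(f"productivity: {productivity:.4f} req/hr, "
          f"cycle time: {cycle_time:.1f} days, earned value: {earned_value:.0%}")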

The key take-away is that you should augment your cost accounting with value accounting if you truly want a window into how your project is doing and when it will be done successfully. And the key to value accounting lies in understanding requirements and how effectively they are being validated and delivered to your customers.

POSTED BY WAYNE SMITH AT 06:18 PM  |  1 COMMENT

Thursday, August 09, 2007

The sole purpose of measurement is learning. It is one of the highest forms of inquiry. Measurement offers reliable and often unexpected insights into the true nature of things. Further, this knowledge (true knowledge, if you will) is the key to improvement. Trying to improve something (a process, a product, yourself, …) when you lack true knowledge only makes things worse, not better. It is meddling, not managing.

Measurement is a process, of course, and as such requires all the necessary constituents of any process, namely method, tools, talent, etc.  In addition, for the measurement process to yield reliable insights, rather than distortions and “noise”, this process must itself be stable.

A common obstacle to effective measurement is the complexity of the underlying financial and statistical concepts, and of understanding how and when to apply these ideas for optimum return to the enterprise. The ability to coalesce and distill this multitude of ideas, formulae, and tools into a simple, pragmatic approach for inquiring into the dynamics and operational behavior of a process or operation is a hallmark of effective management and governance.

We have implemented many such measurement and governance approaches for our clients, in a variety of settings and contexts. We have found that Doug Hubbard’s new book, How To Measure Anything, is an important step in providing a window into many of these complexities, and offers a variety of straightforward approaches to measurement that, we feel, preserve its fundamental role as a vital instrument of inquiry and knowledge.

Finally, when we refer to measurement as a form of inquiry, we mean a very special type of inquiry, namely inquiry into variation and its root causes. It turns out that isolating the underlying drivers of variation is the first step to any sustainable, predictable improvement program. Without an effective measurement process, management lacks the knowledge necessary to successfully guide the enterprise from where it is now, to where it desires to be.
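One standard tool for this kind of inquiry, not prescribed by the discussion above but a common starting point in statistical process control, is the XmR (individuals and moving range) chart. A minimal sketch, with invented observations:

    # Minimal XmR (individuals / moving range) chart sketch: a standard SPC
    # technique for asking whether a process is stable. Data are invented.
    obs = [12.1, 11.8, 12.6, 12.0, 11.5, 12.3, 12.2, 11.9,
           12.4, 12.1, 11.7, 12.5, 14.9, 12.0, 12.2]

    mean = sum(obs) / len(obs)
    moving_ranges = [abs(b - a) for a, b in zip(obs, obs[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)

    # Conventional individuals-chart limits: mean +/- 2.66 * average moving range.
    ucl, lcl = mean + 2.66 * avg_mr, mean - 2.66 * avg_mr

    for i, x in enumerate(obs):
        if not lcl <= x <= ucl:
            # A point outside the limits signals special-cause variation:
            # a candidate for root-cause investigation, not for tampering.
            print(f"observation {i} = {x} is outside [{lcl:.2f}, {ucl:.2f}]")

Points inside the limits are routine variation; improving on them means improving the process itself, not chasing individual values.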


POSTED BY WAYNE SMITH AT 05:30 PM  |  0 COMMENTS

Monday, March 20, 2006

To improve customer perception of our products and services we often ask the customer directly – what do you want?  

There are many common sources of such feedback: focus groups, surveys, call-backs, etc.

Most often these methods yield heavily biased responses, and the effort is therefore misplaced.

  • Focus groups with ‘pre-selected’ customers give responses biased in favor of the persons doing the selecting.
  • Surveys get replies from people who ‘like’ surveys – i.e., more bias.
  • Call-backs typically reach the complainers – did I hear bias?
  • Etc.
So how do you get unbiased feedback? You can’t. By definition, human feedback is biased.

Customer feedback should be reserved for addressing specific narrow issues: complaints, failures, accidents, etc.  

If you want broadband quality improvement you go to the experts – Deming, Juran, Crosby, Feigenbaum, Ishikawa, Garvin, Taguchi, etc. All mention customers in passing, but get down to business with “PRODUCT”: Zero Defects, Six Sigma, quality circles, TQM, testable/tested requirements, ‘ilities, SPC/mature processes, focused metrics, trained workers, and continuous, continuous improvement.

Biased customer feedback leads to gimmicks, fads, crazes, etc., which move markets in the short term. Good, value-priced products survive and win in the long term.

To summarize, we are faced with the old dilemma: opinion vs. measurement. Customers provide opinion. Product needs can be identified and measured only through rigorous, objective analysis.

POSTED BY MICHAEL WALDMANN AT 10:53 PM  |  0 COMMENTS

Thursday, December 22, 2005

Measurement of technology project progress has always been a somewhat unsatisfying proposition. It has been (and still is) dominated by the accounting principles of actuals versus plan. These remain valuable tools. But, it is important to understand what these tools do: They measure burn rate. That's it.

Now, burn rate is a useful metric, as is its derivative, burn acceleration: the change in burn rate over time can be a helpful index of the project's overall stability, and thus its predictability. (Process stability, by the way, is an absolute prerequisite for extracting true, actionable knowledge from any metric. Otherwise, it is just meddling, and making things worse.)
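For illustration, a tiny sketch, with invented numbers, of burn rate and burn acceleration derived from cumulative monthly spend:

    # Burn rate and "burn acceleration" from cumulative monthly actuals.
    # All numbers are invented for illustration.
    cumulative_spend = [80_000, 175_000, 295_000, 455_000]   # $ at each month-end

    burn_rate = [b - a for a, b in
                 zip([0] + cumulative_spend, cumulative_spend)]     # $/month
    burn_accel = [b - a for a, b in zip(burn_rate, burn_rate[1:])]  # change, $/mo

    print("burn rate:", burn_rate)           # [80000, 95000, 120000, 160000]
    print("burn acceleration:", burn_accel)  # [15000, 25000, 40000]: rising, a
                                             # sign the burn is not yet stable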

Again, metrics based on actual-plan variances are all very useful, but none of these metrics says anything about real tangible progress. All they do is tell you how fast the project is eating dollars and schedule at this moment in time. None of them answers, or even gives you any insight into, the only important questions that customers ever have: When can I start using it? And, how much will it cost me?

It has always struck me as odd that answers to such a simple set of questions have remained so elusive.

However, we see repeatedly that software engineering measurement has been governed more by the principle of what is easy to measure and simple to compute, rather than what is important to our customers.

Real earned value, the only "earned value" that has any meaningful economic or business underpinning, is validated, delivered requirements.

Further, the ratio of validated, delivered requirements to the total number of requirements (and the cost, velocity, and acceleration of that ratio over time) is the only measure of true progress.
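As a hypothetical sketch (invented counts), tracking that ratio over successive reporting periods yields a velocity and, assuming a stable delivery process, a naive answer to the customer’s “when can I start using it?”:

    # Hypothetical sketch: the delivered-requirements ratio over successive
    # months, its velocity, and a naive completion projection.
    total_requirements = 120
    delivered_by_month = [6, 14, 25, 38]            # cumulative, invented

    ratio = delivered_by_month[-1] / total_requirements
    velocity = [b - a for a, b in
                zip(delivered_by_month, delivered_by_month[1:])]  # reqs/month
    avg_velocity = sum(velocity) / len(velocity)

    remaining = total_requirements - delivered_by_month[-1]
    months_left = remaining / avg_velocity          # meaningful only if the
                                                    # delivery process is stable
    print(f"progress: {ratio:.0%}; roughly {months_left:.1f} months to go")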

In other words, this represents a fundamental change in the nature of software engineering: a change from a view based on artificial proxies of value such as planned cost or planned duration (or, even worse, lines of code produced, or even function points, although function points come much closer to a proxy for value than what went before), to a view of software engineering whose value base is requirements.

Certainly we know by now (or should know) that a plan is no reliable proxy for value. Having "earned" our way through the plan, and even having "completed" all those tasks (are they even the right tasks?), tells us nothing about how close, or how far, we are from delivering any value to the customer.

Any performance metric that does not incorporate validated, delivered requirements is not (indeed, cannot be) a meaningful measure of progress, health, or anything else approximating a useful answer to the questions customers really have.

My hope here is not that we throw out all these other metrics---that is too unrealistic a hope given the entrenchment of the current accounting biases---but, that we finally come to recognize the central role that requirements play in everything we do, and thus reintroduce requirements into the software engineering vocabulary and process in an orderly and disciplined manner.

Requirements are, after all, our only basis for quality and customer value. Everything else we do in software engineering must be derived from this base. Or, it should be. Remember that time-honored maxim: GIGO.


POSTED BY WAYNE SMITH AT 03:17 PM  |  4 COMMENTS