Value-Driven, Risk-Adjusted, Solution Delivery

Over the years there have been many discussions, as well as actual offerings, that purport to be methodologies or processes for delivering technology projects. It has long been our view that all of these seemingly varied techniques are simply special cases of a single, common, general process. This super class of software engineering process has never been afforded much discourse, in spite of the continuing ineffectiveness of all the current variant approaches. Yet only by understanding the dynamics and performance of this natural super class can we begin to learn how to improve our ability to deliver value to our customers.

Value.

That is the key to understanding this software engineering super class. In other words, the primary directive (if you will) must be to deliver value to customers. Otherwise, why are we doing any of this?

Consequently, our label for this super class of software engineering process is value-driven, risk-adjusted, solution delivery.

Let's look at this label more closely. We have already mentioned the importance of value. This is the idea that the customer must be the sole arbiter of quality and fitness, and that this externally focused (i.e., external to the project team) business value perspective must drive all project decision-making. In other words, when we are deciding delivery sequences, requirement priorities, and the like, we should be guided by what delivers the most value to our customer.

The second part of the label, risk-adjusted, refers to the importance of understanding that unmanaged risk is the source of all project problems. Technology projects are, after all, business investments. And, as with any business investment, its return (that is, its value in the form of benefits like increased revenues, reduced operating costs, higher utilization of resources, etc.) must be sufficient to compensate the investor (the customer, sponsor, or owner) for the risks they are taking. Accordingly, an effective software engineering process must assist project management and the project team in continuously identifying risk and then provide mechanisms for aggressively removing or mitigating these solution acquisition risks. If this isn't done properly, then the economic value of that investment can be substantially impaired.
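
To make this investment framing concrete, here is a minimal sketch of risk-adjusted discounting. All of the figures, and the size of the risk premium, are hypothetical; they exist only to illustrate how unmanaged risk erodes an investment's return:

    # Hypothetical illustration: the same benefit stream discounted at a
    # baseline rate versus a rate carrying a premium for unmanaged risk.
    # Every number below is invented for the example.

    def npv(cash_flows, rate):
        """Net present value of yearly cash flows at the given discount rate."""
        return sum(cf / (1 + rate) ** year
                   for year, cf in enumerate(cash_flows, start=1))

    benefits = [120_000, 150_000, 150_000]   # expected yearly benefits
    investment = 300_000                     # up-front cost of the project

    risk_free = 0.05       # baseline cost of capital
    risk_premium = 0.15    # extra return demanded for acquisition risk

    print(f"Return, risk managed:   {npv(benefits, risk_free) - investment:+,.0f}")
    print(f"Return, risk unmanaged: {npv(benefits, risk_free + risk_premium) - investment:+,.0f}")
    # The identical benefit stream is a sound investment at the baseline rate
    # and a value-destroying one once unmanaged risk inflates the required return.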

Finally, we come to the phrase solution delivery. This phrase emphasizes that the primary goal of the software engineering process is to actually deliver tangible, usable value in the form of complete business solutions to the customer, not simply to build or install software. The key objective is delivered solutions that operate in the customer's real world. That is, enabling and enhancing a company's ability to grow and compete in its marketplace through the acquisition (whether built or bought) of complete, fully integrated, and organizationally unified business capabilities.

So, in our view, value-driven, risk-adjusted solution delivery is the mission statement for all meaningful software engineering processes.

When viewed in this context, it becomes clear that a critical success factor for such a process is the ability to rapidly and reliably make risk unambiguously visible to the project team, and then to provide the means to mitigate that risk so that it can be managed. To be clear, by managing risk we mean the following (a small illustrative sketch follows the list):

  • We know what the exposure is (what creates the risk in the first place)
  • We know its severity, should it occur (i.e., how it reduces, or delays, the benefit stream, increases the costs, etc.)
  • We know the likelihood of it occurring
  • We know the remedial actions that can be taken that will preserve the investment's return potential as compared with alternate uses of those resources (money, talent, technology)
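
A minimal sketch of how such a managed-risk record might be kept; the field names, figures, and ranking heuristic are our own illustrative assumptions, not part of any particular method:

    # An illustrative risk record capturing the four elements above: exposure,
    # severity, likelihood, and remedial actions. Names and numbers are
    # hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        exposure: str           # what creates the risk in the first place
        severity: float         # impact on benefits/costs if it occurs
        likelihood: float       # probability of occurrence, 0.0 to 1.0
        remedies: list = field(default_factory=list)

        @property
        def expected_impact(self) -> float:
            """A crude single-number ranking: severity weighted by likelihood."""
            return self.severity * self.likelihood

    risks = [
        Risk("Key requirement still ambiguous", severity=80_000, likelihood=0.4,
             remedies=["deliver a working slice early", "review it with the customer"]),
        Risk("Vendor component unproven at our volumes", severity=200_000,
             likelihood=0.1, remedies=["run a load-test spike", "line up a fallback"]),
    ]

    for r in sorted(risks, key=lambda r: r.expected_impact, reverse=True):
        print(f"{r.expected_impact:>10,.0f}  {r.exposure}")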

Risk is essentially a measure of the variability of the project's outcomes (principally seen or felt in its benefits, costs, quality, etc.). The greater this variability, the greater the overall project risk. Consequently, risk is a measure of uncertainty. The less certain we are regarding a project's outcome (or any single dimension of its outcome, say, its total cost), the greater the likelihood that particular outcome will not meet its target. Thus, the greater the risk.

So, one important question that arises is how do we reduce uncertainty?

It turns out that this is a telling question. Because, when we look carefully at uncertainty, we see that uncertainty in the project's outcomes is zero (or, pretty dog-gone small anyway) at the end of that project. This is because we now know (since we are done) exactly how much it will (i.e., did) cost, and whether the benefits were actually realized at the targeted levels and time frames, etc. Accordingly, the day before the last day, the risk is higher but only slightly higher; a week before, the risk is even higher, and so on, until the risk reaches a maximum on day one of the project, where uncertainty is typically the highest. (This is not to imply that risk is a linear function of elapsed project time. It is, in fact, quite non-linear. But rather, to say that risk can only be reduced by capturing the appropriate learning that can only come from executing the project. But, if that learning is not captured, then uncertainty may not be reduced---may actually be increased---as the project advances.)
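
A small simulation can make this narrowing visible. Everything here is invented: twenty tasks, each estimated at 10 units with the same spread, and we watch the forecast of total cost tighten as actuals replace estimates:

    # Illustrative only: as tasks complete, actual costs replace estimates, and
    # the spread (uncertainty) of the total-cost forecast shrinks, reaching zero
    # on the last day. All numbers are hypothetical.
    import random

    random.seed(1)
    n_tasks = 20
    actuals = [random.gauss(10, 3) for _ in range(n_tasks)]  # true task costs

    def forecast_spread(tasks_done, trials=10_000):
        """Standard deviation of the total-cost forecast after tasks_done tasks."""
        known = sum(actuals[:tasks_done])            # captured learning
        remaining = n_tasks - tasks_done
        totals = [known + sum(random.gauss(10, 3) for _ in range(remaining))
                  for _ in range(trials)]
        mean = sum(totals) / trials
        return (sum((t - mean) ** 2 for t in totals) / trials) ** 0.5

    for done in (0, 5, 10, 15, 20):
        print(f"tasks done: {done:2d}   forecast spread: {forecast_spread(done):5.2f}")
    # Spread is largest on day one and zero at completion, but only because each
    # completed task's actual cost was captured and fed back into the forecast.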

Just to push this argument a bit further, if we accept that uncertainty (and thus risk) is at its lowest when the project is completed, then we could say that if we find another customer with exactly the same needs, we could simply deliver that same solution to them as well and be confident that it will be an essentially zero-risk undertaking. And if we find a third such customer, then we can deliver the same solution to them as well, and so on. This, of course, is nothing more than reuse.

Consequently, a key learning is that reuse reduces uncertainty, and thus reduces project risk. This seems very intuitive. Yet, as we have pointed out before, reuse is still a woefully underutilized practice. Stated differently, an effective software engineering process must make it easy to assemble solutions rather than build them.

But why should this be so?

One answer is that reuse is failure-free. (Remember that we have assumed that we are referring to the reuse of a solution for a customer with exactly the same needs as the initial customer. While recognizing that this condition---that is, both customers having exactly the same needs---is extremely unlikely, we can agree that the more similar the targeted customers' needs are to the original, the more certain one can be that the solution will be as failure-free as possible.)

Reuse for other customers exploits the investment that the team has already made in removing defects from the solution for the initial customer. This fact yields another learning: bugs, by their very nature, increase uncertainty and thus increase project risk.

Consequently, an effective software engineering discipline must (a) make it difficult to insert defects into the work products in the first instance, and (b) failing that, make it easy to locate them so they can be removed before they are shipped downstream (i.e., to the next stage, process, or customer), where they have exponentially larger impacts.
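
The arithmetic behind "exponentially larger impacts" can be sketched directly. The tenfold-per-stage multiplier below is a commonly cited rule of thumb, not a measured constant; treat it as the sketch's assumption:

    # Illustrative cost-of-escape arithmetic. The 10x-per-stage multiplier is a
    # widely quoted rule of thumb, assumed here for the sake of the example.
    stages = ["analysis", "design", "code", "test", "production"]

    for stages_downstream, stage in enumerate(stages):
        cost = 10 ** stages_downstream    # relative cost to fix at this stage
        print(f"defect fixed in {stage:<11} costs {cost:>7,}x")
    # Caught where it was injected: 1x. Found by the customer in production:
    # on the order of 10,000x under this assumption.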

Well, now we are getting somewhere. So, to reduce risk we must reduce defects. OK, but what is a defect?

To properly understand the answer to this question, we must first examine the nature of the software engineering process itself. That is, what is its essence? All software engineering processes (regardless of their vocabulary, tool sets, etc.) can essentially be viewed as a sequence of translation steps. Each step attempts to elaborate the problem and solution domains to lower levels of refinement, starting with some usually informal narrative description of the problem or opportunity, until a refinement level is reached that can be directly implemented (typically a very rigorous, unambiguous software specification or source code). At this point, and only at this point, can the results of all this translation actually deliver any tangible value to the customer.

The precise number of these steps, their exact method of translation, the nature of these refinement levels, and the format and structure of each step's output is defined by the particular software engineering process being used. But, all such processes, nevertheless, share this step-wise translation and refinement characteristic.

Further, each of these translation steps comprises two distinct activities: representation and validation.

The representation activity involves the elaboration of the work products from their current level of refinement to the next lower level of refinement. For most software engineering processes, a typical elaboration sequence is analysis, design, specification, coding, etc.

The validation activity involves ensuring that each such translation step is complete, error-free, and relevant. That is, ensuring that the meaning of a representation at any level is exactly semantically equivalent to the meaning of the representation at the prior level. Typical validation activities include walk-throughs, inspections, testing, operational use, etc.

This step-wise translation and refinement technique is also referred to as the levels of abstraction technique since its primary engine is the refinement of the "problem" statement, starting with the level closest to the external customer and the operational world, and proceeding level by conceptual level until an abstraction level is reached that is sufficiently complete, unambiguous, and precise that it can be implemented on some processor.
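
To make the model concrete, here is a toy rendering of one translation step: a representation activity that elaborates to the next refinement level, paired with a validation activity that checks semantic equivalence with the prior level. The level names and the equivalence check are purely illustrative assumptions:

    # A toy model of a translation step: each step performs a representation
    # activity (elaborate to the next refinement level) and a validation
    # activity (check semantic equivalence with the prior level).

    def translate(artifact, level, represent, validate):
        prior_level, prior_repr = list(artifact.items())[-1]
        new_repr = represent(prior_repr)             # representation activity
        if not validate(prior_repr, new_repr):       # validation activity
            raise ValueError(f"defect: {level} does not preserve {prior_level}")
        return {**artifact, level: new_repr}

    # Trivial stand-ins: every level re-tags the same meaning, and validation
    # checks that the tagged meaning is unchanged.
    meaning = lambda r: r.split(": ", 1)[1]
    same_meaning = lambda a, b: meaning(a) == meaning(b)

    artifact = {"narrative": "narrative: customer can renew a policy online"}
    for level in ("analysis", "design", "specification", "code"):
        artifact = translate(artifact, level,
                             represent=lambda r, lv=level: f"{lv}: {meaning(r)}",
                             validate=same_meaning)
    print(list(artifact))  # ['narrative', 'analysis', 'design', 'specification', 'code']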

Now, finally, we can answer the question: what is a defect, anyway?

A defect is simply the result of an error in one of these translation steps. (Where error means that the translation did not preserve the semantic integrity of the prior step.)

At this point it might be useful to point out where we believe the industry has diverged from an optimal path over the last few decades of research. Given this model, one can argue that there are two avenues of research that could prove helpful in improving the effectiveness of the software engineering process: (1) improve the various translation techniques at each step, or (2) use fewer translation steps. While the industry has overwhelmingly focused on more and better translation techniques and tools (that is, research avenue 1), very little has been done to find methods that would actually require fewer steps (and thus present fewer opportunities for error), with, of course, the goal of reducing the steps to only one: Problem = Solution. This, we feel, is the avenue that offers the greatest potential for our industry.
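
The case for fewer steps is simple arithmetic. If each translation step preserves semantics with some probability p (the 0.9 below is hypothetical), then a chain of n independent steps is defect-free with probability p to the nth power:

    # If each translation step preserves semantics with probability p, a chain
    # of n independent steps is defect-free with probability p ** n. The value
    # p = 0.9 is hypothetical.
    p = 0.9
    for n in (1, 2, 4, 6, 8):
        print(f"{n} steps: {p ** n:.0%} chance of a defect-free chain")
    # One step: 90%. Eight steps: roughly 43%. Hence the appeal of driving
    # toward Problem = Solution, a single translation step.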

Recapping the story so far,

  • Risk is a measure of uncertainty (reducing uncertainty reduces risk)
  • Reuse increases certainty (assemble rather than build)
  • Bugs decrease certainty (never ship defects downstream)
  • Fewer translation steps = fewer opportunities for error

These all appear to be important software engineering principles.

But, there is one more principle, perhaps the most important principle, in describing what we mean by value-driven, risk-adjusted solution delivery:

Customer usage of the solution increases certainty.

Everything else (meetings, prototypes, inspections, reviews, testing) is but a weak approximation to actual customer usage. We saw this principle in action earlier when we commented that uncertainty (and thus risk) is at its lowest when we have actually delivered the solution to the customer and they are using it to realize the benefits of its operation.

Now, of course, this occurs at the end of the project. But, why should it only happen then? Why not deliver all the way through the project's duration, starting at the very beginning? Certainly, the earlier we do this the better, right? And when we say deliver, we mean deliver customer value, not designs, or specs, or the myriad other work products that we have contrived to gain customer "buy-in" or approval. We mean deliver actual operational functionality. We mean deliver tangible solutions that they can start using right away.

Think how the software engineering world would change if we established an iron-clad rule that every project must deliver tangible operational value to the customer every six weeks. Every two weeks? Every day?

(This is quite practical, by the way, for both very large and very small technology efforts. All that is required is for the project team to understand that they must begin on day one to think in terms of customer value and how to break that value into chunks that can be incrementally delivered and continuously integrated into the evolving total solution.)

We have found that the higher the (value) delivery frequency, the greater the reduction in uncertainty and risk. As a result, a key question our project managers ask is "How many delivery cycles (iterations) are necessary to maximize value and reduce risk commensurate with the corresponding increased energy expenditure due to each iteration?" That balance becomes the vital optimization issue for the project team.
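
One way to frame that optimization is to model risk-reduction value as growing with diminishing returns in the number of cycles while each cycle adds overhead, then pick the cycle count that maximizes net value. The curve shape and every constant below are invented for the sketch:

    # A toy framing of the delivery-frequency trade-off. Risk-reduction value
    # grows with diminishing returns in the number of cycles; each cycle adds a
    # fixed overhead. All constants and curve shapes are invented.
    import math

    total_value = 1_000_000      # value at stake if uncertainty were fully removed
    overhead_per_cycle = 40_000  # planning/integration energy each iteration costs

    def net_value(cycles):
        risk_reduction = total_value * (1 - math.exp(-0.3 * cycles))
        return risk_reduction - overhead_per_cycle * cycles

    best = max(range(1, 25), key=net_value)
    print(f"best cycle count under these assumptions: {best} "
          f"(net value {net_value(best):+,.0f})")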

Finally, frequent customer delivery and usage also helps transfer ownership of the solution to the customer very early (which has been found to be a critical success factor), and it helps close expectation gaps: the gap between what a customer thinks or believes a solution will do for them, based on their current understanding of the discussions and documents they have seen together with their own biases and paradigms, and what the solution actually delivers in reality, which can only be ascertained by using it.