Thursday, December 22, 2005

Or

Value-Driven, Risk-Adjusted Solution Delivery

Over the years there have been many discussions, as well as actual offerings, that purport to be methodologies or processes for delivering technology projects. It has long been our view that all of these seemingly varied techniques are simply special cases of a single common general process. This super class of software engineering process has never been afforded much discourse, in spite of the continuing ineffectiveness of all the current variant approaches. Yet it is only by understanding the dynamics and performance of this natural super class that we can begin to learn how to improve our ability to deliver value to our customers.

Value.

That is the key to understanding this software engineering super class. In other words, the primary directive (if you will) must be to deliver value to customers. Otherwise, why are we doing any of this?

Consequently, our label for this super class of software engineering process is value-driven, risk-adjusted solution delivery.

Let's look at this label more closely. We've mentioned already the importance of value. This is the idea that the customer must be the sole arbiter of quality and fitness, and that this externally focused (i.e., external to the project team) business value perspective must drive all project decision-making. In other words, when we are deciding delivery sequences, priorities of requirements, etc., we should be guided by what delivers the most value to our customer.

The second part of the label, risk-adjusted, refers to the importance of understanding that unmanaged risk is the source of all project problems. Technology projects are, after all, business investments. And, as with any business investment, its return (that is, its value in the form of benefits like increased revenues, reduced operating costs, higher utilization of resources, etc.) must be sufficient to compensate the investor (the customer, sponsor, or owner) for the risks they are taking. Accordingly, an effective software engineering process must assist project management and the project team in continuously identifying risk and then provide mechanisms for aggressively removing or mitigating these solution acquisition risks. If this isn't done properly, then the economic value of that investment can be substantially impaired.

Finally, we come to the phrase, solution delivery. This phrase emphasizes that the primary goal of the software engineering process is to actually deliver tangible, usable value in the form of complete business solutions to the customer, not simply to build or install software. The key objective is delivered solutions that operate in the customer's real world. That is, enabling and enhancing a company's ability to grow and compete in its marketplace through the acquisition (whether built or bought) of complete, fully integrated, and organizationally unified business capabilities.

So, in our view, value-driven, risk-adjusted solution delivery is the mission statement for all meaningful software engineering processes.

When viewed in this context, it becomes clear that a critical success factor for such a process is the ability to rapidly and reliably make risk unambiguously visible to the project team, and then to provide the means to mitigate that risk so that it can be managed. By managing risk, we mean that:

  • We know what the exposure is (what creates the risk in the first place)
  • We know its severity, should it occur (i.e., how it reduces, or delays, the benefit stream, increases the costs, etc.)
  • We know the likelihood of it occurring
  • We know the remedial actions that can be taken to preserve the investment's return potential as compared with alternate uses of those resources (money, talent, technology)

Risk is essentially a measure of the variability of the project's outcomes (principally seen or felt in its benefits, costs, quality, etc.). The greater this variability, the greater the overall project risk. Consequently, risk is a measure of uncertainty. The less certain we are regarding a project's outcome (or any single dimension of its outcome, say, its total cost), the greater the likelihood that particular outcome will not meet its target. Thus, the greater the risk.
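
By way of illustration, here is a minimal sketch of these ideas in code, assuming a simple risk-register structure. The exposures, dollar figures, and cost scenarios are invented for the example.

    from dataclasses import dataclass
    from statistics import mean, stdev

    @dataclass
    class Risk:
        exposure: str      # what creates the risk in the first place
        severity: float    # cost impact if it occurs (dollars)
        likelihood: float  # probability of occurrence, 0.0 to 1.0

    # A hypothetical register for one project.
    register = [
        Risk("key interface spec still ambiguous", 120_000, 0.40),
        Risk("vendor component unproven at our volumes", 300_000, 0.15),
    ]

    # Expected impact of each exposure, and of the register as a whole.
    for r in register:
        print(f"{r.exposure}: expected impact = {r.severity * r.likelihood:,.0f}")
    print(f"total expected impact = {sum(r.severity * r.likelihood for r in register):,.0f}")

    # Risk as variability: the wider the spread of believable cost outcomes,
    # the greater the uncertainty, and thus the risk.
    cost_scenarios = [900_000, 1_100_000, 1_600_000, 2_400_000]
    print(f"mean cost = {mean(cost_scenarios):,.0f}, spread = {stdev(cost_scenarios):,.0f}")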

So, one important question that arises is how do we reduce uncertainty?

It turns out that this is a telling question. Because, when we look carefully at uncertainty, we see that uncertainty in the project's outcomes is zero (or pretty doggone small, anyway) at the end of that project. This is because we now know (since we are done) exactly how much it will (i.e., did) cost, and whether the benefits were actually realized at the targeted levels and time frames, etc. Accordingly, the day before the last day, the risk is only slightly higher; a week before, higher still; and so on, until risk reaches its maximum on day one of the project, where uncertainty is typically highest. (This is not to imply that risk is a linear function of elapsed project time. It is, in fact, quite non-linear. Rather, it is to say that risk can only be reduced by capturing the appropriate learning that can only come from executing the project. If that learning is not captured, then uncertainty may not be reduced---it may actually increase---as the project advances.)

Just to push this argument a bit further: if we accept that uncertainty (and thus risk) is at its lowest when the project is completed, then we could say that if we find another customer with exactly the same needs, we could simply deliver that same solution to them as well, and be confident that it would be an essentially zero-risk undertaking. And if we find a third such customer, then we can deliver the same solution to them as well, and so on. This, of course, is nothing more than reuse.

Consequently, a key learning is that reuse reduces uncertainty, and thus reduces project risk. This seems very intuitive. Yet, as we have pointed out before, reuse is still a woefully underutilized practice. Stated differently, an effective software engineering process must make it easy to assemble solutions rather than build them.

But why should this be so?

One answer is that reuse is failure-free. (Remember that we have assumed that we are referring to the reuse of a solution for a customer with exactly the same needs as the initial customer. While recognizing that this condition---both customers having exactly the same needs---is extremely unlikely, we can agree that the more similar the targeted customers' needs, the more certain one can be that the solution will be as failure-free as possible.)

The fact that reuse for other customers exploits the investment the team has already made in removing defects from the solution for the initial customer yields another learning: Bugs, by their very nature, increase uncertainty and thus increase project risk.

Consequently, an effective software engineering discipline must (a) make it difficult to insert defects into the work-products in the first instance, and (b) failing that, make it easy to locate them so they can be removed before they are shipped downstream (i.e., to the next stage, process, or customer) where they have exponentially larger impacts.
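
A rough illustration of that downstream escalation, using the familiar rule of thumb that a defect's repair cost multiplies at each stage it survives (the 5x per-stage multiplier is an assumption for the example, not a measurement):

    # Relative cost to fix one defect, by the stage in which it is found.
    stages = ["requirements", "design", "code", "test", "production"]
    cost = 1.0
    for stage in stages:
        print(f"found in {stage}: ~{cost:.0f}x")
        cost *= 5  # assumed per-stage multiplier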

Well, now we are getting somewhere. So, to reduce risk we must reduce defects. OK, but what is a defect?

To properly answer this question we must first examine the nature of the software engineering process itself. That is, what is its essence? All software engineering processes (regardless of their vocabulary, tool sets, etc.) can essentially be viewed as a sequence of translation steps. Each translation step attempts to elaborate the problem and solution domains to lower levels of refinement (starting with some usually informal narrative description of the problem or opportunity) until a refinement level is reached that can be directly implemented (typically a very rigorous, unambiguous software specification or source code). At this point, and only at this point, can the results of all this translation actually deliver any tangible value to the customer.

The precise number of these steps, their exact method of translation, the nature of these refinement levels, and the format and structure of each step's output is defined by the particular software engineering process being used. But, all such processes, nevertheless, share this step-wise translation and refinement characteristic.

Further, each of these translation steps comprises two distinct activities: representation and validation.

The representation activity involves the elaboration of the work products from their current level of refinement to the next lower level of refinement. For most software engineering processes, a typical elaboration sequence is analysis, design, specification, coding, etc.

The validation activity involves ensuring that each such translation step is complete, error-free, and relevant. That is, ensuring that the meaning of a representation at any level is exactly semantically equivalent to the meaning of the representation at the prior level. Typical validation activities include walk-throughs, inspections, testing, operational use, etc.
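
Here is a deliberately tiny sketch of this two-activity model for a single step. The domain (a "level" is just a set of required capabilities), the step names, and the equivalence check are all stand-ins for the example:

    # Each translation step = representation (elaborate one level down) plus
    # validation (confirm the new level means exactly what the prior level meant).

    def represent_as_design(analysis: set) -> set:
        # representation activity: elaborate analysis items into design items
        return {f"design:{item}" for item in analysis}

    def validate_design(analysis: set, design: set) -> bool:
        # validation activity: semantic equivalence with the prior level
        return {d.split(":", 1)[1] for d in design} == analysis

    analysis = {"capture order", "compute invoice"}
    design = represent_as_design(analysis)

    if not validate_design(analysis, design):
        raise ValueError("defect: this step lost or distorted meaning")
    print(design)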

This step-wise translation and refinement technique is also referred to as the levels of abstraction technique since its primary engine is the refinement of the "problem" statement, starting with the level closest to the external customer and the operational world, and proceeding level by conceptual level until an abstraction level is reached that is sufficiently complete, unambiguous, and precise that it can be implemented on some processor.

Now, finally, we can answer the question: what is a defect, anyway?

A defect is simply the result of an error in one of these translation steps. (Where error means that the translation did not preserve the semantic integrity of the prior step.)

At this point it might be useful to point out where we believe the industry has diverged from an optimal path over the last few decades of research. Given this model, one can argue that there are two avenues of research that could prove helpful in improving the effectiveness of the software engineering process: (1) improve the various translation techniques at each step, or (2) use fewer translation steps. While the industry has overwhelmingly focused on more and better translation techniques and tools (that is, research avenue 1), very little has been done to find better methods that would actually require fewer steps (and thus fewer opportunities for errors), with, of course, the goal of reducing the steps to only one: Problem = Solution. This is the avenue that we feel will offer the greatest potential for our industry.
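
Why fewer steps matter can be seen with simple arithmetic. If each step independently preserves meaning with some probability, the chance of a defect-free chain shrinks with every step added. The 5% per-step error rate below is assumed purely for illustration:

    p = 0.05  # assumed chance that any single translation step introduces a defect

    for n in (5, 3, 1):
        print(f"{n} step(s): P(defect-free) = {(1 - p) ** n:.2f}")
    # 5 steps -> 0.77, 3 steps -> 0.86, 1 step -> 0.95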

Recapping the story so far,

  • Risk is a measure of uncertainty (reducing uncertainty reduces risk)
  • Reuse increases certainty (assemble rather than build)
  • Bugs decrease certainty (never ship defects downstream)
  • Fewer translation steps = fewer opportunities for error

These all appear to be important software engineering principles.

But, there is one more principle, perhaps the most important principle, in describing what we mean by value-driven, risk-adjusted solution delivery:

Customer usage of the solution increases certainty.

Everything else (meetings, prototypes, inspections, reviews, testing) is but a weak approximation to actual customer usage. We saw this principle in action earlier when we commented that uncertainty (and thus risk) is at its lowest when we have actually delivered the solution to the customer and they are using it to realize the benefits of its operation.

Now, of course, this occurs at the end of the project. But, why should it only happen then? Why not deliver all the way through the project's duration, starting at the very beginning? Certainly, the earlier we do this the better, right? And when we say deliver, we mean deliver customer value, not designs, or specs, or the myriad other work products that we have contrived to gain customer "buy-in" or approval. We mean deliver actual operational functionality. We mean deliver tangible solutions that they can start using right away.

Think how the software engineering world would change if we established an iron-clad rule that every project must deliver tangible operational value to the customer every 6 weeks. Every 2 weeks? Every day?

(This is very practical, by the way, for both very large and very small technology efforts. All that is required is for the project team to understand that they must begin on day one to think in terms of customer value, and how to break that value up into chunks that can be incrementally delivered and continuously integrated into the evolving total solution.)

We have found that the higher the (value) delivery frequency, the greater the reduction in uncertainty and risk. As a result, a key question our project managers ask is: "How many delivery cycles (iterations) are necessary to maximize value and reduce risk, commensurate with the increased energy expenditure each iteration adds?" That balance becomes the vital optimization issue for the project team.
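
One way to see the shape of that optimization is a toy model in which each extra delivery cycle adds a fixed overhead but also burns down residual uncertainty. The overhead, starting risk cost, and burn-down rate below are assumptions for the example only:

    OVERHEAD_PER_CYCLE = 4.0     # e.g., person-days of release and integration work
    INITIAL_RISK_COST = 200.0    # expected cost of unmanaged risk with no feedback

    def expected_total_cost(n_cycles: int) -> float:
        residual_risk = INITIAL_RISK_COST * (0.7 ** n_cycles)  # each cycle removes ~30%
        return n_cycles * OVERHEAD_PER_CYCLE + residual_risk

    best = min(range(1, 30), key=expected_total_cost)
    print(f"optimal number of cycles under these assumptions: {best}")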

Finally, frequent customer delivery and usage also helps transfer ownership of the solution very early to the customer (which has been found to be a critical success factor) as well as helping to close expectation gaps (the gap between what a customer thinks or believes a solution will do for them based on their current understanding of the discussions and documents they have seen together with their own biases and paradigms, and what the solution actually delivers in reality---which can only be ascertained by using it).


POSTED BY WAYNE SMITH AT 05:32 PM  |  1 COMMENT

Measurement of technology project progress has always been a somewhat unsatisfying proposition. It has been (and still is) dominated by the accounting principles of actuals versus plan. These remain valuable tools. But, it is important to understand what these tools do: They measure burn rate. That's it.

Now, burn rate is a useful metric, as well as its derivative, burn acceleration. That is, the change in burn rate over time can be a helpful index into the project's overall stability, and thus its predictability. (Process stability, by the way, is an absolute necessity for true actionable knowledge from any metric. Otherwise, it is just meddling---and making things worse.)
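
As a quick illustration of both metrics (the monthly actuals are invented):

    # Burn rate is spend per period; burn acceleration is its change over time.
    actuals_by_month = [80, 95, 115, 150]  # $K spent each month

    burn_rate = actuals_by_month
    burn_accel = [b - a for a, b in zip(burn_rate, burn_rate[1:])]

    print(f"burn rate: {burn_rate}")           # how fast dollars are being eaten
    print(f"burn acceleration: {burn_accel}")  # +15, +20, +35: not a stable process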

Again, metrics based on actual-plan variances are all very useful, but none of these metrics says anything about real tangible progress. All they do is tell you how fast the project is eating dollars and schedule at this moment in time. None of them answers, or even gives you any insight into, the only important questions that customers ever have: When can I start using it? And, how much will it cost me?

It has always struck me as odd that such a simple set of questions has been so elusive.

However, we see repeatedly that software engineering measurement has been governed more by what is easy to measure and simple to compute than by what is important to our customers.

Real earned value, the only "earned value" that has any meaningful economic or business underpinning, is validated, delivered requirements.

Further, the ratio of validated, delivered requirements to the total number of requirements (and the cost, velocity, and acceleration of that ratio over time) is the only measure of true progress.
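
A sketch of that measure over a few reporting periods, with the requirement counts invented for the example:

    # Progress = validated, delivered requirements / total requirements,
    # tracked over time, plus its velocity (change per period).
    total_requirements = 120
    delivered_by_week = [0, 4, 9, 17, 26, 38]  # cumulative validated, delivered

    ratios = [done / total_requirements for done in delivered_by_week]
    velocity = [b - a for a, b in zip(ratios, ratios[1:])]

    for week, (ratio, vel) in enumerate(zip(ratios[1:], velocity), start=1):
        print(f"week {week}: progress = {ratio:.1%}, velocity = {vel:+.1%}/week")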

In other words, this represents a fundamental change in the nature of software engineering: a change from a view based on artificial proxies of value like planned costs or planned duration (or, even worse, lines of code produced, or even function points, although function points come much closer to a proxy for value than what went before) to a view of software engineering whose value base is requirements.

Certainly we know by now (or should know) that a plan is no reliable proxy for value. That we have "earned" our way through the plan (yes, and even "completed" all those tasks---were they even the right tasks?) tells us nothing about how close, or how far, we are from delivering any value to the customer.

Any performance metric that does not incorporate validated, delivered requirements is not (cannot, in fact, be) a meaningful measure of progress, health, or anything else approximating a useful answer to the questions that customers really have.

My hope here is not that we throw out all these other metrics---that is too unrealistic a hope given the entrenchment of the current accounting biases---but, that we finally come to recognize the central role that requirements play in everything we do, and thus reintroduce requirements into the software engineering vocabulary and process in an orderly and disciplined manner.

Requirements are, after all, our only basis for quality and customer value. Everything else we do in software engineering must be derived from this base. Or, it should be. Remember that time-honored maxim: GIGO.


POSTED BY WAYNE SMITH AT 03:17 PM  |  4 COMMENTS

Wednesday, December 14, 2005

If reuse has any potential as a silver bullet for successfully delivering business and technology value, it will almost certainly rely on our ability to clearly and unambiguously characterize the problem space, so that the corresponding subset of pre-built, validated solutions that target this domain (the solution space) is visible. Only then can we even make the choice to reuse.

In other words, we can't reuse something if we don't know it fits our needs, or, worse, don't know that a previous solution even exists.
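
A toy illustration of why that visibility depends on rigorous characterization. The catalog entries and capability sets here are hypothetical:

    # A reuse decision is only possible when the problem is characterized
    # rigorously enough to match against catalogued, validated solutions.
    catalog = {
        "order-mgmt-v2": {"order entry", "invoicing", "shipping"},
        "billing-core":  {"invoicing", "payments"},
    }

    problem = {"order entry", "invoicing"}  # the characterized problem space

    matches = [name for name, caps in catalog.items() if problem <= caps]
    print(matches or "no known fit -> we re-invent")  # ['order-mgmt-v2']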

Consequently, the lack of a regular reuse routine starts with the fact that we have, as an industry, such poor requirements discipline and rigor, when we capture requirements at all. It is hardly surprising, therefore, that we fail to recognize how close our current problem may actually be to one that has been solved before. This lack of awareness, together with our innate invention-is-more-fun bias, quickly justifies our need to rebuild the solution.

Another invention … another solution that will not be saved and not catalogued for reuse. And, of course, another solution whose cost and risk are exorbitantly and needlessly high.

So, it again comes back to requirements. How often do we find that the root cause or critical success factor in some important software engineering issue rests with this thing called requirements (and the process for capturing and integrating them into the necessary work-products)? And yet, this facet of software engineering (one might even say, seminal, or foundational facet) is given embarrassingly short shrift.

No area of software engineering has received less attention than requirements.

Yet it is central to everything we do. It defines quality. It defines success.

(A digression may be helpful. The role of requirements in software engineering is not well understood. The term as used here is not intended to imply any particular representation or manifestation, or even logical sequence, but rather simply a characterization of the problem or opportunity that is sufficiently unambiguous and complete that someone other than the requirements owner can reliably obtain and validate a successful solution, and know it. This characterization is independent of tool sets, database repository designs, and document templates, and may be captured before, during, and after the software solution has been approximated. Two crucial issues remain, however: (1) that requirements are captured in a rigorous way, and (2) that they are assets of, and owned by, the corresponding user or customer.)

On the positive side of the reuse equation, it turns out that there are only a handful of unique business problems, and a correspondingly (astonishingly) small set of software solutions. Most problems that any of us will ever see have been successfully solved before, often many times. (We will perhaps talk more later about this relatively small universe of business and technology frameworks, architectures, classes, templates, etc. that, in fact, spans 99% of all business and technology situations any of us will ever encounter. Recognizing this catalog of templates will be an important piece of the puzzle.) This says, however, that virtually every technology project (soon, in fact, every one) can substantially benefit from reuse.

Further, we observed earlier that one enormous advantage of the reuse idea is that it reduces the number of steps between problem or opportunity recognition and tangible value experienced by the targeted customer or user. We also pointed out that fewer steps are always better.

So, let's start with zero steps.

In other words, the definition of the problem is the solution. Problem = Solution. Done.

And, of course, if we did in fact capture the requirements in a rigorous way, and were also in possession of a special computer whose instruction set included the requirements "language", then in one step we could capture the requirements, which could then be executed on this special computer to directly solve the problem and deliver its intended value.
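
To make the idea tangible, here is a deliberately tiny sketch of requirements written as declarative rules that a small engine executes directly, so that capture and implementation collapse into one step. The domain, the rule format, and the engine are all hypothetical:

    # Each requirement pairs its statement with a directly executable rule.
    requirements = [
        ("orders over 100.00 ship free",
         lambda order: order.update(shipping=0.0) if order["total"] > 100.00 else None),
        ("orders ship to the address on file",
         lambda order: order.setdefault("ship_to", order["customer_address"])),
    ]

    def execute(order: dict) -> dict:
        for statement, rule in requirements:  # the requirement *is* the instruction
            rule(order)
        return order

    print(execute({"total": 150.0, "customer_address": "12 Main St", "shipping": 9.95}))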

Wow.

Talk about silver bullets.

All we need now is to focus our energy and talent on finding a way to rigorously capture requirements and then on defining this special computer that will execute these requirements.

We are closer than you might think. In fact, we may already be there.


POSTED BY WAYNE SMITH AT 06:04 PM  |  0 COMMENTS

Monday, December 12, 2005

Because we aren't sure it will work.

Why aren't we sure it will work?

Because it is an invention.

Why is it an invention?

Because we don't save.

Why don't we save?

Because it is more fun to invent than to reuse.

That's it.

If we saved more of the stuff we build, and were motivated to reuse-assemble-deliver rather than design-build-assemble-deliver, then lots of problems would become much smaller. In particular, the number of steps between problem (or opportunity) recognition and tangible value experienced by (not just delivered to) the targeted customers is dramatically reduced. And, everyone knows that reducing the number of steps required to do something is always better. Fewer opportunities for errors lead to fewer actual errors, which lead to fewer defects inserted into the software (yes, someone is putting them in), which lead to shorter time-to-market (even for your "internal" market of users within your company), etc. All of which accelerates benefit stream recognition---and reduces waste, failure costs, and lost or dissatisfied customers, while simultaneously increasing the return on everyone's investment. Removing unneeded steps helps everybody.

So, why don't we save what we build? Why don't we naturally leverage our investment in time and energy and money, so the next time we just pick it up and reuse it?

The technology and the tools are there to do all this. They have been for years.

It would not be too much of a stretch to say that this bias, this tendency, when faced with a challenge, to view it as a completely new event, represents our single largest impediment to reliable, high-performance, low-cost, rapidly delivered technology solutions that work the first time, every time.

(By the way for those of you keeping track of such things, this is, in fact, the elusive silver bullet you have heard so much about in the past. You know, the "thing" that nothing ever is? At least, this is as close as we are likely ever to get.)

You can see this bias firsthand in the tools that the industry provides the technology community. These tools aren't being forced on developers; the tool companies are responding to demand. Accordingly, the industry is overwhelmingly dominated by tools (and practices) that focus on design-build rather than reuse. This is because reuse is still considered an interesting, but ultimately second-tier, idea.

So, again, why don't we simply put to better and more comprehensive use the few reuse tools we have, or demand more reuse capabilities from our tool suppliers?

The answer may have something to do with a little thing called requirements.


POSTED BY WAYNE SMITH AT 03:53 PM  |  1 COMMENT