Saturday, February 21, 2009

Continuous integration is a powerful iterative process of incremental delivery in which an entire business solution is constructed, validated, and accepted package by package in a series of iterations. Each iteration delivers an operational and functional subset of the total evolving solution, and when the last package has been integrated and delivered in this fashion, the application is considered complete.

But, why are continuous integration and incremental delivery so powerful?

It turns out that unmanaged risk is the source of all project problems. And, since risk is a measure of uncertainty, our focus must be on reducing this uncertainty.

After reuse, nothing reduces uncertainty as simply and as effectively as customer usage.

Everything else is merely a weak approximation.

As a result, the use of high-frequency iterations in the manner described above has the following benefits:
  • Allows progress to be defined in terms of actual customer capabilities that can be touched and experienced, rather than by often abstract technical or project-oriented stages, and thus serves as a practical progress management tool
  • Creates a momentum of success since new tangible value is available (typically) every few weeks
  • Ensures an operational subset of the solution is always available, i.e., the most recent failure-free iteration
  • Allows the proposed benefit stream to be realized as early as is feasible
  • Facilitates the transfer of ownership from the developers to the customer—a critical step for success
  • Closes expectation gaps—which may have nothing to do with the stated requirements but only surface when customers actually experience something
  • Identifies defects very early in the project life cycle which substantially reduces project costs while improving customer satisfaction
  • Increases scheduling flexibility by permitting the delivery sequence to be more easily altered as conditions warrant
  • Isolates problems to the current package being integrated, thus dramatically reducing correction and reintegration costs and schedule delays
  • Promotes a high degree of parallelism in scheduling that can dramatically reduce overall project calendar time
But, how do we define these iterations?

This process starts with the packaging plan which captures the size, scope, dependencies, and complexity of the packaging structure and from which the resulting iterations can be dynamically assembled. (For more, see the discussion of Glue and its information model.)

Packages should be small, functionally independent business-centric feature sets. In general, it is important to define packages and the packaging plan so as to maximize:
  • Tangible business value to the enterprise
  • Momentum of success, especially early in the project
  • Early problem detection and especially the validation of architecturally significant requirements and solution components
Finally, it is important to organize and sequence the packages so that they maximize the total return across all benefit streams for the enterprise.

One observation on the size and scope of a package: Size is important because a project manager will almost always prefer a greater number of smaller packages to only a few large packages. This is for three reasons:
  • Allows packages to move more rapidly through the implementation process (typically, every two to four weeks), which creates a momentum of delivery and success
  • Presents a smaller conceptual “footprint” to the customer so that its content is easier to understand, embrace, and accept
  • Increases delivery sequence flexibility by permitting more granularity for rearranging packages to better accommodate changes in project and user priorities, unplanned scheduling contingencies, delays in other packages, etc.
While there are many benefits to this incremental approach to solution delivery, one simple and very powerful benefit is that by chunking the solution delivery one can dramatically reduce sizing (or scoping) risk. This risk is particularly prevalent in situations where there is a high degree of solution complexity, inexperienced teams, unknown or unproven technologies, etc.

The key principle is that by increasing the customer delivery points we increase the effective resolution of the solution delivery process, which quickly and routinely flushes out illogical, missing, or incorrect requirements and expectation gaps, so that rework, delay, and overruns are substantially reduced, while significantly increasing the ultimate quality of the total delivered solution.

Finally, this approach of continuous integration and incremental delivery implies two distinct project management skills:
  1. How to manage each cycle, i.e., the requirements specification, solution construction, and validation of the feature set or functional chunk associated with that cycle
  2. How to manage across cycles, i.e., the synchronization, coordination, and integration of each new chunk into an evolving and conceptually whole solution

Unmanaged risk is the source of all software engineering problems. Risk is a measure of uncertainty in the project’s outcome. The greater this uncertainty, the greater is the risk.

Consequently, an IT organization’s quality efforts must be focused on best practices that reduce this uncertainty.

If we look at history we can make a few obvious observations. First, we can say that this project uncertainty can vary significantly over the life cycle of the effort. In general, it is at its highest when the project begins, since this is where we least understand the project and its dynamics.

As the project advances our degree of uncertainty about the project’s outcomes (e.g., delivery date, cost, quality, adoption rate within the enterprise, ROI, etc.) generally decreases—although in fits and starts and with setbacks and leaps forward—until we reach the end of the project, when our true understanding is at its highest, and our uncertainty is close to zero.

This leads us to instantly draw several conclusions about how to reduce uncertainty and risk. For example, if risk is at its minimum at the end of any project, and if we happen to come across another customer with exactly the same needs as that first customer, then it should follow that the project to deliver the first solution again, this time to the new customer, can be carried out with very little risk.

Thus, our first axiom: Reuse reduces uncertainty.

Wherever possible, solutions should be delivered by assembling pre-built and pre-validated components, rather than by constructing them. The fastest, cheapest way to build anything is to not build it at all. This is the primary learning that arose out of the many quality efforts during the last fifty years in engineering and manufacturing. Reuse is so powerful that it absolutely demands emphasis. Very few actions that an organization can take will have this level of ROI.

Reuse derives its power because after we build that first solution we have two critical pieces of knowledge that we didn’t have before: (1) A customer together with the requirements, needs, constraints, and expectations of a successful solution, and (2) an actual instance of such a successful solution.

So, whereas at the project’s beginning, we were uncertain about these two points, we now, at the end, have much greater clarity, understanding, and certainty.

This leads us to our second axiom: Customer usage reduces uncertainty.

This is the primary learning from reuse. Everything else (better requirements, inspections, reviews, testing, more effective tools, etc.) is merely a weak approximation to simply getting the solution in front of a customer and then letting them experience it in some organized way.

Remember, the quality principles require that any effective process (in this case, the SE process) must listen to the voice of its customer and then match that voice with the voice of the process (the solution). Further, we know from experience that in SE projects, the voice of the customer cannot be completely captured in the documented requirements, but only fully surfaces and reveals itself when customers actually experience something. These missing requirements are often called expectations, and must be identified, captured, and met for the solution to be considered successful. Moreover, the best way to flush out these missing or misunderstood requirements and expectations is to give customers something to touch and use and interact with.

Consequently, if after exploiting reuse as best we can, we find that we must still build all or portions of the solution, then we must exploit the second axiom, customer usage.

We do this by finding ways to structure and package the total solution into a sequence of component sub-solutions, and then incrementally deliver these sub-solutions to the customer throughout the project’s lifecycle, not just once at the end. Each one of these incremental sub-solutions is simply an opportunity to directly engage the customer with a chunk of the evolving total solution and to get feedback on how well that chunk meets the customer’s needs and expectations. But, because these chunks (customer interaction opportunities) are spread out over the duration of the project, we quickly learn and can reduce uncertainty rapidly.

So, whereas the first axiom, reuse, is based on delivering the same solution to a different customer, this axiom is based on delivering different solutions to the same customer. The first reduces uncertainty by leveraging the certainty in the solution, the second by leveraging the certainty in the customer.

An important conclusion to all this is that the higher the delivery frequency (the more chunks the business solution is composed of), the greater is the reduction in uncertainty. However, since each delivery incurs a necessary cost in energy utilization (labor and resources are consumed to deliver each chunk to the customer), the question that matters is: “How many delivery cycles are necessary to reduce risk commensurate with the proposed energy expenditure?” In other words, how do we balance the dramatic reduction in risk associated with each subsequent customer-delivered chunk with the incremental additional expenditure of resources?
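This trade-off can be sketched with a deliberately simple model. Everything here is a hypothetical illustration, not a formula from the text: we assume each delivery cycle incurs a fixed overhead, and that each customer delivery cuts the remaining risk exposure by a constant factor.

```python
# Hypothetical model of the delivery-frequency trade-off: more cycles cost
# more overhead, but each delivered chunk reduces residual risk exposure.

def total_exposure(n_cycles, overhead_per_cycle, initial_risk_cost, decay=0.5):
    """Total cost: delivery overhead plus the risk exposure still remaining
    after n_cycles customer deliveries (assumed to decay geometrically)."""
    delivery_cost = n_cycles * overhead_per_cycle
    residual_risk = initial_risk_cost * (decay ** n_cycles)
    return delivery_cost + residual_risk

# With $20k overhead per delivery and $500k initial risk exposure,
# find the cycle count that minimizes overhead plus residual risk:
best = min(range(1, 13), key=lambda n: total_exposure(n, 20_000, 500_000))
print(best)  # → 4
```

The point of the sketch is only that the optimum is an interior balance: too few cycles leave risk unretired, too many burn overhead without commensurate risk reduction.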

This is, in many ways, the key trade-off for IT leaders and their project managers to more fully understand.


Monday, January 05, 2009

Earned Value Management has been around for years as a tool for improving insight into the status of technology projects, especially for DoD, NASA, DoE, DoT, and other large-scale federal efforts.

Unfortunately, its entire premise, namely that the work done has “value” equal to its budget, is (it would seem quite obviously) fundamentally and completely flawed.

There is no useful correlation between cost and value.

Further, for these terms to have any sensible meaning, they must be interpreted in the eyes of the customer. This is the entity paying the bills (incurring the cost), and who only receives value from their investment when the resulting solution begins to deliver tangible business value to that customer—increased revenue, market share, productivity, quality, responsiveness, or lower costs, delays, and waste. The fact that x% of the budget has been consumed is a useful cost accounting metric (burn rates, variances, etc.), but has nothing to do with any value received by the customer. In fact, often no real tangible business value can legitimately be booked until huge chunks (or even all) of the budget have been spent.

This fatal disconnect from reality has been highlighted in an earlier posting, “Earned Value” has nothing to do with value, nor with “earning” anything except more cost, and a related posting, Value, not Cost, Accounting: The Only True Window of Progress.

When you examine EVM you see that its principal judgment is the %-complete judgment. This is an often arbitrary and certainly highly subjective assessment of how much “real work” has been done to-date when compared with how much was “planned” to be completed by now. In the topsy-turvy EVM world where cost=value, this is used as a proxy for progress.

For those of you who feel that EVM is a helpful tool, our solution is that you simply define %-complete to be the percentage of the requirements that have been validated, accepted by the customer, and delivered to production.

This is simple. Everything else stays the same. All the existing EVM formulae remain unchanged. The only change is that the proxy for value and progress is not how much of the budget has been spent, but how much of the requirements have been delivered.
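As a sketch, with hypothetical figures and our own function and variable names, here is how the standard EVM quantities look once %-complete is defined as the fraction of requirements validated, accepted, and delivered:

```python
# Standard EVM formulae, with %-complete redefined as the fraction of
# requirements validated, accepted by the customer, and delivered to production.
# All numbers below are illustrative only.

def evm_metrics(delivered_reqs, total_reqs, bac, actual_cost, planned_value):
    """bac           -- budget at completion
       actual_cost   -- cost incurred to date (AC)
       planned_value -- budgeted cost of work scheduled to date (PV)"""
    pct_complete = delivered_reqs / total_reqs
    earned_value = pct_complete * bac        # EV = %-complete x BAC
    cpi = earned_value / actual_cost         # cost performance index
    spi = earned_value / planned_value       # schedule performance index
    return {"%-complete": pct_complete, "EV": earned_value, "CPI": cpi, "SPI": spi}

# Example: 30 of 120 requirements delivered on a $1,000,000 project,
# with $400,000 spent and $350,000 planned to date.
m = evm_metrics(30, 120, 1_000_000, 400_000, 350_000)
print(m)  # %-complete = 0.25, EV = 250,000, CPI = 0.625, SPI ≈ 0.714
```

Note that nothing in the formulae changes; only the source of the %-complete judgment does.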

Of course, this does not address the fundamental flaw. But, this simple change not only radically simplifies a key EVM black box (the %-complete judgment), but also begins to subtly shift the project planning and management to think more in terms of exactly how do we deliver requirements to our customers more quickly, more frequently, and much earlier in the life-cycle.

In other words, how can we actually earn real value?


Thursday, September 11, 2008

One of the things that you can say about technology projects, of all kinds, is that the industry has overwhelmed us with a plethora of systems, technologies, and spreadsheets that inundate us with massive amounts of data about our projects. Unfortunately, even with all this data we seem no closer to answering very simple questions about these projects and their true progress.

True progress must be a measure that gives the customer (i.e., the buyer and user of the solution being delivered) a meaningful picture of when the benefits offered by this investment will begin to be realized. To the customer, the project is simply an investment vehicle. The result of that investment is a solution that when implemented will generate benefits to the organization. These benefits can be anything that the customer believes will help optimize the organization’s performance and assist it in more effectively achieving its goals. These benefits range from lower costs, less waste, faster responsiveness, simplicity of operation, increased market share, reduced outages, etc.

The point is that from the customer’s point of view, the only measure of progress that matters is ROI. In other words, when will our investment start paying off? When will we see the business benefits of the solution we are investing in?

This is what we mean by value.

The vast majority of project management systems, books, and classes never dwell on this shortcoming, but instead impress on us the importance of the vast analytical array of data that they can collect. Instead of getting insight into what customers really want to know, they shower us with “%-completes”, “actual vs. plan”, and other cost accounting analyses. These are easy and look impressive and have a lot of analytical sizzle, but are essentially meaningless when it comes to understanding, much less predicting, when the customer will begin to see real business value.

The reason for this is simple. Cost accounting is not value accounting.

Further, while we have for decades tried to forge a useful link between cost and value, we have nevertheless been left with very unsatisfying results. In fact, it is not too difficult to conclude that our knowledge of costs and burn rates, however deep, brings us no closer to understanding when the system will be delivered, that is, when we will actually realize the promised business value. Indeed, such knowledge is often highly misleading: one has only to look at the typical phenomenon of a task taking three times as long to finish the last 20% as it did the first 80%.

Fortunately, the answer is really quite simple. The elusive value metric we are seeking is right in front of us. It is requirements. More accurately, validated and delivered requirements.

See Golden Triangle diagram.

To see how this works, the diagram illustrates what one could call the golden triangle of value. This golden triangle is true of all technology projects of all types and sizes, and is completely independent of all methodologies, software engineering approaches, or project management styles.

What the golden triangle says is that the way to manage value delivery to your customer is to aggressively manage these three artifacts and their connections.

We start with the premise that customers are seeking not just solutions, but quality solutions. That is, solutions that perform as they expect, all the time, every time. We know from the quality industry that quality is not a subjective sense of “relative goodness” or some arbitrary opinion, but is rather simply meeting the requirements that the customer has laid down for that solution. The more effectively the solution meets those requirements, the higher quality the solution.


So, as we see in the diagram, if we can say with precision that the requirements fully define the solution we are seeking, and that we have test cases that cover those requirements, then, when we execute all those test cases against the evolving solution without generating any failures, we can say with confidence that the solution meets the requirements, that is, that we have a quality solution.
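The golden-triangle reasoning can be sketched as a simple check. The data structures here are hypothetical simplifications of our own: if every requirement is covered by at least one test case, and every test case passes against the evolving solution, we treat the solution as meeting its requirements.

```python
# Sketch of the golden triangle: requirements <-> test cases <-> solution.

def is_quality_solution(requirements, coverage, results):
    """requirements: set of requirement ids
       coverage:     dict mapping test id -> set of requirement ids it validates
       results:      dict mapping test id -> True if the test passed"""
    covered = set().union(*coverage.values()) if coverage else set()
    all_covered = requirements <= covered          # every requirement has a test
    all_passed = all(results.get(t, False) for t in coverage)  # no failures
    return all_covered and all_passed

reqs = {"R1", "R2"}
tests = {"T1": {"R1"}, "T2": {"R2"}}
print(is_quality_solution(reqs, tests, {"T1": True, "T2": True}))   # True
print(is_quality_solution(reqs, tests, {"T1": True, "T2": False}))  # False
```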

Accordingly, the value accounting metrics become:

  • Productivity, the number of validated and delivered requirements per labor hour (or labor dollar)
  • Cycle Time, the number of calendar days necessary to validate and deliver a requirement
  • Earned Value, the ratio of the number of requirements that have been validated and delivered to the total number of requirements in the solution
Naturally, there are more value metrics than these three, but they provide the foundation.
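The three metrics above can be computed directly from project records. This is a minimal sketch with hypothetical names and figures:

```python
# Value accounting metrics, computed from validated-and-delivered requirements.

def value_metrics(delivered_reqs, total_reqs, labor_hours, calendar_days):
    productivity = delivered_reqs / labor_hours   # requirements per labor hour
    cycle_time = calendar_days / delivered_reqs   # calendar days per requirement
    earned_value = delivered_reqs / total_reqs    # fraction of requirements delivered
    return productivity, cycle_time, earned_value

# 40 of 160 requirements validated and delivered, after 2,000 labor hours
# and 100 calendar days:
p, c, ev = value_metrics(40, 160, 2_000, 100)
print(p, c, ev)  # 0.02 requirements/hour, 2.5 days/requirement, 0.25 earned value
```

Tracked over time, these figures describe how quickly and efficiently validated requirements are reaching the customer, rather than how quickly the budget is being consumed.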

The key take-away is that you should augment your cost accounting with value accounting if you truly want a window into how your project is doing and when it will be done successfully. And, the key to value accounting lies in understanding requirements and how effectively they are being validated and delivered to your customers.


Thursday, August 09, 2007

The sole purpose of measurement is learning. It is one of the highest forms of inquiry. Measurement offers reliable and often unexpected insights into the true nature of things. Further, this knowledge (true knowledge, if you will) is the key to improvement. Trying to improve something (a process, a product, yourself, …) when you lack true knowledge only makes things worse, not better. It is meddling, not managing.

Measurement is a process, of course, and as such requires all the necessary constituents of any process, namely method, tools, talent, etc.  In addition, for the measurement process to yield reliable insights, rather than distortions and “noise”, this process must itself be stable.

A common obstacle to effective measurement is the complexity of the financial and statistical concepts involved, and the difficulty of understanding how and when to apply these ideas for optimum return to the enterprise. This ability to coalesce and distill the multitude of ideas, formulae, and tools into a simple pragmatic approach for inquiring into the dynamics and operational behavior of a process or operation is a hallmark of effective management and governance.

We have implemented many such measurement and governance approaches for our clients, in a variety of settings and contexts. We have found that Doug Hubbard’s new book, How To Measure Anything, is an important step in providing a window into many of these complexities and offers a variety of straightforward approaches to measurement that we feel preserves its fundamental role as a vital instrument of inquiry and knowledge.

Finally, when we refer to measurement as a form of inquiry, we mean a very special type of inquiry, namely inquiry into variation and its root causes. It turns out that isolating the underlying drivers of variation is the first step to any sustainable, predictable improvement program. Without an effective measurement process, management lacks the knowledge necessary to successfully guide the enterprise from where it is now, to where it desires to be.


Wednesday, December 06, 2006

Every process has a customer. The customer consumes or utilizes the outputs delivered by that process.

The voice of the customer is their expectations regarding what these outputs must look like and what they must do. The voice of the process is its output.

The goal of the process owner is to align the voice of the process with the voice of the customer. This alignment is what we mean by process management. One of its primary tools is continuous improvement.

Consequently, in order to “write better requirements”, we must first examine the process that produces these requirements, and then seek to improve that process.

Let’s start at the top. The solution delivery process delivers a business solution to its customer (the so-called end customer). We can think of this process as having two sub-processes: Requirements engineering and solution construction. (Note that these two sub-processes can be executed once for any given solution, or if more iterative approaches are being used, they can be executed multiple times, where each cycle carves out functional subsets of the evolving total solution. In either case, the sub-processes remain the same.)

The requirements engineering sub-process generates the requirements specification, in other words, the requirements in our discussion. The solution construction sub-process, in turn, translates that requirements specification into an operational solution that delivers the intended value to the end customer. For a project to be considered successful this delivered value (the output of solution construction) must match the needs and expectations of the end customer as represented by the requirements specification (the output of requirements engineering).

Consequently, to write better requirements we must first seek to improve the requirements engineering process. This means aligning the voice of its process, the requirements specification, with the voice of its customer, in this case the needs and expectations of its downstream process, solution construction.

The needs and expectations of the solution construction process are indeed straightforward: A rigorous, unambiguous specification of the needs and expectations of the end customer suitable for design, construction, and implementation. Consequently, the voice of the requirements engineering customer, solution construction, consists of two required elements:

  • A clear specification of exactly what the end customer is expecting
  • A specification suitable for implementation

These two required elements imply that an effective requirements engineering process must not only capture what the end customer wants, but must capture it in a manner so the downstream process can implement a solution that delivers the intended value. This is why the quality of the requirements specification is so vital to project success. Since the solution (for it to be a success) must actually deliver tangible value in the real world, then to the extent that the requirements specification has gaps or inconsistencies, they will be “corrected” by downstream workers (designers, developers) in order to “complete” the solution so that it can be implemented. In other words, the choice is not whether one will add the necessary details and rigor needed for implementation (since nothing can be implemented without it), the only choice is who will do it and when.

Decades of research and analysis have demonstrated the overwhelming desirability of the sequence: First clearly and unambiguously understand the problem, and then, and only then, seek to solve it. In other words, requirements first, then solution.

Regardless of tools, formats, and practices, the objective of any requirements engineering process is to understand (as best we can) exactly what is needed and then to be able to capture that understanding in some tangible mode (write it down somehow) so that it can be shared and understood by others so that semantic integrity is preserved. In other words, the voice of the end customer (their needs and expectations as they understand them) must match the voice of the process (the requirements specification).

We should hasten to add that a primary way anyone understands anything is by means of successive approximations: An initial level of understanding is achieved, which is then “tested” in some way. The results of that testing are then incorporated into a next level of enhanced understanding. The cycle continues until either we run out of time or interest (the typical case, but certainly not desirable), or until we have reached a targeted level of requirements quality that characterizes the degree of understanding that we believe is necessary for success (the more desirable alternative).

In this represent-test-refine-retest cycle there are many methods for representing and testing these successive approximations to understanding.

Representation methods include English text, use cases, entity-relationship and related diagrams, UML-style object models, rigorous specification languages, and software iterations and prototypes. Testing methods typically accompany each representation technique, but dialogue-based review sessions are by far the most prevalent: we sit down in front of the subject matter expert or knowledge worker, ask them questions, and let them elaborate and clarify our understanding. Accordingly, since natural language (English in our case) is the most prevalent form of problem representation, as well as the basis for the corresponding review (“testing”) sessions, let’s examine more carefully exactly what we mean by writing better requirements in English.

A quality requirement exhibits the following four attributes:

  • Completeness. The requirement must contain all the information necessary for full understanding by the downstream customer. All assumptions must be made explicit.
  • Relevance. The requirement must exclude any information not necessary for full understanding by the downstream customer. Impertinent or extraneous material must be removed.
  • Precision. The requirement must have only one interpretation by the downstream customer.
  • Context. The requirement must explicitly identify all intended variations in usage and meaning. All pertinent meanings must be clarified. For example, the term order may have several contexts: Customer order, supplier order, purchase order, back order, etc. To the extent that these represent legitimate contexts and have different meanings, then they must be explicitly distinguished in the requirements materials.

Note that there are two sources of information that are necessary for full and unambiguous understanding: The requirements specification itself and the memories of the downstream customer. One need not document and explain everything. One only needs to capture those terms and phrases that are not part of the collective knowledge of the downstream customer. This is especially true of the many common words that are intended to retain their common and plain definitions. This determination, namely, which terms have common meanings that are also the intended meanings, and which terms require explicit clarification and elaboration for the downstream customer is a crucial role for the requirements analyst.

It should also be pointed out that quality requirements are also testable requirements. This testability criterion is another perspective on what we mean by a quality requirement. Testability is a characteristic of a requirement in which someone other than the author (for example, a test analyst, another type of downstream customer) can engineer a set of test cases and expected results that will validate that requirement. That is, determine whether it has been successfully implemented by the target solution. Testability is an exit criterion for the entire requirements engineering process.

We would like to acknowledge our colleague David Gelperin for many insights related to this posting.


Monday, March 20, 2006

To improve customer perception of our products and services we often ask the customer directly – what do you want?  

There are many common sources of such feedback: focus groups, surveys, call-backs, etc.

Most often these actions are very biased and therefore misplaced.

  • Focus groups with ‘pre-selected’ customers give responses biased in favor of the persons doing the selection.
  • Surveys get replies from people who ‘like’ surveys – i.e., more bias.
  • Call-backs typically reach the complainers – did I hear bias?
  • Etc.
So how do you get unbiased feedback? You can’t. By definition human feedback is biased.

Customer feedback should be reserved for addressing specific narrow issues: complaints, failures, accidents, etc.

If you want broadband quality improvement you go to the experts – Deming, Juran, Crosby, Feigenbaum, Ishikawa, Garvin, Taguchi, etc. All mention customers in passing, but get down to business with “PRODUCT”: Zero Defects, Six Sigma, quality circles, TQM, testable/tested requirements, ‘ilities, SPC/mature processes, focused metrics, trained workers, and continuous-continuous improvement.

Biased customer feedback leads to gimmicks, fads, crazes, etc., which move markets in the short-term. Good, value-priced products survive and win long-term.

To summarize, we are faced with the old dilemma: opinion vs. measurement. Customers provide opinion. Product needs can be identified and measured only through rigorous/objective analysis.


Thursday, December 22, 2005


Value-Driven, Risk-Adjusted, Solution Delivery

Over the years there have been many discussions as well as actual offerings that purport to be methodologies or processes for delivering technology projects. It has long been our view that all of these seemingly varied techniques are simply special cases of a single common general process. This super class of software engineering process has never been afforded much discourse, in spite of the continuing ineffectiveness of all the current variant approaches. Yet, it is only by understanding the dynamics and performance of this natural super class that we can begin to learn how to improve our ability to deliver value to our customers.


That is the key to understanding this software engineering super class. In other words, the primary directive (if you will) must be to deliver value to customers. Otherwise, why are we doing any of this?

Consequently, our label for this super class of software engineering process is value-driven, risk-adjusted, solution delivery.

Let's look at this label more closely. We've mentioned already the importance of value. This is the idea that the customer must be the sole arbiter of quality and fitness, and that this externally focused (i.e., external to the project team) business value perspective must drive all project decision-making. In other words, when we are deciding delivery sequences, priorities of requirements, etc., we should be guided by what delivers the most value to our customer.

The second part of the label, risk-adjusted, refers to the importance of understanding that unmanaged risk is the source of all project problems. Technology projects are, after all, business investments. And, as with any business investment, its return (that is, its value in the form of benefits like increased revenues, reduced operating costs, higher utilization of resources, etc.) must be sufficient to compensate the investor (the customer, sponsor, or owner) for the risks they are taking. Accordingly, an effective software engineering process must assist project management and the project team in continuously identifying risk and then provide mechanisms for aggressively removing or mitigating these solution acquisition risks. If this isn't done properly, then the economic value of that investment can be substantially impaired.

Finally, we come to the phrase, solution delivery. This phrase emphasizes that the primary goal of the software engineering process is to actually deliver tangible, usable value in the form of complete business solutions to the customer, not simply to build or install software. It is the idea that delivered solutions operating in the customer's real world are the key objective. That is, enabling and enhancing a company's ability to grow and compete in its marketplace through the acquisition (whether built or bought) of complete, fully integrated, and organizationally unified business capabilities.

So, in our view, value-driven, risk-adjusted solution delivery is the mission statement for all meaningful software engineering processes.

When viewed in this context, it becomes clear that a critical success factor for such a process is the ability to rapidly and reliably make risk unambiguously visible to the project team, and then to provide the means to mitigate that risk so that it can be managed. By managing risk, we mean that:

  • We know what the exposure is (what creates the risk in the first place)
  • We know its severity, should it occur (i.e., how it reduces, or delays, the benefit stream, increases the costs, etc.)
  • We know the likelihood of it occurring
  • We know the remedial actions that can be taken that will preserve the investment's return potential as compared with alternate uses of those resources (money, talent, technology)
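The four elements above can be sketched as a minimal risk-register entry. The class name, the expected-value weighting, and the sample entries are illustrative assumptions, not part of any formal method:

```python
from dataclasses import dataclass

# Hypothetical risk-register entry covering the four elements above:
# exposure, severity, likelihood, and (via ranking) remedial priority.
@dataclass
class RiskEntry:
    exposure: str      # what creates the risk in the first place
    severity: float    # cost of the outcome should it occur
    likelihood: float  # probability of occurrence, 0.0 to 1.0

    def expected_impact(self) -> float:
        # Standard expected-value weighting: severity discounted by likelihood.
        return self.severity * self.likelihood

# Illustrative entries only; rank them so the largest exposures surface first.
register = [
    RiskEntry("key interface spec still ambiguous", 50_000, 0.4),
    RiskEntry("vendor component delivery slips",    80_000, 0.1),
    RiskEntry("performance target unvalidated",     30_000, 0.6),
]
ranked = sorted(register, key=RiskEntry.expected_impact, reverse=True)
for r in ranked:
    print(f"{r.exposure}: expected impact {r.expected_impact():,.0f}")
```

Ranking by expected impact is what lets the team decide which remedial actions best preserve the investment's return.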

Risk is essentially a measure of the variability of the project's outcomes (principally seen or felt in its benefits, costs, quality, etc.). The greater this variability, the greater the overall project risk. Consequently, risk is a measure of uncertainty. The less certain we are regarding a project's outcome (or any single dimension of its outcome, say, its total cost), the greater the likelihood that particular outcome will not meet its target. Thus, the greater the risk.
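A tiny worked example of this definition, using made-up cost scenarios: two projects with the same expected cost, where the wider spread of estimates marks the riskier project:

```python
import statistics

# Two hypothetical projects with identical expected cost but different
# spread across their cost scenarios; the wider spread is the riskier one.
project_a = [95, 100, 105]   # tightly clustered estimates
project_b = [60, 100, 140]   # same mean, far more variability

assert statistics.mean(project_a) == statistics.mean(project_b) == 100
print(statistics.stdev(project_a))  # 5.0  -> low uncertainty, low risk
print(statistics.stdev(project_b))  # 40.0 -> high uncertainty, high risk
```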

So, one important question that arises is how do we reduce uncertainty?

It turns out that this is a telling question. Because, when we look carefully at uncertainty, we see that uncertainty in the project's outcomes is zero (or pretty dog-gone small, anyway) at the end of that project. This is because we now know (since we are done) exactly how much it will (i.e., did) cost, whether the benefits were actually realized at the targeted levels and time frames, and so on. Accordingly, the day before the last day the risk is only slightly higher; a week before, higher still; and so on, until risk reaches its maximum on day one of the project, where uncertainty is typically highest. (This is not to imply that risk is a linear function of elapsed project time; it is, in fact, quite non-linear. Rather, it is to say that risk can only be reduced by capturing the learning that comes from executing the project. If that learning is not captured, then uncertainty may not be reduced, and may actually increase, as the project advances.)

Just to push this argument a bit farther, if we accept that uncertainty (and thus risk) are at their lowest when the project is completed, then we could say that if we find another customer with exactly the same needs, then we could simply deliver that same solution to them as well and be confident that it will be an essentially zero risk undertaking. And if we find a third such customer, then we can deliver the same solution to them as well, and so on. This, of course, is nothing more than reuse.

Consequently, a key learning is that reuse reduces uncertainty, and thus reduces project risk. This seems very intuitive. Yet, as we have pointed out before, reuse is still a woefully underutilized practice. Stated differently, an effective software engineering process must make it easy to assemble solutions rather than build them.

But why should this be so?

One answer is that reuse is failure-free. (Remember that we have assumed we are referring to the reuse of a solution for a customer with exactly the same needs as the initial customer. While recognizing that this condition, both customers having exactly the same needs, is extremely unlikely, we can agree that the more similar the targeted customers' needs are, the more certain one can be that the reused solution will be as failure-free as possible.)

The fact that reuse for other customers exploits the investment that the team had already made in removing defects from the solution for the initial customer, yields another learning: Bugs, by their very nature, increase uncertainty and thus increase project risk.

Consequently, an effective software engineering discipline must (a) make it difficult to insert defects into the work-products in the first instance, and (b) failing that, make it easy to locate them so they can be removed before they are shipped downstream (i.e., to the next stage, process, or customer), where they have exponentially larger impacts.

Well, now we are getting somewhere. So, to reduce risk we must reduce defects. OK, but what is a defect?

To properly understand the answer to this question we must first examine the nature of the software engineering process itself. That is, what is its essence? All software engineering processes (regardless of their vocabulary, tool sets, etc.) can be essentially viewed as a sequence of translation steps, where each translation step attempts to elaborate the problem and solution domains to lower levels of refinement (starting with some usually informal narrative description of the problem or opportunity) until a refinement level is reached that can be directly implemented (typically a very rigorous unambiguous software specification or source code). At this point, and only at this point, can the results of all this translation actually deliver any tangible value to the customer.

The precise number of these steps, their exact method of translation, the nature of these refinement levels, and the format and structure of each step's output is defined by the particular software engineering process being used. But, all such processes, nevertheless, share this step-wise translation and refinement characteristic.

Further, each of these translation steps comprises two distinct activities: representation and validation.

The representation activity involves the elaboration of the work products from their current level of refinement to the next lower level of refinement. For most software engineering processes, a typical elaboration sequence is analysis, design, specification, coding, etc.

The validation activity involves ensuring that each such translation step is complete, error-free, and relevant. That is, ensuring that the meaning of a representation at any level is exactly semantically equivalent to the meaning of the representation at the prior level. Typical validation activities include walk-throughs, inspections, testing, operational use, etc.

This step-wise translation and refinement technique is also referred to as the levels of abstraction technique since its primary engine is the refinement of the "problem" statement, starting with the level closest to the external customer and the operational world, and proceeding level by conceptual level until an abstraction level is reached that is sufficiently complete, unambiguous, and precise that it can be implemented on some processor.
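The translation-and-validation model described above can be illustrated with a toy two-step pipeline. The step names, the string-splitting "translation", and the equality check standing in for semantic validation are all simplifying assumptions:

```python
# Toy two-step refinement: narrative -> requirement statements -> identified
# spec. The validation activity checks that the lower level carries exactly
# the prior level's meaning (here: nothing dropped, nothing invented).

def narrative_to_requirements(text):
    # Translation step 1: split an informal narrative into statements.
    return [s.strip() for s in text.split(".") if s.strip()]

def requirements_to_spec(reqs):
    # Translation step 2: refine each statement with a stable identifier.
    return {"REQ-%d" % (i + 1): r for i, r in enumerate(reqs)}

def validate(before, after):
    # Validation activity: semantic equivalence, approximated by set equality.
    return sorted(before) == sorted(after.values())

narrative = "Orders must be confirmed by email. Refunds post within two days."
reqs = narrative_to_requirements(narrative)
spec = requirements_to_spec(reqs)
assert validate(reqs, spec)  # the translation preserved the content
print(spec)
```

A real process has more levels and far richer validation (walk-throughs, inspections, testing); the structure, translate then validate at each level, is the point.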

Now, finally, we can answer the question: what is a defect, anyway?

A defect is simply the result of an error in one of these translation steps. (Where error means that the translation did not preserve the semantic integrity of the prior step.)

At this point it might be useful to point out where we believe the industry has diverged from an optimal path over the last few decades of research. Given this model, one can argue that there are two avenues of research that could prove helpful in improving the effectiveness of the software engineering process: (1) improve the various translation techniques at each step, or (2) use fewer translation steps. While the industry has overwhelmingly focused on more and better translation techniques and tools (that is, research avenue 1), very little has been done to find better methods that would actually require fewer steps (and thus fewer opportunities for errors), with, of course, the goal of reducing the steps to only one: Problem = Solution. This, we feel, is the avenue that offers the greatest potential for our industry.
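The arithmetic behind avenue (2) can be made concrete. If we assume, purely for illustration, that each translation step independently preserves meaning with some fixed probability, then the chain's end-to-end fidelity decays exponentially with the number of steps:

```python
# If each translation step independently preserves meaning with probability
# p (an assumed, illustrative number), a chain of n steps succeeds
# end-to-end with probability p ** n -- so removing steps raises overall
# fidelity faster than polishing any single step.

def chain_reliability(p, n_steps):
    return p ** n_steps

print(round(chain_reliability(0.95, 6), 4))  # 0.7351: six 95%-reliable steps
print(round(chain_reliability(0.95, 2), 4))  # 0.9025: two steps
print(chain_reliability(0.95, 1))            # 0.95: Problem = Solution
```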

Recapping the story so far,

  • Risk is a measure of uncertainty (reducing uncertainty reduces risk)
  • Reuse increases certainty (assemble rather than build)
  • Bugs decrease certainty (never ship defects downstream)
  • Fewer translation steps = fewer opportunities for error

These all appear to be important software engineering principles.

But, there is one more principle, perhaps the most important principle, in describing what we mean by value-driven, risk-adjusted solution delivery:

Customer usage of the solution increases certainty.

Everything else (meetings, prototypes, inspections, reviews, testing) is but a weak approximation to actual customer usage. We saw this principle in action earlier when we commented that uncertainty (and thus risk) is at its lowest when we have actually delivered the solution to the customer and they are using it to realize the benefits of its operation.

Now, of course, this occurs at the end of the project. But, why should it only happen then? Why not deliver all the way through the project's duration, starting at the very beginning? Certainly, the earlier we do this the better, right? And when we say deliver, we mean deliver customer value, not designs, or specs, or the myriad other work products that we have contrived to gain customer "buy-in" or approval. We mean deliver actual operational functionality. We mean deliver tangible solutions that they can start using right away.

Think how the software engineering world would change if we established an iron-clad rule that every project must deliver tangible operational value to the customer every 6 weeks? Every 2 weeks? Every day?

(This is very practical by the way for both very large and very small technology efforts. All that is required is for the project team to understand that they must begin on day one to think in terms of customer value and how to break that value up into chunks that can be incrementally delivered and continuously integrated into the evolving total solution.)

We have found that the higher the (value) delivery frequency, the greater the reduction in uncertainty and risk. As a result, a key question our project managers ask is "How many delivery cycles (iterations) are necessary to maximize value and reduce risk commensurate with the corresponding increased energy expenditure due to each iteration?" That balance becomes the vital optimization issue for the project team.
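One way to picture that optimization question is a back-of-the-envelope model in which every number is an illustrative assumption: each delivery cycle costs a fixed overhead, while the residual risk exposure decays with each cycle because customer usage keeps reducing uncertainty:

```python
# Every constant here is an illustrative assumption.
ITERATION_OVERHEAD = 4.0   # fixed cost of integrating/delivering one cycle
INITIAL_EXPOSURE = 100.0   # risk-weighted cost with nothing yet validated
DECAY_PER_CYCLE = 0.5      # fraction of exposure retired by each delivery

def total_cost(n_cycles):
    residual_risk = INITIAL_EXPOSURE * (1 - DECAY_PER_CYCLE) ** n_cycles
    return ITERATION_OVERHEAD * n_cycles + residual_risk

# More cycles keep cutting risk, but each adds overhead; the optimum sits
# where the two effects balance.
best = min(range(1, 13), key=total_cost)
print(best, total_cost(best))  # with these numbers: 4 cycles, total cost 22.25
```

With these made-up numbers the curve bottoms out at four cycles; the real exercise for a project team is estimating its own overhead and decay rates.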

Finally, frequent customer delivery and usage also helps transfer ownership of the solution to the customer very early (which has been found to be a critical success factor), as well as helping to close expectation gaps: the gap between what a customer thinks or believes a solution will do for them, based on their current understanding of the discussions and documents they have seen together with their own biases and paradigms, and what the solution actually delivers in reality, which can only be ascertained by using it.


Wednesday, December 14, 2005

If reuse has any potential as a silver bullet for successfully delivering business and technology value, it will almost certainly rely on our ability to clearly and unambiguously characterize the problem space so that the corresponding subset of pre-built validated solutions that target this domain (the solution space) are visible. Only then can we even make the choice to reuse.

In other words, we can't reuse something if we don't know it is a fit for our needs, or worse, that a previous solution even exists.
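A minimal sketch of that visibility problem, using a hypothetical feature-set encoding of the problem space: unless the problem is characterized in a searchable form, even an exact-fit pre-built solution stays invisible:

```python
# A hypothetical catalog keyed by problem characteristics. The feature-set
# encoding is invented for illustration; the point is that reuse requires
# a searchable characterization of the problem space.
catalog = {
    frozenset({"orders", "inventory", "invoicing"}): "order-management template",
    frozenset({"ledger", "reporting"}): "accounting template",
}

def find_reusable(problem_features):
    # A catalog entry fits when all of its characteristics appear in the
    # problem at hand; without this lookup, rebuilding is the default.
    for features, solution in catalog.items():
        if features <= problem_features:
            return solution
    return None  # no visible fit, so we invent yet again

print(find_reusable({"orders", "inventory", "invoicing", "returns"}))
print(find_reusable({"image rendering"}))  # None
```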

Consequently, the lack of a regular reuse routine starts with the fact that we have, as an industry, such poor requirements discipline and rigor, if we capture requirements at all. It is hardly surprising, therefore, that we fail to recognize how close our current problem may actually be to one that has been solved before. This lack of awareness, together with our innate invention-is-more-fun bias, quickly justifies our need to rebuild the solution.

Another invention … another solution that will not be saved and not catalogued for reuse. And, of course, another solution whose cost and risk are exorbitantly and needlessly high.

So, it again comes back to requirements. How often do we find that the root cause or critical success factor in some important software engineering issue rests with this thing called requirements (and the process for capturing and integrating them into the necessary work-products)? And yet this facet of software engineering (one might even say this seminal, or foundational, facet) is given embarrassingly short shrift.

No area of software engineering has received less attention than requirements.

Yet it is central to everything we do. It defines quality. It defines success.

(A digression may be helpful. Requirements, and their role in software engineering, are not well understood. The term as used here is not intended to imply any particular representation or manifestation, or even logical sequence, but rather simply a characterization of the problem or opportunity that is sufficiently unambiguous and complete that someone other than the requirements owner can reliably obtain and validate a successful solution, and know it. This characterization is independent of tool sets, database repository designs, and document templates, and may be captured before, during, and after the software solution has been approximated. Two crucial issues remain, however: (1) that the requirements are captured in a rigorous way, and (2) that they are assets of, and owned by, the corresponding user or customer.)

On the positive side of the reuse equation, it turns out that there are only a handful of unique business problems and an astonishingly small set of corresponding software solutions. Most problems that any of us will ever see have been successfully solved before, often many times. (We will perhaps talk more later about this relatively small universe of business and technology frameworks, architectures, classes, templates, etc. that, in fact, span 99% of all business and technology situations any of us will ever encounter. Recognizing this catalog of templates will be an important piece of the puzzle.) This says, however, that virtually every technology project, and soon, in fact, every one, can substantially benefit from reuse.

Further, we had observed earlier that one enormous advantage of the reuse idea is that it reduces the number of steps between problem or opportunity recognition and tangible value experienced by the targeted customer or user. We also pointed out that fewer steps are always better.

So, let's start with zero steps.

In other words, the definition of the problem is the solution. Problem = Solution. Done.

And, of course, if we did in fact capture the requirements in a rigorous way and were also in possession of a special computer whose instruction set included the requirements "language", then in one step we capture the requirements which can now be executed on this special computer to directly solve this problem and deliver its intended value.
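A toy version of this executable-requirements idea, in which the rule format and the tiny interpreter are both illustrative assumptions: the problem statement is data, and the generic interpreter beneath it plays the role of the special computer:

```python
# The "requirements" below are pure data -- the problem statement -- and
# the generic interpreter is the stand-in for the special computer. The
# rule format is invented for illustration, not a proposed standard.
requirements = [
    {"when": {"order_total_gte": 100}, "then": {"discount": 0.10}},
    {"when": {"order_total_gte": 0},   "then": {"discount": 0.00}},
]

def execute(rules, order_total):
    # First matching rule wins; there is no hand-written per-rule code,
    # so changing the requirements changes the behavior directly.
    for rule in rules:
        if order_total >= rule["when"]["order_total_gte"]:
            return order_total * (1 - rule["then"]["discount"])
    raise ValueError("no rule matched")

print(execute(requirements, 120.0))  # 108.0 -- the spec is the program
print(execute(requirements, 40.0))   # 40.0
```

Production rules engines and domain-specific languages are the industrial-strength versions of this sketch; the zero-translation-steps ideal is the limit case.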


Talk about silver bullets.

All we need now is to focus our energy and talent on finding a way to rigorously capture requirements and then on defining this special computer that will execute these requirements.

We are closer than you might think. In fact, we may already be there.


Monday, December 12, 2005

Because we aren't sure it will work.

Why aren't we sure it will work?

Because it is an invention.

Why is it an invention?

Because we don't save.

Why don't we save?

Because it is more fun to invent than to reuse.

That's it.

If we saved more of the stuff we build, and were motivated to reuse-assemble-deliver rather than design-build-assemble-deliver, then lots of problems become much smaller. In particular, the number of steps between problem (or opportunity) recognition and tangible value experienced by (not just delivered to) the targeted customers is dramatically reduced. And everyone knows that reducing the number of steps required to do something is always better. Fewer opportunities for errors lead to fewer actual errors, which lead to fewer defects inserted into the software (yes, someone is putting them in), which lead to shorter time-to-market (even for your "internal" market of users within your company), and so on. All of which accelerates benefit stream realization, and reduces waste, failure costs, and lost or dissatisfied customers, while simultaneously increasing the return on everyone's investment. Removing unneeded steps helps everybody.

So, why don't we save what we build? Why don't we naturally leverage our investment in time and energy and money, so the next time we just pick it up and reuse it?

The technology and the tools are there to do all this. They have been for years.

It would not be too much of a stretch to say that this bias, this tendency when faced with a challenge to view it as a completely new event, represents our single largest impediment to reliable, high-performance, low-cost, rapidly delivered technology solutions that work the first time, every time.

(By the way for those of you keeping track of such things, this is, in fact, the elusive silver bullet you have heard so much about in the past. You know, the "thing" that nothing ever is? At least, this is as close as we are likely ever to get.)

You can see this bias firsthand in the tools that the industry provides the technology community. These tools aren't being forced on developers. The tool companies are responding to demand. Accordingly, overwhelmingly the industry is dominated by tools (and practices) that focus on design-build, rather than reuse. This is because reuse is still considered an interesting, but ultimately second-tier, idea.

So, again, why don't we simply put to better and more comprehensive use the few reuse tools we have, or demand more reuse capabilities from our tool suppliers?

The answer may have something to do with a little thing called requirements.