Saturday, February 21, 2009

Continuous integration is a powerful iterative process of incremental delivery in which an entire business solution is constructed, validated, and accepted package by package in a series of iterations. Each iteration delivers an operational, functional subset of the total evolving solution; when the last package has been integrated and delivered in this fashion, the application is complete.

But, why are continuous integration and incremental delivery so powerful?

It turns out that unmanaged risk is the source of all project problems. And, since risk is a measure of uncertainty, our focus must be on reducing this uncertainty.

After reuse, nothing reduces uncertainty as simply and as effectively as customer usage.

Everything else is merely a weak approximation.

As a result, the use of high frequency iterations in the manner illustrated above has the following benefits:
  • Allows progress to be defined in terms of actual customer capabilities that can be touched and experienced, rather than by often abstract technical or project-oriented stages, and thus serves as a practical progress-management tool
  • Creates a momentum of success since new tangible value is available (typically) every few weeks
  • Ensures an operational subset of the solution is always available, i.e., the most recent failure-free iteration
  • Allows the proposed benefit stream to be realized as early as is feasible
  • Facilitates the transfer of ownership from the developers to the customer—a critical step for success
  • Closes expectation gaps—which may have nothing to do with the stated requirements but only surface when customers actually experience something
  • Identifies defects very early in the project life cycle, which substantially reduces project costs while improving customer satisfaction
  • Increases scheduling flexibility by permitting the delivery sequence to be more easily altered as conditions warrant
  • Isolates problems to the current package being integrated, thus dramatically reducing correction and reintegration costs and schedule delays
  • Promotes a high degree of parallelism in scheduling that can dramatically reduce overall project calendar time
But, how do we define these iterations?

This process starts with the packaging plan, which captures the size, scope, dependencies, and complexity of the packaging structure, and from which the resulting iterations can be dynamically assembled. (For more, see the discussion of Glue and its information model.)

Packages should be small, functionally independent, business-centric feature sets. In general, it is important to define packages and the packaging plan so as to maximize:
  • Tangible business value to the enterprise
  • Momentum of success, especially early in the project
  • Early problem detection and especially the validation of architecturally significant requirements and solution components
Finally, it is important to organize and sequence the packages so that they maximize the total return across all benefit streams for the enterprise.
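As a sketch of what such sequencing might look like, one can treat the packaging plan as a dependency graph and greedily deliver the highest-value package whose prerequisites are already done. The package names, values, and dependencies below are all invented for illustration:

```python
# Hypothetical packaging plan: each package carries an illustrative
# business value and a set of prerequisite packages (its dependencies).
packages = {
    "login":     {"value": 5,  "deps": set()},
    "catalog":   {"value": 8,  "deps": set()},
    "cart":      {"value": 9,  "deps": {"catalog"}},
    "checkout":  {"value": 10, "deps": {"cart", "login"}},
    "reporting": {"value": 3,  "deps": {"checkout"}},
}

def sequence(plan):
    """Greedy schedule: at each step deliver the highest-value package
    whose dependencies have already been delivered."""
    delivered, order = set(), []
    while len(order) < len(plan):
        ready = [p for p in plan
                 if p not in delivered and plan[p]["deps"] <= delivered]
        if not ready:
            raise ValueError("cyclic dependencies in packaging plan")
        nxt = max(ready, key=lambda p: plan[p]["value"])
        order.append(nxt)
        delivered.add(nxt)
    return order
```

A real packaging plan would of course also weigh early problem detection, architectural significance, and the total return across benefit streams, not business value alone; this sketch only shows the dependency-respecting, value-first skeleton of the idea.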

One observation on the size and scope of a package: Size matters, because a project manager will almost always prefer many smaller packages to a few large ones. This is for three reasons:
  • Allows packages to move more rapidly through the implementation process (typically, every two to four weeks), which creates a momentum of delivery and success
  • Presents a smaller conceptual “footprint” to the customer, so that its content is easier to understand, embrace, and accept
  • Increases delivery sequence flexibility by permitting more granularity for rearranging packages to better accommodate changes in project and user priorities, unplanned scheduling contingencies, delays in other packages, etc.
While there are many benefits to this incremental approach to solution delivery, one simple and very powerful benefit is that by chunking the solution delivery one can dramatically reduce sizing (or scoping) risk. This risk is particularly prevalent in situations where there is a high degree of solution complexity, inexperienced teams, unknown or unproven technologies, etc.

The key principle is that by increasing the number of customer delivery points we increase the effective resolution of the solution delivery process. This quickly and routinely flushes out illogical, missing, or incorrect requirements and expectation gaps, so that rework, delay, and overruns are substantially reduced, while the ultimate quality of the total delivered solution is significantly increased.

Finally, this approach of continuous integration and incremental delivery implies two distinct project management skills:
  1. How to manage each cycle, i.e., the requirements specification, solution construction, and validation of the feature set or functional chunk associated with that cycle
  2. How to manage across cycles, i.e., the synchronization, coordination, and integration of each new chunk into an evolving and conceptually whole solution

Unmanaged risk is the source of all software engineering problems. Risk is a measure of uncertainty in the project’s outcome. The greater this uncertainty, the greater is the risk.

Consequently, an IT organization’s quality efforts must be focused on best practices that reduce this uncertainty.

If we look at history we can make a few obvious observations. First, project uncertainty can vary significantly over the life cycle of the effort. In general, it is at its highest when the project begins, since this is when we least understand the project and its dynamics.

As the project advances our degree of uncertainty about the project’s outcomes (e.g., delivery date, cost, quality, adoption rate within the enterprise, ROI, etc.) generally decreases—although in fits and starts and with setbacks and leaps forward—until we reach the end of the project, when our true understanding is at its highest, and our uncertainty is close to zero.

This leads us to several immediate conclusions about how to reduce uncertainty and risk. For example, if risk is at its minimum at the end of any project, and if we happen to come across another customer with exactly the same needs as the first, then it should follow that delivering the first solution again, to the new customer, can be done with very little risk.

Thus, our first axiom: Reuse reduces uncertainty.

Wherever possible, solutions should be delivered by assembling pre-built and pre-validated components, rather than by constructing them. The fastest, cheapest way to build anything is to not build it at all. This is the primary learning that arose out of the many quality efforts during the last fifty years in engineering and manufacturing. Reuse is so powerful that it absolutely demands emphasis. Very few actions that an organization can take will have this level of ROI.

Reuse derives its power because after we build that first solution we have two critical pieces of knowledge that we didn’t have before: (1) A customer together with the requirements, needs, constraints, and expectations of a successful solution, and (2) an actual instance of such a successful solution.

So, whereas at the project’s beginning, we were uncertain about these two points, we now, at the end, have much greater clarity, understanding, and certainty.

This leads us to our second axiom: Customer usage reduces uncertainty.

This is the primary learning from reuse. Everything else (better requirements, inspections, reviews, testing, more effective tools, etc.) is merely a weak approximation to simply getting the solution in front of a customer and letting them experience it in some organized way.

Remember, the quality principles require that any effective process (in this case, the SE process) must listen to the voice of its customer and then match that voice with the voice of the process (the solution). Further, we know from experience that in SE projects, the voice of the customer cannot be completely captured in the documented requirements, but only fully surfaces and reveals itself when customers actually experience something. These missing requirements are often called expectations, and must be identified, captured, and met for the solution to be considered successful. Moreover, the best way to flush out these missing or misunderstood requirements and expectations is to give customers something to touch and use and interact with.

Consequently, if after exploiting reuse as best we can, we find that we must still build all or portions of the solution, then we must exploit the second axiom, customer usage.

We do this by finding ways to structure and package the total solution into a sequence of component sub-solutions, and then incrementally deliver these sub-solutions to the customer throughout the project’s lifecycle, not just once at the end. Each one of these incremental sub-solutions is simply an opportunity to directly engage the customer with a chunk of the evolving total solution and to get feedback on how well that chunk meets the customer’s needs and expectations. And because these chunks (customer interaction opportunities) are spread out over the duration of the project, we learn quickly and reduce uncertainty rapidly.

So whereas the first axiom, reuse, is based on delivering the same solution to a different customer, this axiom is based on delivering different solutions to the same customer. The first reduces uncertainty by leveraging the certainty in the solution, the second by leveraging the certainty in the customer.

An important conclusion to all this is that the higher the delivery frequency (the more chunks the business solution is composed of), the greater the reduction in uncertainty. However, since each delivery incurs a necessary cost (labor and resources are consumed to deliver each chunk to the customer), the question that matters is “How many delivery cycles are necessary to reduce risk commensurate with the proposed energy expenditure?” In other words, how do we balance the dramatic reduction in risk associated with each subsequent customer-delivered chunk against the incremental additional expenditure of resources?

This is, in many ways, the key trade-off for IT leaders and their project managers to more fully understand.
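One way to make this trade-off concrete is with a toy model. Every number below is an invented assumption: each delivery is charged a fixed overhead, and residual risk exposure is assumed to shrink by a constant factor with each customer delivery. The "right" number of delivery cycles then falls out of a simple minimization:

```python
# Toy model of the delivery-frequency trade-off. All numbers invented.
OVERHEAD_PER_DELIVERY = 10   # cost of packaging/delivering one chunk
INITIAL_RISK_EXPOSURE = 500  # expected cost of unmanaged risk at the start
RISK_DECAY = 0.5             # residual risk fraction after each delivery

def expected_total_cost(n):
    # n deliveries: pay the overhead n times, and assume each delivery
    # halves the remaining risk exposure.
    return n * OVERHEAD_PER_DELIVERY + INITIAL_RISK_EXPOSURE * RISK_DECAY ** n

# With these assumed numbers, five deliveries minimize expected total cost.
best = min(range(1, 21), key=expected_total_cost)
```

The point of the sketch is not the numbers but the shape of the curve: risk reduction dominates early, delivery overhead dominates late, and the optimum sits in between. Real projects would need empirically grounded estimates of both parameters.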


Thursday, February 19, 2009

In our practice, it is common that an organization at some point embarks on the visioning journey. Depending upon the company and its history, this journey can take a variety of paths. But in almost all cases, it involves the creation of one or more artifacts: Mission, vision, principles, values, strategy, and the like.

The world has learned several important points about these efforts:
  • It is not about the documents but about the thinking, in particular the need for pervasive strategic thinking. These documents are of course necessary; it is simply that the artifacts themselves are not the goal, nor where the true value lies.
  • All these artifacts should collectively tell essentially the same story, just from a different perspective and with a different focus. They must all reinforce the same central themes and describe the same entity. Otherwise, what gets communicated is confusion and irrelevance, regardless of the actual content. In general, the fewer artifacts the better. The simpler the message the better. The more focused the better.
  • The essential value and power of all these artifacts lies in the degree to which the leadership actually lives and breathes these principles and continually reinforces their essential ideas through frequent and direct interaction with all constituencies: customers, employees, partners, and shareholders. These interactions are opportunities for the leadership to highlight practical examples where these ideas have worked or where gaps remain. It is through this pervasive personal dialogue (not a speech or a presentation), and only through it, that the essence of the ideas can come alive and mean anything; the documents themselves (or posters, web pages, etc.) will always be weak vessels. Consequently, in addition to creating any new visioning documents, the organization must plan how they will be communicated and used as tools for increasing dialogue and for actively and continually promoting their key messages.
  • Finally, when the leadership thinks about this type of new vision or strategy effort, they need to answer a few questions: What is our goal? Why are we changing what we have now? What exactly does success look like? How will we know whether the new "vision" made any difference?
We use the phrase "everyone has a piece of the truth."

If the goal is simply a new document (or paragraph) that does a better job of describing who the organization is now and why it exists, and that then gets published somewhere, then it is a fairly straightforward PR or marketing piece. In other words, good writing skills are all that is needed to achieve this document rewrite.

But, if the goal is change, then that needs to start with some alignment on what is not working now, as well as what the future-state should look like. This requires intensive, inclusive (and safe) dialogue among all constituencies, where the leadership can see who they are (their "truth") through their constituents' varied lenses.

For this goal, the writing is the easy part.


Monday, January 05, 2009

Earned Value Management has been around for years as a tool for improving insight into the status of technology projects, especially for DoD, NASA, DoE, DoT, and other large-scale federal efforts.

Unfortunately, its entire premise, namely that the work done has “value” equal to its budget, is (it would seem quite obviously) fundamentally and completely flawed.

There is no useful correlation between cost and value.

Further, for these terms to have any sensible meaning, they must be interpreted in the eyes of the customer. This is the entity paying the bills (incurring the cost), and who only receives value from their investment when the resulting solution begins to deliver tangible business value to that customer—increased revenue, market share, productivity, quality, responsiveness, or lower costs, delays, and waste. The fact that x% of the budget has been consumed is a useful cost accounting metric (burn rates, variances, etc.), but has nothing to do with any value received by the customer. In fact, often no real tangible business value can legitimately be booked until huge chunks (or even all) of the budget have been spent.

This fatal disconnect from reality has been highlighted in an earlier posting ("Earned Value" has nothing to do with value, nor with "earning" anything except more cost) and in a related posting (Value, not Cost, Accounting: The Only True Window of Progress).

When you examine EVM, you see that its central input is the %-complete judgment: an often arbitrary and certainly highly subjective assessment of how much “real work” has been done to date, compared with how much was “planned” to be completed by now. In the topsy-turvy EVM world where cost = value, this is used as a proxy for progress.

For those of you who feel that EVM is a helpful tool, our solution is simple: define %-complete to be the percentage of the requirements that have been validated, accepted by the customer, and delivered to production.

This is simple. Everything else stays the same. All the existing EVM formulae remain unchanged. The only change is that the proxy for value and progress is not how much of the budget has been spent, but how much of the requirements have been delivered.
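To illustrate the mechanics, here is a minimal sketch showing the redefined %-complete plugged into the standard EVM quantities (BAC, EV, CPI, SPI). The formulae are the conventional ones; all project numbers are invented:

```python
# Standard EVM formulae, with %-complete redefined as the fraction of
# requirements validated, accepted, and delivered to production.
# All project numbers below are invented for illustration.
BAC = 1_000_000               # budget at completion
total_requirements = 200
delivered_requirements = 90   # validated, accepted, in production
AC = 550_000                  # actual cost to date
PV = 500_000                  # planned value to date

pct_complete = delivered_requirements / total_requirements  # 0.45
EV = pct_complete * BAC       # earned value: 450,000
CPI = EV / AC                 # cost performance index (< 1: over budget)
SPI = EV / PV                 # schedule performance index (< 1: behind plan)
```

Note that nothing in the EV, CPI, or SPI arithmetic changes; the only substitution is what feeds pct_complete.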

Of course, this does not address the fundamental flaw. But this simple change not only radically simplifies a key EVM black box (the %-complete judgment), it also begins to subtly shift project planning and management toward thinking in terms of exactly how to deliver requirements to customers more quickly, more frequently, and much earlier in the life cycle.

In other words, how can we actually earn real value?


Thursday, September 11, 2008

One thing you can say about technology projects of all kinds is that the industry has overwhelmed us with systems, technologies, and spreadsheets that inundate us with massive amounts of data about our projects. Unfortunately, even with all this data we seem no closer to answering very simple questions about these projects and their true progress.

True progress must be a measure that gives the customer (i.e., the buyer and user of the solution being delivered) a meaningful picture of when the benefits offered by this investment will begin to be realized. To the customer, the project is simply an investment vehicle. The result of that investment is a solution that when implemented will generate benefits to the organization. These benefits can be anything that the customer believes will help optimize the organization’s performance and assist it in more effectively achieving its goals: lower costs, less waste, faster responsiveness, simplicity of operation, increased market share, reduced outages, and so on.

The point is that from the customer’s point of view, the only measure of progress that matters is ROI. In other words, when will our investment start paying off? When will we see the business benefits of the solution we are investing in?

This is what we mean by value.

The vast majority of project management systems, books, and classes never dwell on this shortcoming, but instead impress on us the importance of the vast analytical array of data that they can collect. Instead of getting insight into what customers really want to know, they shower us with “%-completes”, “actual vs. plan”, and other cost accounting analyses. These are easy and look impressive and have a lot of analytical sizzle, but are essentially meaningless when it comes to understanding, much less predicting, when the customer will begin to see real business value.

The reason for this is simple. Cost accounting is not value accounting.

Further, while we have for decades tried to forge a useful link between cost and value, the results have been deeply unsatisfying. In fact, it is not hard to conclude that our knowledge of costs and burn rates, however deep, brings us no closer to understanding when the system will be delivered, that is, when we will actually realize the promised business value. Such knowledge is, in fact, highly misleading: one has only to look at the typical phenomenon of a task taking three times as long to finish the last 20% as it did the first 80%.

Fortunately, the answer is really quite simple. The elusive value metric we are seeking is right in front of us. It is requirements. More accurately, validated and delivered requirements.

See Golden Triangle diagram.

To see how this works, the diagram illustrates what one could call the golden triangle of value. This golden triangle is true of all technology projects of all types and sizes, and is completely independent of all methodologies, software engineering approaches, or project management styles.

What the golden triangle says is that the way to manage value delivery to your customer is to aggressively manage these three artifacts and their connections.

We start with the premise that customers are seeking not just solutions, but quality solutions. That is, solutions that perform as they expect, all the time, every time. We know from the quality industry that quality is not a subjective sense of “relative goodness” or some arbitrary opinion, but is rather simply meeting the requirements that the customer has laid down for that solution. The more effectively the solution meets those requirements, the higher the quality of the solution.


So, as we see in the diagram, if we can say with precision that the requirements fully define the solution we are seeking, and that we have test cases that cover those requirements, then, when we execute all those test cases against the evolving solution without generating any failures, we can say with confidence that the solution meets the requirements, that is, that we have a quality solution.
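The logic of that paragraph can be sketched directly. In this toy illustration all requirement and test-case identifiers are invented; the check simply asserts full coverage and zero failures:

```python
# Toy illustration of the golden triangle: requirements, the test cases
# that cover them, and the results of executing those tests.
requirements = {"R1", "R2", "R3"}
coverage = {                 # test case -> requirements it validates
    "T1": {"R1"},
    "T2": {"R2", "R3"},
    "T3": {"R3"},
}
results = {"T1": "pass", "T2": "pass", "T3": "pass"}

covered = set().union(*coverage.values())
uncovered = requirements - covered
failures = [t for t, r in results.items() if r != "pass"]

# Quality, in this sense: every requirement is covered by at least one
# test case, and no executed test case produced a failure.
quality_solution = not uncovered and not failures
```

The two sets that matter for management attention are `uncovered` (requirements with no test case, i.e., claims we cannot yet validate) and `failures` (validated gaps between the solution and the requirements).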

Accordingly, the value accounting metrics become:

  • Productivity, the number of validated and delivered requirements per labor hour (or labor dollar)
  • Cycle Time, the number of calendar days necessary to validate and deliver a requirement
  • Earned Value, the ratio of the number of requirements that have been validated and delivered to the total number of requirements in the solution
Naturally, there are more value metrics than these three, but they provide the foundation.
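As a sketch of how these three metrics might be computed from per-requirement records (the requirement identifiers, dates, and hours below are all invented):

```python
from datetime import date

# Invented records for requirements that have been validated and
# delivered: when work started, when delivery completed, labor hours.
delivered = [
    {"id": "R1", "start": date(2009, 1, 5),  "done": date(2009, 1, 19), "hours": 40},
    {"id": "R2", "start": date(2009, 1, 5),  "done": date(2009, 1, 26), "hours": 60},
    {"id": "R3", "start": date(2009, 1, 20), "done": date(2009, 2, 2),  "hours": 50},
]
total_requirements = 10   # total requirements in the solution

# Productivity: validated and delivered requirements per labor hour.
productivity = len(delivered) / sum(r["hours"] for r in delivered)

# Cycle time: average calendar days to validate and deliver a requirement.
cycle_time = sum((r["done"] - r["start"]).days for r in delivered) / len(delivered)

# Earned value: fraction of all requirements validated and delivered.
earned_value = len(delivered) / total_requirements
```

The same records, aggregated per iteration, also give the trend lines (is productivity rising? is cycle time shrinking?) that make these metrics useful for prediction rather than just reporting.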

The key take-away is that you should augment your cost accounting with value accounting if you truly want a window into how your project is doing and when it will be done successfully. And the key to value accounting lies in understanding requirements and how effectively they are being validated and delivered to your customers.


Thursday, August 09, 2007

The sole purpose of measurement is learning. It is one of the highest forms of inquiry. Measurement offers reliable and often unexpected insights into the true nature of things. Further, this knowledge (true knowledge, if you will) is the key to improvement. Trying to improve something (a process, a product, yourself, …) when you lack true knowledge only makes things worse, not better. It is meddling, not managing.

Measurement is a process, of course, and as such requires all the necessary constituents of any process, namely method, tools, talent, etc.  In addition, for the measurement process to yield reliable insights, rather than distortions and “noise”, this process must itself be stable.

A common obstacle to effective measurement is the complexity of the financial and statistical concepts, and the difficulty of understanding how and when to apply these ideas for optimum return to the enterprise. The ability to coalesce and distill the multitude of ideas, formulae, and tools into a simple, pragmatic approach for inquiring into the dynamics and operational behavior of a process or operation is a hallmark of effective management and governance.

We have implemented many such measurement and governance approaches for our clients, in a variety of settings and contexts. We have found that Doug Hubbard’s new book, How To Measure Anything, is an important step in providing a window into many of these complexities, and it offers a variety of straightforward approaches to measurement that, we feel, preserve its fundamental role as a vital instrument of inquiry and knowledge.

Finally, when we refer to measurement as a form of inquiry, we mean a very special type of inquiry, namely inquiry into variation and its root causes. It turns out that isolating the underlying drivers of variation is the first step to any sustainable, predictable improvement program. Without an effective measurement process, management lacks the knowledge necessary to successfully guide the enterprise from where it is now, to where it desires to be.


Wednesday, December 06, 2006

Every process has a customer. The customer consumes or utilizes the outputs delivered by that process.

The voice of the customer is their expectations regarding what these outputs must look like and what they must do. The voice of the process is its output.

The goal of the process owner is to align the voice of the process with the voice of the customer. This alignment is what we mean by process management. One of its primary tools is continuous improvement.

Consequently, in order to “write better requirements”, we must first examine the process that produces these requirements, and then seek to improve that process.

Let’s start at the top. The solution delivery process delivers a business solution to its customer (the so-called end customer). We can think of this process as having two sub-processes: Requirements engineering and solution construction. (Note that these two sub-processes can be executed once for any given solution, or, if more iterative approaches are being used, they can be executed multiple times, where each cycle carves out functional subsets of the evolving total solution. In either case, the sub-processes remain the same.)

The requirements engineering sub-process generates the requirements specification, in other words, the requirements in our discussion. The solution construction sub-process, in turn, translates that requirements specification into an operational solution that delivers the intended value to the end customer. For a project to be considered successful this delivered value (the output of solution construction) must match the needs and expectations of the end customer as represented by the requirements specification (the output of requirements engineering).

Consequently, to write better requirements we must first seek to improve the requirements engineering process. This means aligning the voice of its process, the requirements specification, with the voice of its customer, in this case the needs and expectations of its downstream process, solution construction.

The needs and expectations of the solution construction process are indeed straightforward: A rigorous, unambiguous specification of the needs and expectations of the end customer suitable for design, construction, and implementation. Consequently, the voice of the requirements engineering customer, solution construction, consists of two required elements:

  • A clear specification of exactly what the end customer is expecting
  • A specification suitable for implementation

These two required elements imply that an effective requirements engineering process must not only capture what the end customer wants, but must capture it in a manner so the downstream process can implement a solution that delivers the intended value. This is why the quality of the requirements specification is so vital to project success. Since the solution (for it to be a success) must actually deliver tangible value in the real world, then to the extent that the requirements specification has gaps or inconsistencies, they will be “corrected” by downstream workers (designers, developers) in order to “complete” the solution so that it can be implemented. In other words, the choice is not whether one will add the necessary details and rigor needed for implementation (since nothing can be implemented without it), the only choice is who will do it and when.

Decades of research and analysis have demonstrated the overwhelming desirability of the sequence: First clearly and unambiguously understand the problem, and then, and only then, seek to solve it. In other words, requirements first, then solution.

Regardless of tools, formats, and practices, the objective of any requirements engineering process is to understand (as best we can) exactly what is needed, and then to capture that understanding in some tangible form (write it down somehow) so that it can be shared and understood by others with its semantic integrity preserved. In other words, the voice of the end customer (their needs and expectations as they understand them) must match the voice of the process (the requirements specification).

We should hasten to add that a primary way anyone understands anything is by means of successive approximations: An initial level of understanding is achieved, which is then “tested” in some way. The results of that testing are then incorporated into a next level of enhanced understanding. The cycle continues until either we run out of time or interest (the typical case, but certainly not desirable), or until we have reached a targeted level of requirements quality that characterizes the degree of understanding that we believe is necessary for success (the more desirable alternative).

In this represent-test-refine-retest cycle there are many methods for representing and testing these successive approximations to understanding.

Representation methods include English text, use cases, entity-relationship and related diagrams, UML-style object models, rigorous specification languages, and software iterations and prototypes. Testing methods typically accompany each representation technique. But dialogue-based review sessions are by far the most prevalent: we sit down in front of the subject matter expert or knowledge worker, ask them questions, and let them elaborate and clarify our understanding. Accordingly, since natural language (English in our case) is the most prevalent form of problem representation, as well as the basis for the corresponding review (“testing”) sessions, let’s examine more carefully exactly what we mean by writing better requirements in English.

A quality requirement exhibits the following four attributes:

  • Completeness. The requirement must contain all the information necessary for full understanding by the downstream customer. All assumptions must be made explicit.
  • Relevance. The requirement must exclude any information not necessary for full understanding by the downstream customer. Irrelevant or extraneous material must be removed.
  • Precision. The requirement must have only one interpretation by the downstream customer.
  • Context. The requirement must explicitly identify all intended variations in usage and meaning. All pertinent meanings must be clarified. For example, the term order may have several contexts: Customer order, supplier order, purchase order, back order, etc. To the extent that these represent legitimate contexts and have different meanings, then they must be explicitly distinguished in the requirements materials.

Note that there are two sources of information that are necessary for full and unambiguous understanding: The requirements specification itself and the memories of the downstream customer. One need not document and explain everything. One only needs to capture those terms and phrases that are not part of the collective knowledge of the downstream customer. This is especially true of the many common words that are intended to retain their common and plain definitions. This determination, namely, which terms have common meanings that are also the intended meanings, and which terms require explicit clarification and elaboration for the downstream customer is a crucial role for the requirements analyst.

It should also be pointed out that quality requirements are testable requirements. This testability criterion is another perspective on what we mean by a quality requirement. Testability is the characteristic of a requirement whereby someone other than the author (for example, a test analyst, another type of downstream customer) can engineer a set of test cases and expected results that will validate that requirement, that is, determine whether it has been successfully implemented by the target solution. Testability is an exit criterion for the entire requirements engineering process.
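As a small, hypothetical illustration of testability: the requirement below is invented, but it is precise enough that someone other than its author could engineer test cases and expected results from its wording alone:

```python
# A hypothetical requirement, and the kind of test cases a downstream
# test analyst could engineer from it if the requirement is testable:
#
#   "The order total shall equal the sum of the line-item amounts,
#    less any order-level discount, and shall never be negative."

def order_total(line_items, discount=0.0):
    # Minimal sketch of an implementation satisfying the requirement.
    return max(sum(line_items) - discount, 0.0)

# Test cases with expected results, derived directly from the wording:
assert order_total([10.0, 5.0]) == 15.0                  # sum of line items
assert order_total([10.0, 5.0], discount=3.0) == 12.0    # less the discount
assert order_total([10.0], discount=50.0) == 0.0         # never negative
```

A vaguer phrasing ("the total should be calculated correctly") would admit no such test cases, which is exactly how the testability criterion exposes a low-quality requirement.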

We would like to acknowledge our colleague David Gelperin for many insights related to this posting.


Thursday, June 08, 2006

“…self deception, the root of all evil” – Lazarus Long


Does this apply to organizations as well as individuals?

An individual can do any number of things while deluding himself as to his motives. This can be very destructive to the individual, but the damage is usually limited to that person. In extreme cases many lives may be affected. This type of self-delusional behavior, when indulged in by an organization, can be far more destructive.

The ability of management/leadership to militantly seek and destroy these delusions is crucial if any lasting change to an organization is to be accomplished.

While most management will agree that their mission is to create an organization that values and seeks the right changes, they may have inadvertently created (or, more likely, let stand) an environment and culture that values exactly the opposite. The causes can be manifold, but at some level they all boil down to a lack of leadership. Everyone knows that good leadership must do what it says. A leader who values change must also create a culture that values truth.

Change must be undertaken in response to facts. The real facts. An organization that makes changes on any other basis is just twisting dials, hoping that things will get better. This leads to the situation where changes are attempted, effectively at random, with the clear potential to do more harm than good. Many organizations, unfortunately, will make decisions based not on the actual facts of the situation, but only on the facts that they permit. A management style that does not allow truth (facts) to bubble up to the decision-makers always creates an environment where change is not only undervalued, but deliberately eschewed. Facts, even (especially?) the ugly ones, must be reported up the management chain deliberately and forthrightly.

What could cause employees and managers to deliver or elevate data that is incorrect? The obvious answer is to hide shortfalls in performance. Weaknesses in planning, forecasting and execution can all lead to underperformance that has the potential to be masked. Of course, underperformance must be properly identified before it can be addressed. Masking this underperformance, either in toto, or by attributing it to the wrong underlying causes, may stimulate a change that is both wrong and potentially harmful. Envision a doctor prescribing medicine that makes an affliction worse because of a misdiagnosis. Do that often enough and a patient will either find another doctor, or ignore the one he has. If he doesn’t take any of the medicine that the doctor prescribes, he knows he won’t get sicker, and might even get better on his own. So it goes with the organization. Management ‘prescriptions’ are ignored in favor of the status quo, because the changes prescribed in the past made things worse. Ignore the change directives, and work a little harder, and maybe things will get better on their own.

The leadership issue to address here is the performance ethos that failure is not tolerated. Like any other process in life, what you measure is what you get. If failure is punished, then all efforts will be made to avoid failure. Not only does this create a culture that is afraid to innovate; the culture also learns to undervalue and/or fear facts, as they can only betray, to all and sundry, the failure that will not be tolerated. Response: the failure to uncover and address problems is the behavior that should not be tolerated. Management must create an environment that militantly pursues, discovers and values facts, as they are the true indicators of problems. Tolerating a lack of response to reality is the behavior that an organization should eschew. Dogged pursuit of the facts as a window into the true operational state of the organization should be the example set by leaders, indoctrinated by management and exemplified by line personnel. The culture that values facts as the real and correct precursors for change will embrace the changes that result.

Another cause of myth-based decision making is that the real data is inaccessible or unknown. The data may be buried in multiple systems and not available for discovery, or the managers may not know how to assemble the data pieces they have into a cogent set of facts that represent reality. This much more mechanical topic will be addressed in future posts.


Thursday, April 06, 2006

Several things have become clearer over the years.

One is that the CIO role is not a technology job; it is a business value job. In this way, of course, it is really no different than any other executive leadership position. This requires the manager both to manage the performance of their vertical unit---their operational area, whether it is an SBU or a functional department like IT---and, at the same time, to operate horizontally across the enterprise to optimize the company's total performance and thus maximize the aggregate value of the business. (The exact vertical-horizontal mix depends on the particular role and current operational challenges.)

Another is that organizations are the way they are because of their leaders. In other words, culture is not some magic all-pervasive "ether" that mysteriously infuses the workforce. Culture is simply the collection of habits, customs, stories, assumptions, and aspirations that the leadership has promoted, supported, and sustained. It is the way it is precisely because that is what leadership wants and expects, whether consciously or not. For example, ineffective leaders are often totally unaware of the effect they have on their colleagues, and are amazed to learn that what they think is positive behavior actually is viewed negatively.

Accordingly, high performance organizations are, in fact, high performance precisely because they have high performance leadership. In other words, you get what you are. Companies perform at a high level only because (and until) the leadership performs at a high level.

As a result, high performance is not about advanced technologies, or best practices, or the latest sound-bites.

It is about leadership.

This is because if the process infrastructure, governance practices, and process maturity are not up to snuff, then high performance leaders will fix them. They will solve those problems in an optimal manner precisely because they are high performance individuals themselves. That's what they do. That's who they are.

A corollary to this second point is that high performance leaders tend to share several important characteristics---characteristics that can help locate this talent in your company. The most central of these is the preeminence of values over behavior. That is, their leadership approach fixates on a few fundamental principles---sometimes articulated, sometimes implied---that define a collection of preferences for what is important, for what the organization stands for (and against). This is in opposition to the much more prevalent approach that is behavior based, that is, focused on control, micro-management, and on altering the way people actually do their work---changing their methods, practices, tools, etc.---with the idea that this will make them better.

The problem with the behavior based approach is that it forces a set of "solutions" that presuppose a shared understanding of the "problem". And if the populace does not share that same sense of the problem, and they often don't, then, to them, the prescribed "solution" is not really an answer for them. In fact, it is a burden, another unwanted administrative intrusion into their world.

On the other hand, people with a sense of shared values will naturally seek out the tools and methods that are consistent with their common value system.

A second characteristic of high performance leaders is their emphasis on learning. This comes from their insatiable curiosity about how things work and don't work, why things are the way they are. This tends to also include a strong desire for challenging long-held assumptions and for fact-based decision making. This characteristic also shows up in their desire for a collaborative workforce---an environment where ideas matter more than rank or title, and where everyone has a piece of the truth.

Thirdly, these individuals have a very tight, almost laser-like, focus on customers and how best to deliver value to them, whether these are the external enterprise customers or the many internal customers. This customer centric bias informs much of their thinking about priorities, and in particular sharpens the dialogue about what really adds value, since in this world the customer defines value, quality, and completeness, not the supplier. This external focus has a remarkably refreshing, simplifying, and cleansing effect on the entire organization.

Another characteristic that appears common is their sense of personal accountability, both for themselves and for their workers. This idea that we must absolutely honor all commitments becomes a reverence for doing exactly what we say we are going to do, without excuses, without surprises, starting with top management. This is a very powerful and compelling example for the whole organization. Further, these leaders provide safety and support for the workers so that everyone can constructively hold each other accountable for the decisions they make, including (and especially) top management. This characteristic also tends to promote an environment of ownership where responsibility (and authority) is delegated to the lowest level practical, and closest to the customer. This ownership generates strong feelings of personal responsibility for performance, continuous improvement, and getting better every day. A pride in workmanship naturally emerges.

Finally, high performance leaders create sustainable high performance organizations because when they think about solutions they first think about strategy. They have a need to understand exactly where the company is going and what success will look like when they get there. This vision of a future state, while largely conceptual, is a powerful communication and motivation tool. Once the vision comes into focus, they turn their attention to the business and technology architecture that serves as the framework for realizing that strategy. This framework is a picture of a possible implementation of the strategy. Further, the architecture provides the organization with a blueprint for success. It shows how all the business pieces fit together and how they interact. In other words, this architecture defines the future business model that will deliver on the chosen strategy. But, even more, it becomes a road map for defining and prioritizing investment and actions. As such it serves as the overall strategic plan for the enterprise, since the journey to the desired future state involves incrementally adding or replacing each architectural element with its improved solution. In this way, solutions are never simply tactical, short term, disconnected fragments, but are integrated components of a unified whole that are always aligned with the business as they simultaneously advance the company, step by step, towards its goals.


Monday, April 03, 2006

What do we do now? What are the next steps? ….

We see it all the time in our jobs and professions. This insatiable need to take action. This desire for "progress". We seem to be forever in various stages of defining action plans, setting targets, organizing teams---all focused on "making something happen".

And, of course, they do create action. Things will "happen". After all, the very nature of these efforts is activity, putting things in motion.

But, are they the right things? And, even if they may be, are they sustainable? In other words, will what we are doing lead to sustainably improved business performance?

Unfortunately, history has not been kind. These efforts often lead nowhere, or if they do create positive results, they are frequently illusory and unsustainable. Interestingly, the problem typically lies not with the work itself. Most organizations can do a credible job of carrying out action plans if properly nourished (capital, talent) and motivated (leadership). The issue isn't necessarily one of management, or the inability to do the work.

Rather, more often than not, the organization is simply working on the wrong things, in the wrong order. They are doing a good job, even an excellent job at times, of simply delivering the wrong solutions faster.

What is needed is to make sure, as leaders, that the enterprise is focusing its limited talent and capital on the right topics. So, when they do their good job of execution, it results in impacts that really matter to the performance of the company.

This relentless focus on the right topics defines the crucial gap between mediocre organizations and high performance organizations.

And, it all starts with the right questions.

We believe that nothing influences high performance and sustainable success more than making sure that the leadership of the enterprise remains focused on the right questions. The answers to these questions---the solutions---will at times be tricky, but more often than not, if you make sure you are answering the right questions, then it is much more likely that the results you get, even if suboptimal, will have a far greater impact on success than an excellent execution towards the wrong goals.

Consequently, a key issue for an organization is to ensure that the leadership is mercilessly focusing on the right questions. And, we mean focus with laser-like intensity. This means two things: There should only be a handful of questions (a small number concentrates the mind and the organization) and, these questions need to be continually reviewed to ensure they remain the most relevant issues for where the organization is at that moment.

Because alignment of the leadership is central for sustainable success, these questions must necessarily arise within this group. Moreover, the most compelling questions for the leadership team tend to be a variant of "who, exactly, are we?". The questions below can be a useful starting point for getting to this answer:

  • How do we add the most value? What is our most compelling value proposition?
  • What customer segments have the greatest need?
  • What is our future business model? How do we serve those needs?
  • What business and technology capabilities best deliver that model?
  • How are those capabilities best provisioned?
  • What does success look like, exactly?
  • What is the case for change: Why do we need to do anything materially different than we do now?
  • How have we agreed to hold each other accountable for the decisions we have made?
  • What are the highest return, lowest risk actions for addressing these questions?

In many companies, unfortunately, it still remains difficult to explore these very existential issues. They are often viewed as too soft a topic, or as not being relevant, or actionable, but in our view an organization that has a deep connection to its roots, to its sense of who it is, and how it chooses to deliver value to its customers is an enterprise that knows where it is heading. This is an organization that has the confidence to make the tough choices about where and how to compete, and to do it in ways that preserve its integrity and authenticity. And, an organization that acts authentically, is naturally sustainable.


Monday, March 20, 2006

To improve customer perception of our products and services we often ask the customer directly – what do you want?  

There are many common sources of such feedback: focus groups, surveys, call-backs, etc.

Most often these actions are very biased and therefore misplaced.

  • Focus groups with ‘pre-selected’ customers give responses biased in favor of persons doing the selection.
  • Surveys get replies from people who ‘like’ surveys – i.e. more bias.
  • Call-backs typically reach the complainers – did I hear bias?
  • Etc.
So how do you get unbiased feedback? You can’t. By definition human feedback is biased.

Customer feedback should be reserved for addressing specific narrow issues: complaints, failures, accidents, etc.

If you want broadband quality improvement you go to the experts – Deming, Juran, Crosby, Feigenbaum, Ishikawa, Garvin, Taguchi, etc. All mention customers in passing, but get down to business with “PRODUCT”: Zero Defects, Six Sigma, quality circles, TQM, testable/tested requirements, ‘ilities, SPC/mature processes, focused metrics, trained workers, and continuous-continuous improvement.

Biased customer feedback leads to gimmicks, fads, crazes, etc., which move markets in the short-term. Good, value-priced products survive and win long-term.

To summarize, we are faced with the old dilemma: opinion vs. measurement. Customers provide opinion. Product needs can be identified and measured only through rigorous/objective analysis.


Tuesday, February 14, 2006

Organizational transformations are among the most difficult undertakings that leaders face. Whether the organization is a nation or a department, history is replete with false starts, ugly outcomes, unintended consequences, and abandoned efforts.

There are, of course, many reasons why these efforts are so intractable. Probably the most striking is that they are not "projects" in the traditional sense. They resemble more an organic adaptation. That is, less an activity that can be centrally planned, prioritized, scheduled, and controlled, and more like an activity in which one carefully observes various stimulus-response behaviors in order to better grasp the underlying, often hidden, survival mechanism that ultimately drives the decisions that the organism makes to persist and succeed in its chosen competitive landscape, and then to apply this knowledge to influence the organism to evolve in the desired way.

Moreover, just focusing on changing behaviors is doomed if those behavior changes are at odds with this underlying survival mechanism. As we have seen all too often, the organism may appear to comply for some period of time---constrained as is often the case by focused, determined external forces and coercion (i.e., management), or sometimes inadvertently compliant through its own internal lapses which periodically distract every organism. But, soon, through attrition and the daily grinding away at these artificial fetters by the organism's unrelenting survival mechanism, it breaks free. Further, the organism's own survival reflexes react in unpredictable ways to these crude assaults resulting in the expensive, time-consuming, and ultimately unsatisfactory outcomes of these behavior-based transformations that we now come to expect.

A more successful perspective lies in recognizing that these transformations are an act of war.

A war, not of territory, or of behaviors, but a war of values.

In other words, organizational transformations (regardless of how they may be spun) essentially seek to replace the current set of values with a different set of values. Since the values of an organization essentially define its foundational principles---mission, purpose, and meaning---transformations, if they are to succeed, must necessarily attack this survival mechanism directly by focusing explicitly on the underlying values that shape its actions.

It should be pointed out that the issues at stake are typically not as clear cut as simply replacing one set of values with a completely new set. What is often found is that the desired set of values is not really new, but can be found among the ideas that the current organization already deems important; it is rather a question of priority, emphasis, and interpretation. Regardless, the fight is over whose values and interpretations will prevail as the governing principles for the organization.

This declaration of war concept is vital because it unambiguously communicates to the organization the existential import of the undertaking. If an organization fools itself into thinking that all that is needed is a bit of behavior modification, then disaster lies that way. Expectations, risk-reward assessments, investment decisions, and priorities are all quite different depending upon which path you choose.

Consequently it is vital to clearly characterize transformations as significant "bet the organization" or "burning platform" style decisions.

Behaviors are the only reliable windows into values---what you stand for is most clearly revealed by the decisions you make and the actions you take. Accordingly, an organization that can quickly read the signs in its behavior, identify the misalignment in values driving that behavior, and then respond with leadership actions focused on restoring the proper values is on the path towards long-term, sustainable success. On the other hand, simply treating the behavior itself as the problem and merely correcting the behavior without addressing the underlying value system is a recipe for disaster and low morale.


Tuesday, January 31, 2006

“…self-deception, the root of all evil” – Lazarus Long

Does this idea apply to organizations as well as individuals?
An individual can do any number of things while deluding himself as to his motives. This can be very destructive to the individual, but the damage is usually limited to that person. In extreme cases many lives may be affected. This type of self-delusional behavior, when indulged in by an organization, can be far more destructive.

Obvious examples of an organization's self-delusions include a company's proficiency at its core capabilities (i.e., the ability to deliver software well), its ability to manage and comply with internal processes, its ability to provide services both internally and externally (although providing services externally usually involves a much more effective, and not so easily ignored, feedback loop---i.e., your customers stop paying you), its ability to adapt to changing market conditions, etc.

An individual can seek therapy for their self-delusions, and perhaps learn to be more truthful with themselves. What kind of therapy exists for an entire organization that indulges in this kind of behavior?

Over the coming weeks I will explore particular types of this deception, its manifestations and consequences for an organization (both short and long term), and what organizations of all sizes can do to eliminate this destructive thinking and improve their performance.


Tuesday, January 03, 2006

Projects are successful when the customer says they are. Customers say a project succeeds when it meets their expectations. Expectations are personal visions. The test drive is the most effective way to determine whether a project's results meet a customer's personal vision and expectations.

Every other technique, at best, is a distant second.

Consequently, to achieve project success, the project manager's overarching mission must be to find a way to deliver tangible value to the customer soon (the test drive), and often, and to then incorporate the learning extracted from each such mini-delivery so that it informs the next mini-delivery. This means we want lots of test drives, each one necessarily revealing, like nothing else really can, the conceptual distance between the customer's personal vision and the actual experience itself. The project manager's goal: Reduce that distance to zero.

The only difference between the first mini-delivery and the second, or the fifteenth, and the last mini-delivery is that nothing follows the last one. You stop delivering because you have now met the customer's expectations---and you know it, because the customer tells you so.

Another project successfully completed.

This seems simple and straightforward.

Yet, we know that the vast majority of time, effort, and talent is overwhelmingly spent on work that has little (and often, nothing) to do with this overarching mission. In fact, in most projects, there is a preponderance of activities that actually delay delivery, or reduce the number of deliveries. Or that make all this frequent delivery stuff very difficult and expensive. All you have to do is look at your last project, and review its time sheets and actual progress, to see this unfortunate truth firsthand. (For example, how much time and effort was spent before the first actual delivery of value to the customer? Between the first and second? …)

Also, when we speak of test drives and delivery of value, we don't mean reading documents, reviewing requirement models, or having someone sit in front of a non-functional "prototype" that is often nothing more than a partially navigable slide show. Those don't deliver value to the customer. (They may deliver value to you, but not to the customer.)

A test drive is just that: A customer actually inhabiting and experiencing a controlled portion of their new operational reality. That is, operating the technology, interacting with business processes, solving problems, connecting with the real world, getting the "feel of the road".

This is not to say that those other work products are not necessary and useful. They can be. It is only to say that they offer very little insight into the customer's expectations and the distance between their vision of what the new technology should do for them, and what it will actually do---and how it will actually do it.

The upshot of this is again very straightforward. If we agree that project success is defined solely by the customer and their expectations of what the project's results must do for them, and that the window into those expectations is the test drive, then we should naturally see a preponderance of tools and techniques that are designed to

  • Decompose the problem/solution into its constituent chunks of value
  • Package these value chunks into mini-deliveries suitable for test drives
  • Manage the assembly and construction of each such mini-delivery package
  • Manage the test drive itself and the learning derived from it
  • Validate the evolving total solution

These are the activities that should comprise the bulk of time and effort on the project. Everything else we do in the project should be subservient to these activities. We mean subservient in two senses: If it doesn't advance these activities and thus the project manager's overarching mission, discard it; if it does, make it as simple and fast as possible.

In other words, if the test drive is the engine that drives success, then we should see this reflected in approaches that make it easy to define, execute, and learn from these test drives. Further, we should see simple tools for efficiently managing this repetitive cycling of incremental solution delivery.

Ask yourself this question: For a recent project or for some typical, representative project in your company, what was its average test drive cycle time?

The test drive cycle time can be approximated by dividing the project's total calendar time (starting with any planning, requirements, and proceeding through the final implementation step) by the number of tangible value delivery events (i.e., real test drives by real customers, users, knowledge workers).

What must happen to cut this number in half each year over the next five years?

That, my friends, is a business-IT strategy worth investing in.


Thursday, December 22, 2005


Value-Driven, Risk-Adjusted, Solution Delivery

Over the years there have been many discussions, as well as actual offerings, that purport to be methodologies or processes for delivering technology projects. It has long been our view that all of these seemingly varied techniques are simply special cases of a single common general process. This super class of software engineering process has never been afforded much discourse, in spite of the continuing ineffectiveness of all the current variant approaches. Yet, it is only by understanding the dynamics and performance of this natural super class that we can begin to learn how to improve our ability to deliver value to our customers.


That is the key to understanding this software engineering super class. In other words, the primary directive (if you will) must be to deliver value to customers. Otherwise, why are we doing any of this?

Consequently, our label for this super class of software engineering process is value-driven, risk-adjusted, solution delivery.

Let's look at this label more closely. We've mentioned already the importance of value. This is the idea that the customer must be the sole arbiter of quality and fitness, and that this externally focused (i.e., external to the project team) business value perspective must drive all project decision-making. In other words, when we are deciding delivery sequences, priorities of requirements, etc., we should be guided by what delivers the most value to our customer.

The second part of the label, risk-adjusted, refers to the importance of understanding that unmanaged risk is the source of all project problems. Technology projects are, after all, business investments. And, as with any business investment, its return (that is, its value in the form of benefits like increased revenues, reduced operating costs, higher utilization of resources, etc.) must be sufficient to compensate the investor (the customer, sponsor, or owner) for the risks they are taking. Accordingly, an effective software engineering process must assist project management and the project team in continuously identifying risk and then provide mechanisms for aggressively removing or mitigating these solution acquisition risks. If this isn't done properly, then the economic value of that investment can be substantially impaired.
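This idea that return must compensate for risk can be sketched numerically. As a minimal illustration (the cash flows and discount rates below are hypothetical, chosen only to show the mechanics), the same benefit stream is worth less to the investor once risk is priced in:

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Hypothetical project: 100 invested now, 60 of benefit in each of two years.
flows = [-100, 60, 60]

low_risk  = npv(flows, 0.05)  # modest discount for a well-understood project
high_risk = npv(flows, 0.30)  # steeper discount demanded when uncertainty is high
# low_risk is positive; high_risk is negative: unmanaged risk alone
# can turn the same benefit stream into an investment not worth making.
```

Reducing project risk, in this framing, is equivalent to lowering the discount rate the investor must demand, which directly raises the economic value of the investment.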

Finally, we come to the phrase, solution delivery. This phrase emphasizes that the primary goal of the software engineering process is to actually deliver tangible, usable value in the form of complete business solutions to the customer---not simply building or installing software. It is the idea that delivered solutions operating in the customer's real world are the key objective. That is, enabling and enhancing a company’s ability to grow and compete in its marketplace through the acquisition (whether built or bought) of complete, fully integrated, and organizationally unified business capabilities.

So, in our view, value-driven, risk-adjusted solution delivery is the mission statement for all meaningful software engineering processes.

When viewed in this context, it becomes clear that a critical success factor for such a process is the ability to rapidly and reliably make risk unambiguously visible to the project team and then to provide the means to mitigate that risk so that it can be managed. By the way, by managing risk, what is meant is that

  • We know what the exposure is (what creates the risk in the first place)
  • We know its severity, should it occur (i.e., how it reduces, or delays, the benefit stream, increases the costs, etc.)
  • We know the likelihood of it occurring
  • We know the remedial actions that can be taken that will preserve the investment's return potential as compared with alternate uses of those resources (money, talent, technology)
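The four bullets above amount to a risk-register entry. A minimal sketch, assuming a simple expected-impact ranking (severity times likelihood); the field names and register entries are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    exposure: str      # what creates the risk in the first place
    severity: float    # impact on benefits/costs if it occurs, in currency units
    likelihood: float  # probability of occurrence, 0..1
    remedy: str        # remedial action that preserves the return potential

    @property
    def expected_impact(self):
        """Severity weighted by likelihood: a crude prioritization score."""
        return self.severity * self.likelihood

# Hypothetical register; rank so the team attacks the worst exposure first.
register = [
    Risk("untested vendor API", 200_000, 0.4,
         "build an integration spike early"),
    Risk("ambiguous reporting requirements", 50_000, 0.9,
         "test-drive a report mock with users"),
]
worst = max(register, key=lambda r: r.expected_impact)
```

Note how the ranking can surprise: the less likely risk dominates here because its severity is so much larger, which is exactly the visibility the process must provide.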

Risk is essentially a measure of the variability of the project's outcomes (principally seen or felt in its benefits, costs, quality, etc.). The greater this variability, the greater the overall project risk. Consequently, risk is a measure of uncertainty. The less certain we are regarding a project's outcome (or any single dimension of its outcome, say, its total cost), the greater the likelihood that particular outcome will not meet its target. Thus, the greater the risk.

So, one important question that arises is how do we reduce uncertainty?

It turns out that this is a telling question. Because, when we look carefully at uncertainty, we see that uncertainty in the project's outcomes is zero (or, pretty dog-gone small anyway) at the end of that project. This is because we now know (since we are done) exactly how much it will (i.e., did) cost, and whether the benefits were actually realized at the targeted levels and time frames, etc. Accordingly, the day before the last day, the risk is higher but only slightly higher; a week before, the risk is even higher, and so on, until the risk reaches a maximum on day one of the project, where uncertainty is typically the highest. (This is not to imply that risk is a linear function of elapsed project time. It is, in fact, quite non-linear. But rather, to say that risk can only be reduced by capturing the appropriate learning that can only come from executing the project. But, if that learning is not captured, then uncertainty may not be reduced---may actually be increased---as the project advances.)

Just to push this argument a bit farther, if we accept that uncertainty (and thus risk) are at their lowest when the project is completed, then we could say that if we find another customer with exactly the same needs, then we could simply deliver that same solution to them as well and be confident that it will be an essentially zero risk undertaking. And if we find a third such customer, then we can deliver the same solution to them as well, and so on. This, of course, is nothing more than reuse.

Consequently, a key learning is that reuse reduces uncertainty, and thus reduces project risk. This seems very intuitive. Yet, as we have pointed out before, reuse is still a woefully underutilized practice. Stated differently, an effective software engineering process must make it easy to assemble solutions rather than build them.

But why should this be so?

One answer is that reuse is failure-free. (Remember that we have assumed that we are referring to the reuse of a solution for a customer with exactly the same needs as the initial customer. While recognizing that this condition---that is, both customers having exactly the same needs---is extremely unlikely, we can agree that to the degree that the targeted customers for reuse have similar needs, the more certain one can be that the solution will be as failure-free as possible.)

The fact that reuse for other customers exploits the investment that the team had already made in removing defects from the solution for the initial customer, yields another learning: Bugs, by their very nature, increase uncertainty and thus increase project risk.

Consequently, an effective software engineering discipline must (a) make it difficult to insert defects into the work-products in the first instance, and (b) failing that, make it easy to locate them so they can be removed before they are shipped downstream (i.e., to the next stage, process, or customer) where they have exponentially larger impacts.
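The "exponentially larger impacts" claim is often framed (purely illustratively here; the multiplier is an assumption, not a measured figure) as a repair cost that multiplies by a constant factor for each stage a defect slips past:

```python
# Illustrative only: assume each downstream stage a defect slips past
# multiplies its repair cost by a constant factor (the factor is an
# assumption chosen for the example, not an industry constant).
def repair_cost(base_cost, stages_slipped, factor=10):
    return base_cost * factor ** stages_slipped

# A defect that would cost 1 unit to fix at the step that created it:
for stages in range(4):
    print(stages, repair_cost(1, stages))  # prints 0 1, 1 10, 2 100, 3 1000
```

Under this toy model, catching the defect where it was introduced is three orders of magnitude cheaper than letting it reach the customer three stages later, which is the economic argument behind point (b).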

Well, now we are getting somewhere. So, to reduce risk we must reduce defects. OK, but what is a defect?

To properly understand the answer to this question we must first examine the nature of the software engineering process itself. That is, what is its essence? All software engineering processes (regardless of their vocabulary, tool sets, etc.) can essentially be viewed as a sequence of translation steps. Each step attempts to elaborate the problem and solution domains to a lower level of refinement, starting with some (usually informal) narrative description of the problem or opportunity, until a level is reached that can be directly implemented, typically a very rigorous, unambiguous software specification or source code. At this point, and only at this point, can the results of all this translation actually deliver any tangible value to the customer.

The precise number of these steps, their exact method of translation, the nature of these refinement levels, and the format and structure of each step's output is defined by the particular software engineering process being used. But, all such processes, nevertheless, share this step-wise translation and refinement characteristic.

Further, each of these translation steps comprises two distinct activities: representation and validation.

The representation activity involves the elaboration of the work products from their current level of refinement to the next lower level of refinement. For most software engineering processes, a typical elaboration sequence is analysis, design, specification, coding, etc.

The validation activity involves ensuring that each such translation step is complete, error-free, and relevant. That is, ensuring that the meaning of a representation at any level is exactly semantically equivalent to the meaning of the representation at the prior level. Typical validation activities include walk-throughs, inspections, testing, operational use, etc.

This step-wise translation and refinement technique is also referred to as the levels of abstraction technique since its primary engine is the refinement of the "problem" statement, starting with the level closest to the external customer and the operational world, and proceeding level by conceptual level until an abstraction level is reached that is sufficiently complete, unambiguous, and precise that it can be implemented on some processor.
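A minimal sketch of this step-wise view (all names and stages here are hypothetical toys, not a real method): each translation step pairs a representation activity with a validation activity, and the chain halts at the first step whose output is not semantically faithful to its input:

```python
# Hypothetical sketch of the translation-step model: each step has a
# representation function (elaborate to the next refinement level) and a
# validation function (check the output still means what the input meant).
def run_pipeline(problem, steps):
    work_product = problem
    for name, represent, validate in steps:
        candidate = represent(work_product)
        if not validate(work_product, candidate):
            raise ValueError(f"defect: step '{name}' broke semantic equivalence")
        work_product = candidate
    return work_product

# Toy levels of refinement: narrative text -> requirement list -> "code".
steps = [
    ("analysis", lambda text: text.lower().split(", "),
                 lambda before, after: ", ".join(after) == before.lower()),
    ("coding",   lambda reqs: {r: f"def {r}(): ..." for r in reqs},
                 lambda before, after: set(after) == set(before)),
]
print(run_pipeline("Login, Reports", steps))
```

The point of the sketch is structural, not the toy checks themselves: a defect, as defined next, is precisely a `represent` whose output fails its `validate`.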

Now, finally, we can get an answer to what is a defect, anyway.

A defect is simply the result of an error in one of these translation steps. (Where error means that the translation did not preserve the semantic integrity of the prior step.)

At this point it might be useful to point out where we believe the industry has diverged from an optimal path over the last few decades of research. Given this model, one can argue that there are two avenues of research that could improve the effectiveness of the software engineering process: (1) improve the various translation techniques at each step, or (2) use fewer translation steps. The industry has overwhelmingly focused on more and better translation techniques and tools (avenue 1), while very little has been done to find methods that would actually require fewer steps (and thus fewer opportunities for error), with, of course, the ultimate goal of reducing the steps to only one: Problem = Solution. This, we feel, is the avenue that offers the greatest potential for our industry.

Recapping the story so far,

  • Risk is a measure of uncertainty (reducing uncertainty reduces risk)
  • Reuse increases certainty (assemble rather than build)
  • Bugs decrease certainty (never ship defects downstream)
  • Fewer translation steps = fewer opportunities for error

These all appear to be important software engineering principles.

But, there is one more principle, perhaps the most important principle, in describing what we mean by value-driven, risk-adjusted solution delivery:

Customer usage of the solution increases certainty.

Everything else (meetings, prototypes, inspections, reviews, testing) is but a weak approximation to actual customer usage. We saw this principle in action earlier when we commented that uncertainty (and thus risk) is at its lowest when we have actually delivered the solution to the customer and they are using it to realize the benefits of its operation.

Now, of course, this occurs at the end of the project. But, why should it only happen then? Why not deliver all the way through the project's duration, starting at the very beginning? Certainly, the earlier we do this the better, right? And when we say deliver, we mean deliver customer value, not designs, or specs, or the myriad other work products that we have contrived to gain customer "buy-in" or approval. We mean deliver actual operational functionality. We mean deliver tangible solutions that they can start using right away.

Think how the software engineering world would change if we established an iron-clad rule that every project must deliver tangible operational value to the customer every 6 weeks. Every 2 weeks? Every day?

(This is very practical by the way for both very large and very small technology efforts. All that is required is for the project team to understand that they must begin on day one to think in terms of customer value and how to break that value up into chunks that can be incrementally delivered and continuously integrated into the evolving total solution.)

We have found that the higher the (value) delivery frequency, the greater the reduction in uncertainty and risk. As a result, a key question our project managers ask is "How many delivery cycles (iterations) are necessary to maximize value and reduce risk, given the additional energy each iteration consumes?" Striking that balance becomes the vital optimization issue for the project team.
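One entirely hypothetical way to frame that optimization question numerically (every coefficient below is an assumption invented for the example): treat total cost as the risk carried between deliveries, which shrinks as iterations increase, plus a fixed overhead paid per iteration, and minimize over the iteration count:

```python
# Illustrative model only: coefficients are assumptions, not project data.
# risk_cost falls with more iterations (uncertainty is retired sooner);
# overhead rises linearly (each iteration costs integration energy).
def total_cost(iterations, project_risk=100.0, overhead_per_iteration=4.0):
    risk_cost = project_risk / iterations  # carried risk shrinks with frequency
    return risk_cost + overhead_per_iteration * iterations

best = min(range(1, 21), key=total_cost)
print(best, total_cost(best))  # prints 5 40.0 under these assumed coefficients
```

The shape, not the numbers, is the lesson: too few iterations carries risk too long, too many drowns the team in per-iteration overhead, and the optimum sits between.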

Finally, frequent customer delivery and usage also helps transfer ownership of the solution very early to the customer (which has been found to be a critical success factor) as well as helping to close expectation gaps (the gap between what a customer thinks or believes a solution will do for them based on their current understanding of the discussions and documents they have seen together with their own biases and paradigms, and what the solution actually delivers in reality---which can only be ascertained by using it).