Projects are successful when the customer says they are. Customers say a project succeeds when it meets their expectations. Expectations are personal visions. The test drive is the most effective way to determine whether a project's results meet a customer's personal vision and expectations.
Every other technique, at best, is a distant second.
Consequently, to achieve project success, the project manager's overarching mission must be to find a way to deliver tangible value to the customer soon (the test drive), and often, and to then incorporate the learning extracted from each such mini-delivery so that it informs the next mini-delivery. This means we want lots of test drives, each one necessarily revealing, like nothing else really can, the conceptual distance between the customer's personal vision and the actual experience itself. The project manager's goal: Reduce that distance to zero.
The only difference between the first mini-delivery, the second, the fifteenth, and the last mini-delivery is that nothing follows the last one. You stop delivering because you have now met the customer's expectations---and you know it, because the customer tells you so.
Another project successfully completed.
This seems simple and straightforward.
Yet, we know that the vast majority of time, effort, and talent is overwhelmingly spent on work that has little (and often nothing) to do with this overarching mission. In fact, in most projects, a preponderance of activities actually delay delivery, reduce the number of deliveries, or make all this frequent delivery very difficult and expensive. All you have to do is look at your last project and review its time sheets and actual progress to see this unfortunate truth firsthand. (For example, how much time and effort was spent before the first actual delivery of value to the customer? Between the first and second? …)
Also, when we speak of test drives and delivery of value, we don't mean reading documents, reviewing requirement models, or having someone sit in front of a non-functional "prototype" that is often nothing more than a partially navigable slide show. Those don't deliver value to the customer. (They may deliver value to you, but not to the customer.)
A test drive is just that: A customer actually inhabiting and experiencing a controlled portion of their new operational reality. That is, operating the technology, interacting with business processes, solving problems, connecting with the real world, getting the "feel of the road".
This is not to say that those other work products are not necessary and useful. They can be. It is only to say that they offer very little insight into the customer's expectations and the distance between their vision of what the new technology should do for them, and what it will actually do---and how it will actually do it.
The upshot of this is again very straightforward. If we agree that project success is defined solely by the customer and their expectations of what the project's results must do for them, and that the window into those expectations is the test drive, then we should naturally see a preponderance of tools and techniques that are designed to
- Decompose the problem/solution into its constituent chunks of value
- Package these value chunks into mini-deliveries suitable for test drives
- Manage the assembly and construction of each such mini-delivery package
- Manage the test drive itself and the learning derived from it
- Validate the evolving total solution
These are the activities that should comprise the bulk of time and effort on the project. Everything else we do in the project should be subservient to these activities. We mean subservient in two senses: If it doesn't advance these activities and thus the project manager's overarching mission, discard it; if it does, make it as simple and fast as possible.
In other words, if the test drive is the engine that drives success, then we should see this reflected in approaches that make it easy to define, execute, and learn from these test drives. Further, we should see simple tools for efficiently managing this repetitive cycling of incremental solution delivery.
Ask yourself this question: For a recent project or for some typical, representative project in your company, what was its average test drive cycle time?
The test drive cycle time can be approximated by dividing the project's total calendar time (starting with any planning, requirements, and proceeding through the final implementation step) by the number of tangible value delivery events (i.e., real test drives by real customers, users, knowledge workers).
What must happen to cut this number in half each year over the next five years?
That, my friends, is a business-IT strategy worth investing in.