How Do We Know If We Are Getting Value for Our Software Vendors?


Scope of this Report

This report discusses what is meant by value, the process of sizing and estimating the software deliverable, and the benefits of those results:

  • What is “Value”?
  • Functional Value
  • More on the estimation process
  • Case study example
  • Conclusion

What is “Value”?

We can look at value for software development from vendors in terms of how much user functionality is being delivered by the software vendor. In other words, how many user features and functions are impacted as a result of a project. We can also consider whether the software deliverables were completed on time and on budget to capture “value for money” and the monetary implications of timeliness. Finally, we can see if the software project delivered what was expected from a user-requirements perspective and if it meets the users’ needs.

This last, more subjective, assessment of value gets into issues of clarity of requirements and the difficulties of responding to emergent requirements if the initial requirements are set in stone. It is outside the scope of this report but we believe the best way to address this issue is through Agile software development which is covered in several other DCG Trusted Advisor reports.

Functional Value

To quantify the software, we must first size the project. Of course, there are several ways to do this, with varying degrees of rigor and cross-project comparability. Function Points and Story Points both take a user’s perspective of the delivered software. Since Function Point Analysis is an industry-standard, best-practice sizing technique, we find that it is used more often for sizing at this Client-Vendor interface.

Function point analysis considers the functionality that has been requested by and provided to an end user. The functionality is categorized as pertaining to one of five key components: inputs, outputs, inquiries, interfaces and internal data. Each of the components is evaluated and given a prescribed weighting, resulting in a specific function point value. When complete, all functional values are added together for a total functional size of the software deliverable.

After you have established the size of the software project, the result can be used as a key input to an estimating model to help derive several other metrics, including but not limited to cost, delivery rate, schedule and defects. A good estimating model will include industry data that can be used to compare the resulting output metrics to benchmarks, allowing the client to judge the value of the current software deliverable under consideration. Of course, there are always mitigating circumstances, but at least this approach allows for an informed value conversation (which may result in refinement of the input data to the estimating model).

5 Key Components of Function Point Analysis
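To make the weighting-and-summing step concrete, here is a minimal sketch in Python. The weights shown are the standard IFPUG complexity weights for an unadjusted function point count; the project’s component counts are hypothetical:

```python
# Unadjusted function point count: each component instance receives a
# complexity weight, and the weighted counts are summed.
# Weights are the standard IFPUG values; the counts below are hypothetical.
WEIGHTS = {
    "external_input":     {"low": 3, "average": 4,  "high": 6},
    "external_output":    {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "average": 4,  "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_fp(counts):
    """counts: {component: {complexity: number_of_instances}}"""
    return sum(
        WEIGHTS[component][complexity] * n
        for component, by_complexity in counts.items()
        for complexity, n in by_complexity.items()
    )

# Hypothetical project: 10 average inputs, 5 average outputs,
# 4 low inquiries, 3 average internal files, 2 low interfaces.
project = {
    "external_input":     {"average": 10},
    "external_output":    {"average": 5},
    "external_inquiry":   {"low": 4},
    "internal_file":      {"average": 3},
    "external_interface": {"low": 2},
}
print(unadjusted_fp(project))  # 40 + 25 + 12 + 30 + 10 = 117
```

A real count also applies an adjustment factor based on general system characteristics; this sketch stops at the unadjusted total.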

Of course, if you can base your vendor contract even partially on a cost per function point metric, this provides an excellent focus on the delivery of functional value, although it is wise to have an agreed independent third party available to conduct a function point count in the event of disputes.

More on the Estimation Process

We have mentioned the importance of the estimation model and the input data in achieving a fair assessment of the functional value of the delivered software. We have also hinted that these will be issues to be discussed if there is disagreement between client and vendor about the delivered value. Hence, it is worth digging a little deeper into the estimation process.

The process for completing an estimate involves gathering key data that is related to the practices, processes and technologies used during the development lifecycle of the software deliverable. DCG analyzes the various project attributes using a commercial software tool (e.g. SEER-SEM from Galorath), assessing the expected level of effort that would be required to build the features and functions that had to be coded and tested for the software deliverable. The major areas for those technical or non-functional aspects are:

  • Platform involved (Client-server, Web based development, etc.)
  • Application Type (Financial transactions, Graphical user interface, etc.)
  • Development Method (Agile, Waterfall, etc.)
  • Current Phase (Design, Development, etc.)
  • Language (Java, C++, etc.)

Sophisticated estimating models, such as those built into the commercial tools, also consider numerous other potential inputs, including parameters related to personnel capabilities, the development environment and the target environment.

Given the size of the software deliverable and its complexity, represented by some or all of the available input parameters, we also need to know the productivity of the software development team that is developing the software. This can be a sensitive topic between Client and Vendor. We have often seen that the actual productivity of a team can differ from the reported productivity, as the Vendor adds people to the team to make a delivery date (bad) or adds trainees to the team to learn (good) – mostly, for value purposes, the Client only cares about the productivity that they will be billed for!

Once we have established the development team’s rate of delivery, or function points per effort month, we can then use that information along with all the previous information (size, complexity) to deliver the completed estimate.

The end result of the sizing and estimating process shows how long the project will take to complete (Schedule), how many resources will be needed to complete it (Effort) and the overall cost (Cost) of the software deliverable.
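As a simplified illustration of how size and delivery rate feed Schedule, Effort and Cost, consider the sketch below. Commercial parametric tools such as SEER-SEM are far richer than this; the back-of-the-envelope version only shows the arithmetic relationships, and all numbers are hypothetical:

```python
def simple_estimate(size_fp, fp_per_effort_month, team_size, cost_per_month):
    """Toy estimate from size and productivity.

    This is illustrative only: real parametric models adjust for
    complexity, personnel and environment, and schedule does not
    actually scale linearly with team size.
    """
    effort_months = size_fp / fp_per_effort_month    # total person-months
    schedule_months = effort_months / team_size      # naive: perfect parallelism
    cost = effort_months * cost_per_month            # fully loaded monthly rate
    return effort_months, schedule_months, cost

# Hypothetical project: 500 FP, 10 FP per person-month,
# a team of 5, at $15,000 per person-month.
effort, schedule, cost = simple_estimate(500, 10, 5, 15_000)
print(effort, schedule, cost)  # 50.0 person-months, 10.0 months, 750000.0
```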

Sizing and Estimating Process

Case Study Example

DCG recently completed an engagement with a large global banking corporation that had an ongoing engagement with a particular vendor for various IT projects. One such project involved a migration effort to port functionality from one application platform to a new platform. The company and the vendor developed and agreed on a project timeline and associated budget. However, at the end of the allocated timeline, the vendor reported that the migration could not be completed without additional time and money.

The company was reasonably concerned about the success of the project and wanted more information as to why the vendor was unable to complete the project within the agreed-upon parameters. As a result, the company brought David Consulting Group on board to size and evaluate the work that had been completed to date, resulting in an estimate of how long that piece of work should have taken.

The objectives of the engagement were to:

  • Provide a detailed accounting of all features and functions that were included in the software being evaluated
  • Calculate the expected labor hours by activity, along with a probability report (risk analysis) for the selected releases

DCG’s initial estimate was significantly lower than what the vendor was billing for that same set of development work. With such a significant difference in the totals, it was clear that something was off. DCG investigated the issue with the vendor to explore what data could be missing from the estimate, including a review of the assumptions made in the estimate regarding:

  • Size of the job
  • Degree of complexity
  • Team’s ability to perform

In the end, the company and the vendor accepted the analysis and utilized the information internally to resolve issues relevant to the project. As a result, the company also decided to use another software vendor for future software projects, resulting in significant cost savings.


This case study highlights a typical business problem wherein projects are not meeting agreed-upon parameters. In cases such as these, Function Point Analysis proves to be a useful tool in measuring and evaluating the software deliverables, providing a quantitative measure of the project being developed. The resulting function point count can also be used to track other metrics such as defects per function point, cost per function point and effort hours per function point. These metrics along with several others can be used to negotiate price points with current and future software development vendors to ensure that the company is receiving the best value for their IT investment.
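The per-function-point metrics mentioned above are simple ratios once a function point count is in hand. A minimal sketch (all figures are hypothetical):

```python
def vendor_metrics(function_points, defects, cost, effort_hours):
    """Normalize vendor performance by functional size so that
    projects of different sizes can be compared."""
    return {
        "defects_per_fp": defects / function_points,
        "cost_per_fp": cost / function_points,
        "hours_per_fp": effort_hours / function_points,
    }

# Hypothetical release: 500 FP, 25 delivered defects,
# $750,000 total cost, 8,000 effort hours.
print(vendor_metrics(500, 25, 750_000, 8_000))
# {'defects_per_fp': 0.05, 'cost_per_fp': 1500.0, 'hours_per_fp': 16.0}
```

Tracked release over release, these ratios give a quantitative basis for the price-point negotiations described above.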

The estimation process helps in keeping vendors accountable for the work they are producing by providing solid data on the realistic length of a project as well as the relative cost of the project. Quantitative estimates on project length allow companies to better manage their vendor relationships with increased oversight and an enhanced understanding of the expected outcome for their software deliverables.

Written by Default at 05:00

What is #NoEstimates?


Scope of this Report

Estimation is one of the lightning rod issues in software development and maintenance. Over the past few years the concept of #NoEstimates has emerged and has become a movement within the Agile community. Due to its newness, #NoEstimates has several camps revolving around a central concept of not generating task-level estimates. The newness of the movement also means there are no (or very few) large example projects that can be used as references.¹ Finally, there are no published quantitative studies comparing the results of work performed using #NoEstimates techniques to other methods. In order to have a conversation we need to begin by establishing a shared context and language across the gamut of estimating ideas, whether Agile or not. Without a shared language that includes #NoEstimates we will not be able to compare the concept to classical estimation concepts.


#NoEstimates Context:

There are two main camps of thought leaders in the #NoEstimates movement (the two camps probably reflect more of a continuum of ideas than absolutes). The first camp argues that a team should break work down into small chunks and then immediately begin completing those chunks (doing the highest value first). The chunks build up quickly to a minimum viable product (MVP) that can generate feedback, so the team can hone its ability to deliver value. This camp leverages continuous feedback and re-planning to guide work; luminaries like Woody Zuill often champion this position.

The second camp begins in a similar manner – by breaking the work into small pieces, prioritizing on value (and perhaps risk), and delivering against an MVP to generate feedback – but it also measures throughput. Throughput is a measure of how many units of work (e.g. stories or widgets) a team can deliver in a specific period of time. Continuously measuring the team’s throughput provides a tool to understand when work needs to start in order for it to be delivered within a period of time. Average throughput is used to provide the team and other stakeholders with a forecast of the future. This is very similar to the throughput measures used in Kanban. People like Vasco Duarte, who practice #NoEstimates from a lean or Kanban perspective, champion the second camp. We recently heard David Anderson, the Kanban visionary, discuss a similar #NoEstimates position using throughput as a forecasting tool. Both camps in the #NoEstimates movement eschew developing story- or task-level estimates. The major difference between them is the use of throughput to provide forecasting, which takes the place of the bottom-up estimating and planning found at the lowest level of the classic estimation continuum.
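The second camp’s throughput forecasting can be sketched in a few lines. This is an illustrative example, not a prescribed tool, and the numbers are hypothetical:

```python
import math
from statistics import mean

def sprints_to_finish(throughput_history, backlog_size):
    """Forecast remaining sprints from observed throughput.

    throughput_history: stories completed in each past sprint.
    Assumes remaining stories are groomed to a similar size.
    """
    avg_throughput = mean(throughput_history)
    # Round up: a partially used sprint still has to happen.
    return math.ceil(backlog_size / avg_throughput)

# A team that finished 6, 8, 7, 9 and 7 stories in its last five
# sprints, with ~30 similarly sized stories left on the backlog:
print(sprints_to_finish([6, 8, 7, 9, 7], 30))  # avg 7.4/sprint -> 5 sprints
```

Note that the forecast only works because the stories are broken into comparably sized chunks, which is why story grooming matters so much to this camp.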

Classic Estimation Context:

Estimation as a topic is often a synthesis of three related, but different, concepts: budgeting, estimation and planning. Because these three concepts are often conflated, it is important to understand the relationship between them. These concepts are typical of a normal commercial organization, though they might be called different things depending on your business model.

An estimate is a finite approximation of cost, effort and/or duration based on some basis of knowledge (this is known as the basis of estimation). The flow of activity conflated as estimation often runs from budgeting, to project estimation, to planning. In most organizations, the act of generating a finite approximation typically begins as a form of portfolio management in order to generate a budget for a department or group.

The budgeting process helps make decisions about which pieces of work are to be done. Most organizations have a portfolio of work that is larger than they can accomplish, therefore they need a mechanism to prioritize. Most portfolio managers, whether proponents of an Agile or a classic approach, would defend using value as a key determinant of prioritization. Value requires having some type of forecast of cost and benefit of the project over some timeframe. Once a project enters a pipeline in a classic organization, an estimate is typically generated. The estimate is generally believed to be more accurate than the original budget due to the information gathered as the project is groomed to begin.

Plans break down stories into tasks, often with personnel assigned, generate an estimate of effort at the task level, and sum those estimates into higher-level estimates. Any of these steps can (but should not) be called estimation. The three-level process described above, if misused, can cause several team and organizational issues. Proponents of the #NoEstimates movement often classify these issues as estimation pathologies. Jim Benson, author of Personal Kanban, established a taxonomy of estimation pathologies that includes:

1. Guarantism – a belief that an estimate is actually correct.

2. Swami-itis – a belief that an estimate is a basis for sound decision making.

3. Craftosis – an assumption that estimates can be done better.

4. Reality Blindness – an insistence that estimates are prima facie implementable.

5. Promosoriality – a belief that estimates are possible (planning facility)

Estimates by definition are imprecise and can only be accurate within a range of confidence; however, these facts are often “forgotten” in favor of the single-number contract. Acting as if any of these pathologies are true has generated the anger and frustration needed to fuel the #NoEstimates movement.

When done correctly, both #NoEstimates and classic estimation are tools to generate feedback and create guidance for the organization. In its purest form #NoEstimates uses functionality to generate feedback and to provide guidance about what is possible. The less absolutist “Kanban’er” form of #NoEstimates uses both functional software and throughput measures as feedback and guidance tools. Classic estimation tools use plans and performance to the plan to generate feedback and guidance. The goal is usually the same, it is just that the mechanisms are very different.

Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion

There are many levels of estimation, including budgeting, high-level estimation and task planning (detailed estimation). We can link a more classic view of estimation to the “Agile Planning Onion” popularized by Mike Cohn. In the Agile Planning Onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at the core of the onion. Budgeting is a strategic form of estimation that most corporate and governmental entities perform. Other than in its most extreme form, budgeting is generally not a practice being eschewed by #NoEstimates proponents. Estimation exists in the middle layers of the Agile Planning Onion (product and release layers). In classic estimation, these estimates are often developed using top-down techniques such as analogy or parametric estimation using function points, story points or tee-shirt sizing. #NoEstimates proponents leveraging Kanban techniques perform this level of estimation as forecasts, using average flow rates and queuing theory (an application of Little’s Law). The resistance at this level has generated the perception that size-based estimation at this level (and later, planning at the task level) generates several pathological behaviors within organizations. The final layers of the planning onion, iteration and daily planning, are generally the areas of highest concern to the #NoEstimates movement. While tasks may be identified, effort is not assigned.
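The application of Little’s Law mentioned above is straightforward: average lead time equals average work-in-process divided by average throughput. A minimal sketch, with hypothetical numbers:

```python
def average_lead_time(avg_wip, avg_throughput):
    """Little's Law: lead time = WIP / throughput.

    Units must match: if throughput is stories per week,
    the result is in weeks. Holds for long-run averages in
    a stable system.
    """
    return avg_wip / avg_throughput

# With 12 stories in process and 4 stories finishing per week,
# a newly started story takes about 3 weeks on average, so work
# must begin roughly 3 weeks before its required delivery date.
print(average_lead_time(12, 4))  # 3.0
```

This is how the Kanban camp answers “when must this start?” without a task-level estimate.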

It should be noted that while effort estimates are not done at the planning layers, or generally at the estimation layer, most teams adopt rules to break work down into predictable units. Rules or guidelines are often established that affect story and task size. The use of rules to govern granularity is one of the reasons flow measures can be used to forecast when work needs to begin in order to meet date or dependency requirements. Johanna Rothman stated in her article “The Case for #NoEstimates” that “when you deliver small, valuable chunks of work every day or more often” you can avoid estimation. The critical words are small and every day, which requires the team to understand how to groom stories to the desired granularity. Whether through the use of rules or feedback, using these techniques to groom stories could easily be construed as a crude form of estimation.


Standard Corporate Environments:

Organizational budgeting (strategy and portfolio): Continuous flow or other #NoEstimates techniques don’t answer the central questions most organizations need to answer, which include:

1. How much money should I allocate for software development, enhancements and maintenance?

2. Which projects or products should we fund?

3. Which projects will return the greatest amount of value?

While most budgets are scientific guesses, there is a need to understand at least some approximation of the size and cost of the work on the overall backlog.

High Level Estimation (product and release):

Release Plans and product road maps could easily be built from forecasts based on teams that have a track record of delivering value on a regular basis. The idea of #NoEstimates can be applied at this level of planning and estimation IF the right conditions are met. Conditions include:

1. Stable Teams
2. Agile Mindset (both team and organizational levels)
3. Well-groomed stories

The classic questions of when, what and how much can be answered in this environment for work done by single teams or by scaled Agile programs.

It should be noted that the example used by Woody Zuill, which applies the purest form of #NoEstimates (start, deliver, get feedback, and then do more), reflects an environment where all of these factors are present.

Task Level Estimation (iteration and daily):

Task-level planning is the base of #NoEstimates discussions. Stable teams that are able to consistently accept and deliver what is expected do not have any need to plan effort at a task level.

Commercial / Contractual Work:

Raja Bavani, Senior Director at Cognizant Technology Solutions, stated in a recent conversation that he thought #NoEstimates was a non-starter in a contractual environment.


Estimation is a form of planning. Planning is considered an important competency in most business environments. Planning activities abound, from planning the corporate picnic to planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will “it” be done? How much will “it” cost? What is “it” that I will actually get? Rarely does the question of how much effort it will take get asked, except as a proxy for how much it will cost. As the work progresses, the questions shift to whether we are going to meet the date, budget or scope. Answering those questions can be accomplished by any number of techniques. Using #NoEstimates techniques still requires most organizations to budget. Using #NoEstimates techniques requires breaking down stories into manageable, predictable chunks so that teams can predictably deliver value. The ability to predictably deliver value provides organizations with a tool to forecast delivery. #NoEstimates really isn’t “no estimating” . . . it is just estimating differently.


1. The C3 Project was used to hone and prove many of the Agile techniques (eXtreme Programming and wikis, for example) and acted as a training ground for many luminaries of the early Agile movement.

2. 4/27/15


How Can We Optimize Our SDLC to Maximize Demonstrable Value to the Business?


Scope of this Report

This report investigates how changes to the SDLC (Software Development Lifecycle) can improve the delivery of demonstrable value to the business. We consider how we might measure “demonstrable value” in a way that the business will understand. We review the theory of “Lean Software Engineering” and we suggest some ways that the theory can be applied to optimize different SDLCs. Finally, we discuss the importance of Value Visualization – requiring each story or requirement in the SDLC to have a demonstrable and highly visible set of business value criteria to drive tactical decision-making.

What is “Demonstrable Value to the Business”?

Basically, most software development organizations are driven by demands (or possibly polite requests) from the business(es) that fund the software development. This is not all the work they do, because some work is self-generated, either by the software development group or the rest of IT, but, generally, this second category of work still has to be accepted for funding by the business and prioritized against business needs.

So, “Demonstrable Value to the Business” could be simply delivering to the business what it asks and doing so in accordance with the “iron triangle” of "in scope, on time and on budget." In “The Business Value of IT,” Harris, Herron and Iwanicki argue that “business value” tends to be in the eye of the beholder. While this is an important ingredient of the definition, it is not sufficient. Some rigour must be applied beyond simple customer satisfaction because, at the end of the day, the success of an organization will almost always be judged in monetary terms. Even non-profits must be able to stick within the available budget while delighting customers.

Inevitably, the best way to introduce objectivity into a business value discussion is to follow the money, however difficult and apparently unjustified this may seem to the participants. Certainly, there are value types that cannot be measured in dollars and cents, but we would argue that such situations are relatively rare in the business of software development. Hence, while we would always include customer satisfaction when assessing business value, we believe that “demonstrable value” requires the objectivity of financial metrics.

Lean Software Engineering

In our November 2014 DCG Trusted Advisor report, we investigated the meaning of the term “Lean Software Engineering.” That report is a good starting point and recommended reading for this report. To summarize the key points of relevance to this report:

  • The Poppendiecks (Mary and Tom) have proposed seven principles of Lean Software Development:
    - Eliminate Waste
    - Build Quality In
    - Create Knowledge
    - Defer Commitment
    - Deliver Fast
    - Respect People
    - Optimize the Whole
  • Lean software engineering is not an SDLC but an optimization philosophy that can be applied to all SDLCs. The following practices are often associated with implementations of the Lean philosophy in software engineering:
    - Visual Controls including Kanban Boards
    - Cumulative Flow Diagrams
    - Virtual Kanban Systems
    - Small batch sizes
    - Automation
    - Kaizen (or continuous improvement) Events
    - Daily Standup meetings
    - Retrospectives
    - Operations Reviews.
  • Lean principles and practices are applicable to waterfall SDLCs and embodied in Agile SDLCs, although in neither case is there usually 100% compliance.

In considering lean product development flow, Don Reinertsen identifies twelve problems with traditional product development. Reinertsen was referring to traditional waterfall implementations of product development, but, as Figure 1 shows, some of these problems have been addressed by typical Agile implementations, while others are addressed only implicitly. For example, consider the “absence of WIP constraints”: few Scrum implementations have specific WIP constraints, but constraining team size and constraining sprint duration implies a WIP constraint.

Opportunities for Optimization

Of course, there are many variations of the generic SDLC models that arise from local customization to address either the problems we have described or other local issues. Hence, for example, our scores here might be modified (up or down!) by an Agile implementation that is not textbook Scrum but some combination of Lean/Scrum/XP. The same is true for modified versions of waterfall which, for example, achieve a strong cadence by pipelining requirements/design, development and testing into, say, three-month chunks that endlessly repeat. For this reason, in Appendix A, we have added some commentary to the simple scores of Figure 1.

Figure 1: Problems with traditional approaches to Product Development (after Reinertsen) and the degree to which they are addressed today by typical Waterfall (Blue) and Agile (Red) SDLCs. A Reinertsen problem that has been fully-addressed in an SDLC would score a ‘3’.

SDLC Lean Problems

So, with the stipulation that the actual SDLC model you are looking at might not score exactly the same way as we have suggested in Figure 1, the greatest opportunities for optimization leap off the chart:

  • For both SDLCs:
    - Quantify the economics and make them visible at all levels so that they can be used for tactical decisions
    - Make queues explicitly visible at all levels
  • For waterfall:
    - Implement Agile, or
    - Implement a Lean-Kanban system or some other system that:
      - Sets Work in Process (WIP) limits on process steps
      - Sets small batch sizes
      - Decentralizes control by implementing work-pull instead of work-push
  • AND, in either case, focus on optimizing end-to-end value throughput instead of focusing on resource utilization that may maximize local productivity at the expense of throughput.

How? The Economic View - Value Visualization

We need to associate a set of economic information with each requirement or story that we want to flow through our SDLC. We propose that the minimum set in Figure 2 be added to each requirement/story in an easily visible (see Figure 3) or electronically accessible way.

Figure 2: The Value Visualization Economic Metrics set for every Requirement/Story

Economic Metrics

As we have learned from Kanban boards in Agile, visualization is very powerful for team decision-making, and so it makes sense to associate the economic value data of Figure 2 with a visual model for which we provide a template in Figure 3 and an example in Figure 4.

Value Visualization Trains

There are some challenges in carrying value visualization data through requirement or story decomposition, because the business cases that drive the requirements and stories often map to quite high-level requirements and epics, rather than the “small batch” level requirements and stories that we want to see flowing through development. We have learned from at least one client that it does not make sense to have economic value data at the lowest requirement/story level because of the difficulty of breaking up the high-level economic information into ever smaller units. Hence, at some level of decomposition it will be necessary to stop breaking up the economic data and follow three simple rules:

  • Use T-shirt sizes for Value (e.g. High, Medium, Low) at the lowest levels of story.
  • These T-shirt sizes should be inherited from, and the same as, the lowest-level parent requirement/story for which economic data was available.
  • Implement a control mechanism to ensure that below the lowest level of economic data, all child requirements/stories are connected and prioritized together as much as possible.

There must be an associated process for the third rule. It is possible that a subset of the stories associated with a particular set of economic data could be deployed before all the stories associated with that data are ready for deployment. For example, in some cases, 90% of the value could be realized by deploying 75% of the child stories. In such cases, decisions need to be made and executed about the value of the remaining “orphaned stories.” In short, are they still needed, or can they be removed from or deprioritized in the SDLC flow?
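The T-shirt-size inheritance rule described above amounts to a walk up the decomposition tree until an ancestor with explicit economic data is found. A minimal sketch; the data structures (`parent_of`, `value_of`) and item names are hypothetical:

```python
def inherited_value(story, parent_of, value_of):
    """Return the T-shirt value for a story, inheriting it from the
    lowest-level ancestor that carries explicit economic data."""
    node = story
    while node is not None:
        if node in value_of:
            return value_of[node]
        node = parent_of.get(node)   # walk up the decomposition tree
    return None  # no ancestor carries economic data

# Only the epic was sized economically; the feature and story inherit.
parent_of = {"story-42": "feature-7", "feature-7": "epic-1"}
value_of = {"epic-1": "High"}

print(inherited_value("story-42", parent_of, value_of))  # High
```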

The Use of Lean-Kanban

While Agile SDLCs such as Scrum and SAFe are designed to embody many lean principles, waterfall was not originally designed with lean principles in mind. However, that does not mean that moving to Agile is the only course of action available. If there are good reasons for an organization to stick with waterfall for part of its operations, then the application of Lean-Kanban principles can help. After all, Lean principles originally emerged in manufacturing environments that tend to be waterfall in nature (a quick reminder/warning that lean manufacturing and lean software engineering are not the same thing, as we have discussed in earlier DCG Trusted Advisor reports).

To move from classic waterfall to a Lean-Kanban model, the following minimal steps need to be taken:

  • Create a product backlog of requirements/stories in priority order
  • [Ideally but not necessarily at first] Add the Value Visualization Data to each requirement/story
  • Create a “Ready” step as the first step in the flow to ensure that requirements/stories do not enter the SDLC until they are ready to be worked. Put a WIP limit of less than 10 on the Ready step
  • Create a “Ready to Deploy” step as the last step in the flow with no WIP limit
  • For each step in the waterfall SDLC (e.g. Analysis, Design, Code, Test), create two sub-steps: “In Progress” and “Done.” Each whole step should have a WIP limit of less than 10.
  • Allow the staff in each step to pull requirements from the preceding “Ready” or “Done” step if, and only if, bringing in that requirement/story does not exceed their WIP limit. Staff should pull by highest value or WSJF (Weighted Shortest Job First).
  • Use Cumulative flow charts to track and predict bottlenecks and shift resources or WIP limits to optimize value flow.
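The work-pull discipline in the steps above can be sketched as a toy model. The step names and WIP limits here are illustrative, not prescriptive:

```python
from collections import deque

class Step:
    """One column on the Kanban board (e.g. Ready, Design, Code)."""
    def __init__(self, name, wip_limit):
        self.name, self.wip_limit = name, wip_limit
        self.items = deque()

    def has_capacity(self):
        return len(self.items) < self.wip_limit

def pull(downstream, upstream):
    """Work-pull: the downstream step takes the next item from
    upstream only if its own WIP limit allows it (never pushed)."""
    if downstream.has_capacity() and upstream.items:
        downstream.items.append(upstream.items.popleft())
        return True
    return False

ready = Step("Ready", 5)
design = Step("Design", 2)
ready.items.extend(["story-1", "story-2", "story-3"])  # priority order

pull(design, ready)          # succeeds
pull(design, ready)          # succeeds; Design is now at its limit
print(pull(design, ready))   # False: the WIP limit of 2 blocks the pull
```

Queue lengths in each `Step` over time are exactly what a cumulative flow diagram plots, which is how the last bullet’s bottleneck tracking connects to this model.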


We are capable of building and running effective waterfall SDLCs, but they are not necessarily efficient at optimizing value flow. Worse, waterfall SDLCs are not good at visualizing the data needed to improve value flow, and they tend to be poor at implementing lean principles. Agile SDLCs are much better at implementing Lean principles and so improve value flow, but even Agile SDLCs are not optimal if they do not have a way to include some basic economic data in their tactical decision-making.


  • “The Business Value of IT,” Harris, Herron and Iwanicki, CRC Press, 2008.
  • DCG Trusted Advisor Report, November 2014, “What is meant by the term ‘Lean Software Development’?” /insights/publications/ta-archives/lean-software-development/
  • “Implementing Lean Software Development – From Concept to Cash,” Mary and Tom Poppendieck, Addison Wesley, 2007. [Or, indeed, any book by Mary and Tom on this topic!]
  • “The Principles of Product Development Flow – Second Generation Lean Product Development,” Donald G. Reinertsen, Celeritas Publishing, 2009.

Appendix A


DCG Publishes Third Volume of Trusted Advisor Anthology

Trusted Advisor

We've got news! That's right, the latest Trusted Advisor anthology is now available!

The third volume of the popular Trusted Advisor book series, “DCG Trusted Advisor Anthology 2015 Edition,” features 12 reports, including:

  • What is Excellence in Software Development? And Why Do Some Benchmarkers Think There Is Only One Answer?
  • Is Calculating ROI Meaningful for Agile Projects?
  • How Do I Size My Non-Functional Software?

Trusted Advisor is DCG’s members-only research forum (membership is free and open to all IT professionals). Members can submit IT-related questions to the forum, and then, each month, members can vote on the question that they would like to have researched. The question with the most votes is then researched by the DCG team. DCG produces a short report answering the question. The reports are all available on the DCG website, to both members and non-members. 

Each anthology contains all of the reports generated from the previous year. All of the anthologies are available for purchase on Amazon. More information about Trusted Advisor, including how to join, is available here.


Why Are So Many of Our Projects Late, Over Budget or Deliver Less Than Was Promised?

Scope of this Report

This report identifies evidence that projects are late, over budget or deliver less than promised. It then considers various potential causes for these failures, including culture, process and estimation, and how getting these things right can contribute to success.

What evidence is there that projects are late, over budget or deliver less than promised?

Most organizations develop business cases to initiate change¹. These business cases require a narrative explanation of the change and the associated financial return on investment.

Dan Galorath, noted software estimation expert, cites government data: “A recent US government report showed 81% of budget or $57 billion in IT projects in danger of failing. Detailed reports on the hearings can be found here. Of 413 IT projects identified by OMB and federal agencies NEARLY 80% OF THEM WERE IDENTIFIED AS HAVING BEEN POORLY PLANNED. The scorecard for IT projects shows much progress but much work left to do.”

The PMI Pulse Report 2014 pointed up some stark statistics. “Only 56 percent of strategic initiatives meet their original goals and business intent. This poor performance results in organizations losing $109 million for every $1 billion invested in projects and programs. High-performing organizations successfully complete 89 percent of their projects, while low performers complete only 36 percent successfully.”

Culture, Process or Estimation Issue?

Are process, culture or estimation responsible for the failures? Any or all of them can have a significant impact on a project’s performance. The tendency is always to blame the supplier for the failure: ‘Company X failed to deliver the ABCD project on time for the XYZ government’ is not an uncommon headline. In truth, it is normally a combination of all three.

We need to look at potential sources of failure from several directions:


  • Culture: Is the organisation working as one towards a common, transparent, communicated goal?
  • The Governance Process: What decisions need to be made, who makes them and are they tracked to completion?
  • Backlog Prioritisation and Change Control: Was there a product backlog or its equivalent effectively managed and prioritised?
  • Estimation: Are estimates based on facts or opinions?
  • Development model: Agile, iterative or waterfall methods are all found, but are they effectively policed?

Weaknesses or failures in any of the above will put a project at risk. It is incumbent on both the supplier and Business teams to ensure that there are strong robust processes in place to de-risk the project.


The PMI Pulse report consists of responses to a voluntary questionnaire and is therefore self-selecting, but it is a valuable resource for discussion of what seems to be a constant refrain over the years.

The clear message of the report is that the most successful organisations in terms of project delivery have strong processes backed by effective measurement and project management offices.

Success comes on the back of success. Companies with effective traditional development methods can adapt quickly and effectively to agile methods. The key here is the word “effective” – Kotter suggests that without urgency, transformation cannot happen – and change is harder for some companies than others. The whole organisation adopts agile because of effective leadership, visible sponsors and a commitment to succeed. Such organisations are either consciously or culturally Lean. To them facts influence decisions; changes to process are tracked and monitored, and successful change remains while unsuccessful change is found early and discarded. During projects deviations from the norm are analysed and corrections are made. Projects seldom fail.

Contrast that with poor-process organisations that change methods to follow the latest trends. For them a change in process is an excuse for chaos. Typically we see blame cultures, with poor communications and absent sponsors. Use of metrics is poor and often concentrates only on cash and time to market. If a project falls behind schedule the typical response is to throw people at it. Hordes of heroes are bound to help. Once again we have people repeating the same behaviour expecting a different result. That’s the definition of insanity often, incorrectly, attributed to Einstein. Whoever said it was right. In this instance insanity comes from not analysing reasons for failure but looking for quick “obvious” answers, and doing that repeatedly.

Process – Governance

We look for an IT governance framework which has similar characteristics to the model proposed by Weill and Ross in 2004:

  • Identify what decisions you need to make;
  • Identify who makes those decisions – an individual;
  • Identify how those decisions will be made e.g. what data is needed, who else should contribute opinions.

Effective organisations have clear governance based on effective leadership and visible sponsorship. They avoid committee decision making and clearly communicate decisions. Crucially they have effective monitoring and measurement activities so that deviations from course are made knowingly or are corrected quickly with little drama.

Process - Development methods

All development methods demand process. Some, such as waterfall, can be process intensive. Agile by contrast is process light, but it’s not process absent. Rather, we can say that Agile is less prescriptive.

Many effective organisations use waterfall or iterative development with defined methods backed by strong metrics and effective reporting. The best performers can adapt to agile when it fits the situation and they continue to be successful.

Agile works best when thought of as a lean process and that means that once you commit to build some user functionality you should do it only once. That means taking a disciplined approach to defining the minimum marketable features, refining the product backlog and delivering sufficient documentation to enable maintenance. It can become a game in ineffective organisations.

Two conversations we have had recently underscore the need for discipline in the use of agile methods. In both cases the productivity related to delivery of functionality was found to be low. When probed, the reason was that both organisations were content to develop and re-develop the same functionality a number of times, to get it right in the end. The business and the development teams seemed to accept that delivering a business change using agile methods allowed for infinite changes of mind. This adoption of agile is not cost effective and gives rise to concern about how effectively agile methods are being used. Instead of the oft-quoted “paralysis by analysis” we see in waterfall, we have “endless enhancement,” and in either case a lot of time, energy, creativity and money is wasted.

Again the lesson is that effective development methods work to their maximum potential when the right amount of control and monitoring of progress is applied. Measurement and reporting are often seen as an expensive overhead. We have found that effective measurement and reporting consumes about 1% of project budget (1.5% to 2% on small projects and as low as 0.5% on major programmes). Companies that want to use facts to manage recognise this as money well spent, as it enables effective management. Those that look for easy cost-cuts generally take out “overhead” first, preferring to chart their course through the icebergs in the dark with a small rudder.


Estimation

Estimating is difficult, and the key thing to remember is that it is only an estimate. Too often estimates become written in blood as the initial and final answer. Estimates should be living things throughout the project and should be revisited when either something significant changes in the project or when we know sufficiently more to refine the estimate.

In a waterfall development, estimates should be performed at least at Requirements, at the end of High-Level Design and at the start of Construction, and reviewed at implementation. Although Agile doesn’t have the traditional phases, early estimates are still important, and just as useful.

However, estimates can be revisited at any time during the lifecycle, such as when requirements shift or other variables come into play that will impact originally stated outcomes.

Failure to review and maintain project estimates means you can’t manage risk or use any contingency.

Early Estimates and the Challenges

Early in the project lifecycle, cost and schedule estimates are generated based on the best information available. As is often the case, this information is lacking in detail and is most likely ambiguous. This presents several problems for the estimator. For example, ambiguous requirements make it difficult to determine a proper size.

However, by making and documenting stated assumptions, the estimate produced early in the lifecycle can be effectively managed and customer expectations can be properly set.

What Should Estimates Be?

Estimating is a risk assessment activity. The wise project manager can use a well-developed project estimate to properly set and manage end user expectations. Transparency is the watchword here. By sharing stated assumptions with the user and by helping them to understand the basis for the estimate, you are engaging them and making them share in the accountability for the estimate.

For example, if they are aware that the estimate is based on their requirements and the general feeling is that the requirements are somewhat incomplete then it can safely be assumed that another estimate will be required when more data is available and that the new estimate will probably be different from the original estimate.

“I want it delivered NOW!”

This dynamic shows itself, not so subtly, when management doesn’t really want an estimate at all; they want the software delivered when they want it delivered.

How many times have we seen a situation where the sales/marketing group, the business users or even our own senior management has requested a software solution with a fixed delivery date already attached? And even though the user or senior manager may ask for an estimate, they really aren’t interested in the response unless they are told what they want to hear; alternatively, the supplier is simply told what the answer already is.

In this type of management environment, the IT organization doesn’t invest much time in its estimating practice, because it doesn’t realize the power of good estimation as a vehicle to properly manage the project and/or the customer’s expectations. The net result is the IT organisation’s best endeavours in a blind attempt to deliver the requirement, and usually a project starting with a high tariff of risk.

Expert Estimates?

One perceived problem with expert estimates is that of memory. Estimates based on memory are subject to the cognitive bias of the estimator; involving others therefore provides a balance that helps to cancel the potentially negative impacts of bias.

Single expert estimates tend to be either too high or too low, depending on the estimator responsible and the culture. For example, the “Scotty from Star Trek” syndrome creeps into play with expert estimates: the seasoned estimator estimates high, knowing it will be corrected anyway by the PM, and the ultimate result may be a sensible figure.

The other typical scenario is that it is easy for an expert to estimate how long a piece of work would take him to do, but it is much more difficult for him to estimate for less experienced colleagues, so we tend to get under-estimates.

Don’t accept just a number for the estimate: three-point estimates and the estimate assumptions are a key way to review and validate estimates.

Use of three-point estimating techniques also allows a more reasoned view of the estimate. In reality, when we estimate we usually think of a range. How long does it take you to get to work each day? It might be 30 minutes on average, but 20 minutes with quiet roads and 50 minutes in rush hour. Combining all three estimates (Optimistic of 20, Average of 30 and Pessimistic of 50) gives you a much better view of the risk and what contingency you may need to use.
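The report simply says to combine the three estimates; the PERT weighting below is one common way to do that, shown here as a small sketch using the commute example:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT-weighted mean and standard deviation for a single estimate.

    The 1-4-1 weighting is the classic PERT convention; the report itself
    does not prescribe a specific formula.
    """
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# The commute example from the text: 20 / 30 / 50 minutes.
mean, sd = three_point_estimate(20, 30, 50)
# mean = 190/6 ≈ 31.7 minutes, sd = 5.0 minutes
```

The standard deviation gives a direct handle on contingency: a one-sigma buffer on the commute is five minutes, which is the kind of reasoned risk view the three numbers alone do not convey.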

Expert estimates are a key estimate to obtain but there is great value in obtaining another estimate to reconcile this estimate against.

In the Agile space, the benefit of normalizing various experts’ estimates is often formalized through “planning poker,” which constrains the estimate values that experts are allowed to choose and then requires the experts to justify and ultimately reconcile their estimates with each other. Given how effective it is in Scrum planning, the same process could and should apply more widely to expert estimates in general.
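The two mechanics of planning poker, constraining the allowed values and forcing divergent votes back into discussion, can be sketched like this. The deck values and the spread threshold are illustrative assumptions, not part of the report:

```python
# A common modified-Fibonacci planning poker deck (an assumption; decks vary).
POKER_VALUES = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_card(raw_estimate):
    """Constrain a raw expert estimate to the nearest allowed card value."""
    return min(POKER_VALUES, key=lambda v: abs(v - raw_estimate))

def needs_discussion(votes, max_spread=2):
    """Flag a story for re-discussion when the experts' cards are too far
    apart on the deck (two or more positions, by this sketch's threshold)."""
    cards = sorted(set(snap_to_card(v) for v in votes))
    lo = POKER_VALUES.index(cards[0])
    hi = POKER_VALUES.index(cards[-1])
    return hi - lo >= max_spread
```

Snapping removes false precision, and the spread check is what triggers the justify-and-reconcile conversation that gives the technique its value.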

How Else Should We Address the Estimating Issue?

We can view popular estimation techniques through two separate lenses: the data they draw on (experiential or historical) and the way they are produced (algorithmic or collaborative). Many of the experience-based techniques leverage collaborative techniques to combat their perceived weaknesses.

Historical data is used both in model based and expert estimates. Estimating without memory of the past is not possible. The bigger issue is whether models derived from historical data are clearly superior to expert estimates. If you are trying to remove the need for expert estimators the answer is unfortunately ... no.

Finally, what is true is that expert estimates require a level of expertise that is sometimes not readily available, in which case tools must be leveraged to validate estimates. If estimates are important and the required level of expertise is not available, then the choice is far starker: estimates generated from models leveraging historical data in calibrated tools are the only logical choice.

Volatility and the Impact of Change?

Another key characteristic of a failing or delayed project is the degree of volatility and change. Studies show the cost of change rising exponentially in the later stages of a project, particularly with a Waterfall methodology.

Agile is designed to accommodate change, but change can occasionally become an excuse: “I don’t need to know what I want; I can keep changing my mind and we’ll be fine if we use Agile” can be a client view. This can lead to the same “priority” story being redeveloped multiple times until the client has worked out what they want, and the Agile project fails because it runs out of time or money. Is the methodology at fault? Of course not, but perhaps a requirements or design “spike” could have been implemented with the client to help them clarify their ideas.

Governance of change is the key: if you know what the business is going to look like at the implementation of the project, then the project will control change and is much more likely to succeed.

Deviation from the Norm?

Often, changes to applications with regular release cycles tend to be of a similar size with the same team doing the work. The expert estimates roll into complexity matrices and sensible size metrics, and all should be well as long as the estimators continually update their historical records and test the results against external databases. Yet we still see the same mistakes repeated in the hope that a miracle will happen.

The challenge is when we deviate from the norm. A compressed timescale or a significant increase in size will invalidate the current estimating methodology in the project. People will assume we can deliver at the same rate, and the project is set to fail.

Lawrence H. Putnam published an empirical software estimation model in 1978. In the formula noted below, Size is the product size (whatever size measure is used by your organization is appropriate; Putnam uses ESLOC, Effective Source Lines of Code, throughout his books):

Size = Productivity × (Effort/B)^(1/3) × Time^(4/3)

  • B is a scaling factor and is a function of the project size.
  • Productivity is the Process Productivity, the ability of a particular software organization to produce software of a given size at a particular defect rate.
  • Effort is the total effort applied to the project in person-years.
  • Time is the total schedule of the project in years.

In practical use, when making an estimate for a software task, the software equation is solved for effort:

Effort = (Size / (Productivity × Time^(4/3)))^3 × B

Parametric estimating toolsets understand the likely impact of such deviations and use this equation, or their own bespoke calculation engines, to deal with it. They can make a major contribution to increasing the chance of success in a project by setting realistic expectations.
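The solved-for-effort form of Putnam’s software equation can be sketched as follows. The size, productivity and B values are purely illustrative, not calibrated data; a real toolset would calibrate them from historical projects:

```python
def putnam_effort(size, productivity, time_years, b=0.3):
    """Putnam's software equation solved for effort (person-years):

        Effort = (Size / (Productivity * Time^(4/3)))^3 * B

    size:         product size (e.g. ESLOC); productivity: process
    productivity; b: size-dependent scaling factor (0.3 is an arbitrary
    placeholder here, not a recommended value).
    """
    return (size / (productivity * time_years ** (4 / 3))) ** 3 * b

# Illustrative only: same size and process productivity, two schedules.
relaxed = putnam_effort(100_000, 5_000, time_years=2.0)
compressed = putnam_effort(100_000, 5_000, time_years=1.0)
# Effort is proportional to Time^-4, so halving the schedule
# multiplies effort by 2^4 = 16.
```

This fourth-power sensitivity to schedule is exactly why a compressed timescale invalidates an estimating methodology built on normal delivery rates.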

Tracking and Monitoring

The final area to consider is effective tracking and control of the project. Continuous review of the project’s velocity (Agile) or rate of delivery (size measure per time period) will indicate the project’s real status and chance of success. For example, if the development team reports the project is 80% complete three weeks in a row, then the project is likely in trouble.
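The 80%-three-weeks-running warning sign is easy to automate. A minimal sketch, assuming weekly percent-complete reports are available as a list:

```python
def looks_stalled(percent_complete_by_week, window=3):
    """Flag a project whose reported percent-complete has not moved for
    `window` consecutive weekly reports (the three-week window matches the
    report's example; the function name and signature are hypothetical)."""
    recent = percent_complete_by_week[-window:]
    return len(recent) == window and len(set(recent)) == 1
```

The same idea applies to rate-of-delivery data: a flat trend in delivered size per period is the signal to investigate, not the headline percentage itself.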

Again the PMI report indicates that organisations that deliver successful projects, irrespective of the methods used, tend to have functioning and effective Programme Management Offices (PMOs), and these PMOs gather and analyse data effectively to support the successful delivery of business initiatives.


Conclusion

There is no need for projects to be delivered late, over budget or with less scope. The belief that this is inevitable comes from organisations that don’t understand that the light in the tunnel is actually a train coming at you at full speed.

Successful delivery of a project requires a culture of effective business processes, effective estimation and sound development processes.

Strong business processes linking effective business vision, realistic expectations and close communication between the client and the supplier are key elements in successful delivery.

Development methods are only as good as the organisation that uses them. Whatever end of the process spectrum, it’s the effective use of the end to end processes that delivers the goods without drama.

The fundamental transformation of the idea to money comes with the estimate. Good estimates are living things that change with the circumstances.

Effective estimation requires an organization to commit resources to the development and execution of a well-defined software estimating practice, backed by a PMO that delivers effective data analysis.


  1. Dan Galorath on Estimating Blog,
  2. PMI Pulse Report 2014:
  3. “IT Governance: How Top Performers Manage IT Decision Rights for Superior Results,” Peter Weill and Jeanne Ross, 2004.
  4. “DCG Works With Leading Customer Management Company to Implement Measurement and Governance Program for Data-based Decision Making,” /insights/publications/measurement-program-for-data-based-decision-making/
  5. “Is there a Business Case for Better Estimation?” DCG Trusted Advisor Report, July 2013.
  6. “What are the benefits if any of estimating my software projects through the use of a vendor developed estimating model?” DCG Trusted Advisor Report, July 2014.
  7. Putnam Model

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!