Why Do We Never Have Time to Test?


Scope of this Report

This report discusses the time constraints of testing, their impact on several testing stakeholders, and possible ways to mitigate the problem. It includes:

  • Statistics on testing length.
  • Who are some of the stakeholders for software testing?
  • What kinds of delays do testers frequently face?
  • Making more time to test.

Testing Length

The following estimate of average testing length is drawn from The Economics of Software Quality, by Capers Jones and Olivier Bonsignour, and is based on the authors’ clients’ average test cases per function point and time per test case. The calculation covers the types of tests used by 70% or more of projects and provides the following average for a 100 function point (FP) project. It assumes a thorough test of the system using an appropriate number of test cases for the size of the project.

Testing Times

Therefore, between one and two months might be spent testing the sample project. Note that a 100 FP project is relatively small; projects of ten thousand function points or more, especially for new software, are not uncommon. Testing these larger projects to the same degree could take a year or more, even if testing time increased in a linear fashion. In practice, testing time increases faster than linearly as the project grows. To completely test all combinations of possible processes for a very large project would take far more time than is available; for practical purposes, exhaustive testing of such projects is impossible.
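The arithmetic behind such an estimate is simple in principle. The sketch below is illustrative only: the test-case densities and hours per case are assumptions chosen to land in the one-to-two-month range cited above, not the Jones and Bonsignour client data, but it shows how testing time scales with size under a linear assumption.

    # Hedged sketch: the densities and hours-per-case figures below are
    # illustrative assumptions, not the Jones/Bonsignour client averages.
    HOURS_PER_TEST_CASE = 0.75            # assumed preparation plus execution time
    CASES_PER_FP_BY_TEST_TYPE = {         # assumed densities for four common test types
        "subroutine": 0.8,
        "unit": 1.0,
        "new function": 0.4,
        "regression": 0.3,
    }
    WORK_HOURS_PER_MONTH = 132

    def testing_months(function_points: int) -> float:
        """Rough testing duration, assuming effort scales linearly with size."""
        cases = sum(density * function_points
                    for density in CASES_PER_FP_BY_TEST_TYPE.values())
        return cases * HOURS_PER_TEST_CASE / WORK_HOURS_PER_MONTH

    print(f"100 FP project:    {testing_months(100):.1f} staff-months of testing")
    print(f"10,000 FP project: {testing_months(10_000):.0f} staff-months (linear assumption)")

Even under the optimistic linear assumption, the 10,000 FP figure runs to many staff-years of effort, which is why risk-based selection of what to test becomes unavoidable.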

The above calculation uses only about a tenth of the total number of testing types. For example, integration and system testing were not included in the list of the four most common testing types but are still quite common (used in 60% and 40% of projects, respectively). The more testing methods applied, the better the final software quality, but each additional method requires still more time. Again, the potential testing time starts to exceed what is practical. As a result, risk analysis and other measures must be applied to determine how much testing is enough.

Stakeholders

Any software project will have a number of stakeholders, each with potentially very different needs that can actively conflict with one another. Several stakeholders, with example goals, are presented here.

Testers

A goal of a software tester is to ensure that the application being tested is as free of defects as possible. As noted above, for any large project this is at best a lengthy and expensive goal, and at worst an unobtainable one. At a certain point, further testing will not be practical. Some testers may naturally want to be more thorough than is necessary, while others may fear reprisal if a defect is missed. Either way, these goals can easily make the testing overlong.

Developers

Developers want the project to be done and out the door so that they can move on to the next one or, in Agile, finish the current sprint. Testing, or having to return to the program to remove defects, delays this goal. As with the testers, this can become an even greater problem for developers if penalties are imposed for defects in the code.

Customers

Customers want the application to be perfect out of the box for a low cost. The software must work without defect on all hardware combinations (especially for home users) and fulfill all requirements. Most customers realize that this is not going to happen but that is the goal. Software that does not meet the ideal will be well known in the community very quickly. This can put pressure on the business, which puts pressure on the manager, and finally on the development and testing teams.

Managers

Like the customer, the manager wants the program to be of good quality and low cost, and most likely also wants a short development time. Improving any one of these goals (reducing cost, increasing quality, reducing time) requires a sacrifice in one or both of the other two. To decrease time and cost, testing may be cut or reduced. The process of solving a problem (development) is often, from the manager’s point of view, more clearly important than the process of preventing one (testing). Ultimately, management must decide how much time can be expended on any part of a project, and testing is often sacrificed in favor of the more visible development.

Delays, Delays

There is always the potential for any work to run into delays. Unforeseen problems with the software, changing personnel, and damaged equipment are all potential problems. There are too many possibilities to list here, but two will be presented: human factors and changing requirements.

Human Factors

However well-planned the software testing portion of a project might be, there is always the possibility that attitudes and habits of the development team can get in the way of completion. Distractions, attitudes towards testing, and politics can all cause delays.

Software teams, clearly, must work on computers much of their time, and computers are rife with potential distractions: social media, games, e-mail, and so on. These pursuits can sometimes improve efficiency in various ways, but they are still a lure to waste more time than is appropriate.

A number of testing types, including subroutine and unit testing (included in the time estimate above), are often most appropriately performed by the developers. Additionally, pre-test defect removal will also involve the developers. Sometimes, developers do not believe that their time is properly spent on such activities. Further, even if the developers do relatively little testing themselves, a separate group of testers sending back the work due to defects, especially if this happens multiple times, can cause friction and further delays.

Changing Requirements

Most projects will have requirements that evolve over time. Except with very small applications, as the work progresses, new features will be desired, a better view of current features will become available, and some requirements may actually be removed as unnecessary. Priorities will also shift, even if the requirements remain relatively stable. This increases both the development time and the testing time as adaptations are made, but in this case testing is more likely to be sacrificed than development.

Making More Time for Testing

Defect Prevention

Traditionally, testing has been done after all development tasks are finished but before deployment. This means that for a small project an extra two months (according to the earlier testing time estimate) would be added to the end of the project, increasing the likelihood that the testing will be cut short. Finding defects throughout the development process (as in Figure 1) may increase the efficiency of removing defects, making the testing needed after the coding phase shorter.

Finding Defects

Test Driven Development

Test-driven development (TDD) comes from Agile practices. It is a discipline in which development does not start with code but with the creation of a test case, which in turn is based on a requirement. The code is then written to pass the test case. If the code does not initially pass, it is returned to the backlog and attempted again until it succeeds. This means that the testing is spread throughout the development process and the tests are ready at hand. However, studies of the technique show inconsistent benefits relative to costs: the process often costs more than other testing methods.
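As a concrete illustration (not drawn from the report), a minimal TDD cycle might look like the sketch below. The requirement, the function name and the use of Python's built-in unittest framework are all assumptions made for the example: the test is written first against the requirement, it fails because the code does not yet exist, and then just enough code is written to make it pass.

    import unittest

    # Step 2: just enough production code to satisfy the test below.
    # (In strict TDD this function would not exist until the test had failed once.)
    def apply_discount(price: float, percent: float) -> float:
        """Return the price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # Step 1: the test case, derived directly from a hypothetical requirement:
    # "Orders over $100 receive a 10% discount."
    class DiscountRequirementTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertEqual(apply_discount(150.00, 10), 135.00)

        def test_invalid_percentage_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(150.00, 120)

    if __name__ == "__main__":
        unittest.main()

Because the test exists before (and after) the code, it also serves as a regression test for the rest of the project, which is how TDD spreads testing across the development process.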

Test Maturity Model integration

TMMi is similar to the Capability Maturity Model Integration (CMMI). It is a set of criteria used to determine the fitness of an organization’s testing processes. It does not dictate how the testing is done; rather, it gives guidance in making improvements. It has five levels: Initial, Managed, Defined, Measured and Optimization. Each level except Initial has a set of requirements against which an organization can be certified. The model is growing in popularity because it gives a map to continuous improvement of testing processes. While not all organizations will need to reach the highest level of TMMi, even the lower levels can lend insight into testing. Indeed, under TMMi, it is perfectly acceptable to operate at several different levels simultaneously if those levels reflect the goals of the organization.

Conclusions

So, why is there never enough time for testing? Part of this is perception. All stakeholders want, to a greater or lesser extent, an error-free application. Unfortunately, the exhaustive testing that this would require takes too much time to be possible in all but the smallest projects. As long as the goal is finding and fixing all defects, there can never be enough time to test. Proper risk assessment and prioritization before testing are necessary to reduce this problem.


How Do We Know If We Are Getting Value from Our Software Vendors?


Scope of this Report

This report discusses what is meant by value, the process of sizing and estimating the software deliverable, and the benefits of those results. It covers:

  • What is “Value”?
  • Functional Value
  • More on the estimation process
  • Case study example
  • Conclusion

What is “Value”?

We can look at the value of software development from vendors in terms of how much user functionality the software vendor delivers: in other words, how many user features and functions are impacted as a result of a project. We can also consider whether the software deliverables were completed on time and on budget, to capture “value for money” and the monetary implications of timeliness. Finally, we can see whether the software project delivered what was expected from a user requirements perspective and whether it meets the users’ needs.

This last, more subjective, assessment of value gets into issues of clarity of requirements and the difficulties of responding to emergent requirements if the initial requirements are set in stone. It is outside the scope of this report but we believe the best way to address this issue is through Agile software development which is covered in several other DCG Trusted Advisor reports.

Functional Value

To quantify the software, we must first size the project. Of course, there are several ways to do this, with varying degrees of rigor and cross-project comparability. Function Points and Story Points both size the delivered software from the user’s perspective. Since Function Point Analysis is an industry-standard, best-practice sizing technique, we find that it is used more often for sizing at this Client-Vendor interface.

Function point analysis considers the functionality that has been requested by and provided to an end user. The functionality is categorized as pertaining to one of five key components: inputs, outputs, inquiries, interfaces and internal data. Each of the components is evaluated and given a prescribed weighting, resulting in a specific function point value. When complete, all functional values are added together for a total functional size of the software deliverable.

After you have established the size of the software project, the result can be used as a key input to an estimating model to help derive several other metrics, which could include but are not limited to cost, delivery rate, schedule and defects. A good estimating model will include industry data that can be used to compare the resulting output metrics to benchmarks, allowing the client to judge the value of the current software deliverable under consideration. Of course, there are always mitigating circumstances, but at least this approach allows for an informed value conversation (which may result in refinement of the input data to the estimating model).
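As a rough illustration of the mechanics, the sketch below computes an unadjusted function point total as a weighted sum of the five components. The weights shown are single “average complexity” values assumed for simplicity; a full IFPUG count rates the complexity of each individual component and then applies adjustment factors.

    # Hedged sketch of an unadjusted function point count.
    # The single average weights below are simplifying assumptions; a real
    # count would rate each component's complexity individually.
    AVERAGE_WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "external_interfaces": 7,
        "internal_data": 10,
    }

    def unadjusted_function_points(counts: dict) -> int:
        """Weighted sum of the five key components."""
        return sum(AVERAGE_WEIGHTS[component] * n for component, n in counts.items())

    project = {
        "external_inputs": 20,
        "external_outputs": 15,
        "external_inquiries": 10,
        "external_interfaces": 4,
        "internal_data": 8,
    }
    print(unadjusted_function_points(project))  # -> 303 function points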

5 Key Components of Function Point Analysis

Of course, if you can base your vendor contract even partially on a cost-per-function-point metric, this provides an excellent focus on the delivery of functional value, although it is wise to have an agreed independent third party available to conduct a function point count in the event of disputes.

More on the Estimation Process

We have mentioned the importance of the estimation model and the input data in achieving a fair assessment of the functional value of the delivered software. We have also hinted that these will be issues to be discussed if there is disagreement between client and vendor about the delivered value. Hence, it is worth digging a little deeper into the estimation process.

The process for completing an estimate involves gathering key data that is related to the practices, processes and technologies used during the development lifecycle of the software deliverable. DCG analyzes the various project attributes using a commercial software tool (e.g. SEER-SEM from Galorath), assessing the expected level of effort that would be required to build the features and functions that had to be coded and tested for the software deliverable. The major areas for those technical or non-functional aspects are:

  • Platform involved (Client-server, Web based development, etc.)
  • Application Type (Financial transactions, Graphical user interface, etc.)
  • Development Method (Agile, Waterfall, etc.)
  • Current Phase (Design, Development, etc.)
  • Language (Java, C++, etc.)

Sophisticated estimating models, such as those built into the commercial tools, also consider numerous other potential inputs, too many to list here, including parameters related to personnel capabilities, the development environment and the target environment.

Given the size of the software deliverable and its complexity, represented by some or all of the available input parameters, we also need to know the productivity of the software development team that is developing the software. This can be a sensitive topic between Client and Vendor. We have often seen that the actual productivity of a team differs from the reported productivity, whether because the Vendor throws extra people at the team to make a delivery date (bad) or adds trainees to the team to learn (good); for value purposes, the Client mostly cares only about the productivity that it will be billed for!

Once we have established the development team’s rate of delivery, or function points per effort month, we can use that information along with all the previous information (size, complexity) to deliver the completed estimate.

The end result of the sizing and estimating process shows how long the project will take to complete (Schedule), how many resources will be needed to complete it (Effort) and the overall cost of the software deliverable (Cost).
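A deliberately simplified version of that final step is sketched below, assuming a size in function points and an observed delivery rate. All of the input values are placeholders, and a commercial model such as SEER-SEM applies far richer parameter sets and non-linear schedule relationships; the sketch only shows how size and productivity combine into Effort, Schedule and Cost.

    # Hedged sketch: derive effort, schedule and cost from size and productivity.
    # Every input value here is an illustrative placeholder.
    def estimate(size_fp: float,
                 fp_per_effort_month: float,
                 team_size: int,
                 cost_per_effort_month: float) -> dict:
        effort_months = size_fp / fp_per_effort_month      # Effort
        schedule_months = effort_months / team_size        # Schedule (naive: ignores overhead)
        cost = effort_months * cost_per_effort_month       # Cost
        return {"effort_months": round(effort_months, 1),
                "schedule_months": round(schedule_months, 1),
                "cost": round(cost)}

    print(estimate(size_fp=500, fp_per_effort_month=10,
                   team_size=5, cost_per_effort_month=12_000))
    # -> {'effort_months': 50.0, 'schedule_months': 10.0, 'cost': 600000}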

Sizing and Estimating Process

Case Study Example

DCG recently completed an engagement with a large global banking corporation that had an ongoing engagement with a particular vendor for various IT projects. One such project involved a migration effort to port functionality from one application platform to a new platform. The company and the vendor developed and agreed on a project timeline and associated budget. However, at the end of the allocated timeline, the vendor reported that the migration could not be completed without additional time and money.

The company was reasonably concerned about the success of the project and wanted more information as to why the vendor was unable to complete the project within the agreed-upon parameters. As a result, the company brought David Consulting Group on board to size and evaluate the work that had been completed to date, resulting in an estimate of how long that piece of work should have taken.

The objectives of the engagement were to:

  • Provide a detailed accounting of all features and functions that were included in the software being evaluated
  • Calculate the expected labor hours by activity, along with a probability report (risk analysis) for the selected releases

DCG’s initial estimate was significantly lower than what the vendor was billing for that same set of development work. With such a significant difference in the totals, it was clear that something was off. DCG investigated the issue with the vendor to explore what data could be missing from the estimate, including a review of the assumptions made in the estimate regarding:

  • Size of the job
  • Degree of complexity
  • Team’s ability to perform

In the end, the company and the vendor accepted the analysis and used the information internally to resolve the issues relevant to the project. As a result, the company also decided to use another software vendor for future software projects, resulting in a significant cost saving.

Conclusions

This case study highlights a typical business problem wherein projects are not meeting agreed-upon parameters. In cases such as these, Function Point Analysis proves to be a useful tool in measuring and evaluating the software deliverables, providing a quantitative measure of the project being developed. The resulting function point count can also be used to track other metrics such as defects per function point, cost per function point and effort hours per function point. These metrics along with several others can be used to negotiate price points with current and future software development vendors to ensure that the company is receiving the best value for their IT investment.

The estimation process helps in keeping vendors accountable for the work they are producing by providing solid data on the realistic length of a project as well as the relative cost of the project. Quantitative estimates on project length allow companies to better manage their vendor relationships with increased oversight and an enhanced understanding of the expected outcome for their software deliverables.


What is #NoEstimates?


Scope of this Report

Estimation is one of the lightning rod issues in software development and maintenance. Over the past few years the concept of #NoEstimates has emerged and become a movement within the Agile community. Due to its newness, #NoEstimates has several camps revolving around a central concept of not generating task-level estimates. The newness of the movement also means there are no (or very few) large example projects that can be used as references [1]. Finally, there are no published quantitative studies comparing the results of work performed using #NoEstimates techniques to other methods. In order to have a conversation, we need to begin by establishing a shared context and language across the gamut of estimating ideas, whether Agile or not. Without a shared language that includes #NoEstimates, we will not be able to compare the concept to classical estimation concepts.

Context

#NoEstimates Context:

There are two main groups or camps of thought leaders in the #NoEstimates movement (the two camps probably reflect more of a continuum of ideas than absolutes). The first camp argues that a team should break work down into small chunks and then immediately begin completing those small chunks (doing the highest value first). The chunks build up quickly to a minimum viable product (MVP) that can generate feedback, so the team can hone its ability to deliver value. This camp leverages continuous feedback and re-planning to guide work, and luminaries like Woody Zuill often champion it.

A second camp begins in a similar manner (breaking the work into small pieces, prioritizing on value and perhaps risk, and delivering against an MVP to generate feedback) but it also measures throughput. Throughput is a measure of how many units of work (e.g. stories or widgets) a team can deliver in a specific period of time. Continuously measuring the throughput of the team provides a tool for understanding when work needs to start in order for it to be delivered within a period of time. Average throughput is used to provide the team and other stakeholders with a forecast of the future. This is very similar to the throughput measure used in Kanban. People like Vasco Duarte, who practice #NoEstimates from a Lean or Kanban perspective, champion this second camp. We recently heard David Anderson, the Kanban visionary, describe a similar #NoEstimates position using throughput as a forecasting tool. Both camps in the #NoEstimates movement eschew developing story- or task-level estimates. The major difference is in the use of throughput to provide the forecasting that is central to bottom-up estimating and planning at the lowest level of the classic estimation continuum.
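To make the second camp's approach concrete, the sketch below forecasts a delivery horizon from average throughput rather than from task-level estimates. The story counts and two-week cadence are invented for illustration.

    # Hedged sketch: forecast remaining duration from observed throughput.
    # The historical counts below are invented for illustration.
    completed_per_iteration = [6, 4, 7, 5, 6]   # stories finished in recent two-week iterations
    remaining_stories = 40

    average_throughput = sum(completed_per_iteration) / len(completed_per_iteration)
    iterations_needed = remaining_stories / average_throughput

    print(f"Average throughput: {average_throughput:.1f} stories per iteration")
    print(f"Forecast: about {iterations_needed:.0f} more iterations "
          f"(roughly {iterations_needed * 2:.0f} weeks)")

No story receives an individual effort estimate; the forecast rests entirely on the team's demonstrated rate of delivery, which is why stable teams and consistently sized stories matter so much to this camp.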

Classic Estimation Context:

Estimation as a topic is often a synthesis of three related but different concepts: budgeting, estimation and planning. Because these three concepts are often conflated, it is important to understand the relationship between them. They are typical of a normal commercial organization, although they might be called different things depending on your business model.

An estimate is a finite approximation of cost, effort and/or duration based on some basis of knowledge (this is known as a basis of estimation). The flow of activity conflated as estimation often runs from budget, to project estimation to planning. In most organizations, the act of generating a finite approximation typically begins as a form of portfolio management in order to generate a budget for a department or group.

The budgeting process helps make decisions about which pieces of work are to be done. Most organizations have a portfolio of work that is larger than they can accomplish, therefore they need a mechanism to prioritize. Most portfolio managers, whether proponents of an Agile or a classic approach, would defend using value as a key determinant of prioritization. Value requires having some type of forecast of cost and benefit of the project over some timeframe. Once a project enters a pipeline in a classic organization, an estimate is typically generated. The estimate is generally believed to be more accurate than the original budget due to the information gathered as the project is groomed to begin.

Plans break stories down into tasks, often with personnel assigned, generate an estimate of effort at the task level and sum those estimates into higher-level estimates. Any of these steps can (but should not) be called estimation. The three-level process described above, if misused, can cause several team and organizational issues. Proponents of the #NoEstimates movement often classify these issues as estimation pathologies. Jim Benson, author of Personal Kanban, established a taxonomy of estimation pathologies [2] that includes:

1. Guarantism – a belief that an estimate is actually correct.

2. Swami-itis – a belief that an estimate is a basis for sound decision making.

3. Craftosis – an assumption that estimates can be done better.

4. Reality Blindness – an insistence that estimates are prima facie implementable.

5. Promosoriality – a belief that estimates are possible (planning facility)

Estimates are by definition imprecise and can only be accurate within a range of confidence; however, these facts are often “forgotten” in favor of the single-number contract. Acting as if any of these pathologies were true has generated the anger and frustration needed to fuel the #NoEstimates movement.

When done correctly, both #NoEstimates and classic estimation are tools to generate feedback and create guidance for the organization. In its purest form, #NoEstimates uses functionality to generate feedback and to provide guidance about what is possible. The less absolutist “Kanban’er” form of #NoEstimates uses both functional software and throughput measures as feedback and guidance tools. Classic estimation tools use plans and performance against the plan to generate feedback and guidance. The goal is usually the same; it is just that the mechanisms are very different.

Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion

There are many levels of estimation, including budgeting, high-level estimation and task planning (detailed estimation). We can link a more classic view of estimation to the “Agile Planning Onion” popularized by Mike Cohn. In the Agile Planning Onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at its core. Budgeting is a strategic form of estimation that most corporate and governmental entities perform. Other than in its most extreme form, budgeting is generally not a practice eschewed by #NoEstimates proponents. Estimation exists in the middle layers of the Agile Planning Onion (the product and release layers). In classic estimation, these estimates are often developed using top-down techniques such as analogy or parametric estimation using function points, story points or tee-shirt sizing. #NoEstimates proponents leveraging Kanban techniques perform this level of estimation as forecasts using average flow rates and queuing theory (an application of Little’s Law). The resistance at this level has generated the perception that size-based estimation at this level (and, later, planning at the task level) generates several pathological behaviors within organizations. The final layers of the planning onion, iteration and daily planning, are generally the areas of highest concern to the #NoEstimates movement. While tasks may be identified, effort is not assigned.
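The queuing-theory reference here is to Little’s Law, which relates average work in process, throughput and time in the system. A worked one-liner with invented numbers:

    # Little's Law: average lead time = average work in process / average throughput.
    # The figures below are invented for illustration.
    work_in_process = 12    # items currently somewhere in the workflow
    throughput = 4          # items completed per week, on average
    average_lead_time = work_in_process / throughput
    print(f"An item entering the workflow now should emerge in about {average_lead_time:.0f} weeks")

Working the relationship backwards is what lets a Kanban-style team say when an item must start in order to meet a date, without estimating the item itself.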

It should be noted that, while effort estimates are not done at the planning layers, or generally at the estimation layer, most teams adopt rules to break work down into predictable units. Rules or guidelines are often established that affect story and task size. The use of rules to govern granularity is one of the reasons flow measures can be used to forecast when work needs to begin in order to meet date or dependency requirements. Johanna Rothman states in her article “The Case for #NoEstimates” that “when you deliver small, valuable chunks of work every day or more often,” you can avoid estimation. The critical words are small and every day, which require the team to understand how to groom stories to the desired granularity. Whether through rules or through feedback, using these techniques to groom stories could easily be construed as a crude form of estimation.

Scenarios:

Standard Corporate Environments:

Organizational budgeting (strategy and portfolio): Continuous flow and other #NoEstimates techniques do not answer the central questions most organizations need to answer, which include:

1. How much money should I allocate for software development, enhancements and maintenance?

2. Which projects or products should we fund?

3. Which projects will return the greatest amount of value?

While most budgets are scientific guesses, there is a need to understand at least some approximation of the size and cost of the work on the overall backlog.

High Level Estimation (product and release):

Release Plans and product road maps could easily be built from forecasts based on teams that have a track record of delivering value on a regular basis. The idea of #NoEstimates can be applied at this level of planning and estimation IF the right conditions are met. Conditions include:

1. Stable Teams
2. Agile Mindset (both team and organizational levels)
3. Well-groomed stories

The classic questions of when, what and how much can be answered in this environment for work done by single teams or by scaled Agile programs.

It should be noted that the example used by Woody Zuill, which uses the purest form of #NoEstimates (start, deliver, get feedback, and then do more), reflects an environment in which all of these conditions are met.

Task Level Estimation (iteration and daily):

Task-level planning is the basis of most #NoEstimates discussions. Stable teams that are able to consistently accept and deliver what is expected do not have any need to plan effort at the task level.

Commercial / Contractual Work:

Raja Bavani, Senior Director at Cognizant Technology Solutions, stated in a recent conversation that he thought #NoEstimates was a non-starter in a contractual environment.

Conclusion

Estimation is a form of planning, and planning is considered an important competency in most business environments. Planning activities abound, from planning the corporate picnic to planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will “it” be done? How much will “it” cost? What is “it” that I will actually get? Rarely does the question of how much effort it will take get asked, except as a proxy for how much it will cost. As the work progresses, the questions shift to whether we are going to meet the date, budget or scope. Answering those questions can be accomplished by any number of techniques. Using #NoEstimates techniques still requires most organizations to budget. Using #NoEstimates techniques requires breaking stories down into manageable, predictable chunks so that teams can predictably deliver value. The ability to predictably deliver value gives organizations a tool to forecast delivery. #NoEstimates really isn’t about not estimating . . . it is just estimating differently.

Sources

1. The C3 Project was used to hone and prove many of the Agile techniques (eXtreme Programming and wikis, for example) and acted as a training ground for many luminaries of the early Agile movement.

2. http://herdingcats.typepad.com/my_weblog/2015/03/five-estimating-pathologies-and-their-corrective-actions.html (accessed 4/27/15) or http://moduscooperandi.com/blog/modus-list-3-our-five-estimate-pathologies/ (accessed 4/27/15)


How Can We Optimize Our SDLC to Maximize Demonstrable Value to the Business?


Scope of this Report

This report investigates how changes to the SDLC (Software Development Lifecycle) can improve the delivery of demonstrable value to the business. We consider how we might measure “demonstrable value” in a way that the business will understand. We review the theory of “Lean Software Engineering” and we suggest some ways that the theory can be applied to optimize different SDLCs. Finally, we discuss the importance of Value Visualization – requiring each story or requirement in the SDLC to have a demonstrable and highly visible set of business value criteria to drive tactical decision-making.

What is “Demonstrable Value to the Business”?

Basically, most software development organizations are driven by demands (or possibly polite requests) from the business(es) that fund the software development. This is not all the work they do, because some work is self-generated, either from the software development group or from the rest of IT; but, generally, this second category of work still has to be accepted for funding by the business and prioritized against business needs.

So, “Demonstrable Value to the Business” could simply be delivering to the business what it asks for, and doing so in accordance with the “iron triangle” of "in scope, on time and on budget." In “The Business Value of IT,” Harris, Herron and Iwanicki argue that “business value” tends to be in the eye of the beholder. While this is an important ingredient of the definition, it is not sufficient. Some rigour must be applied beyond simple customer satisfaction because, at the end of the day, the success of an organization will almost always be judged in monetary terms. Even non-profits must be able to stick within the available budget while delighting customers.

Inevitably, the best way to introduce objectivity into a business value discussion is to follow the money, however difficult and apparently unjustified this may seem to the participants. Certainly, there are value types that cannot be measured in dollars and cents, but we would argue that such situations are relatively rare in the business of software development. Hence, while we would always include customer satisfaction when assessing business value, we believe that “demonstrable value” requires the objectivity of financial metrics.

Lean Software Engineering

In our November 2014 DCG Trusted Advisor report, we investigated the meaning of the term “Lean Software Engineering.” That report is a good starting point and recommended reading for this report. To summarize the key points of relevance to this report:

  • The Poppendiecks (Mary and Tom) have proposed seven principles of Lean Software Development:
    - Eliminate Waste
    - Build Quality In
    - Create Knowledge
    - Defer Commitment
    - Deliver Fast
    - Respect People
    - Optimize the Whole
  • Lean software engineering is not an SDLC but an optimization philosophy that can be applied to all SDLCs. The following practices are often associated with implementations of the Lean philosophy in software engineering:
    - Visual Controls including Kanban Boards
    - Cumulative Flow Diagrams
    - Virtual Kanban Systems
    - Small batch sizes
    - Automation
    - Kaizen (or continuous improvement) Events
    - Daily Standup meetings
    - Retrospectives
    - Operations Reviews.
  • Lean principles and practices are applicable to waterfall SDLCs and embodied in Agile SDLCs, although in neither case is there usually 100% compliance.

In considering lean product development flow, Don Reinertsen identifies twelve problems with traditional product development. Reinertsen was referring to traditional waterfall implementations of product development but, as Figure 1 shows, some of these problems have been addressed by typical Agile implementations, while others are addressed only implicitly. For example, consider the “absence of WIP constraints”: few Scrum implementations have explicit WIP constraints, but constraining team size and sprint duration implies a WIP constraint.

Opportunities for Optimization

Of course, there are many variations of the generic SDLC models that arise from local customization to address either the problems we have described or other local issues. Hence, for example, our scores here might be modified (up or down!) by an Agile implementation that is not textbook Scrum but some combination of Lean/Scrum/XP. The same is true for modified versions of waterfall which, for example, achieve a strong cadence by pipelining requirements/design, development and testing into, say, three-month chunks that endlessly repeat. For this reason, in Appendix A, we have added some commentary to the simple scores of Figure 1.

Figure 1: Problems with traditional approaches to Product Development (after Reinertsen) and the degree to which they are addressed today by typical Waterfall (Blue) and Agile (Red) SDLCs. A Reinertsen problem that has been fully-addressed in an SDLC would score a ‘3’.

SDLC Lean Problems

So, with the stipulation that the actual SDLC model you are looking at might not score exactly as we have suggested in Figure 1, the greatest opportunities for optimization leap off the chart:

  • For both SDLC’s:
    - Quantify the economics and make them visible at all levels so that they can be used for tactical decisions
    - Make queues explicitly visible at all levels
  • For waterfall, either:
    - Implement Agile, or
    - Implement a Lean-Kanban system or some other system that:
      - Sets Work in Process (WIP) limits on process steps
      - Sets small batch sizes
      - Decentralizes control by implementing work-pull instead of work-push
  • And focus on optimizing end-to-end value throughput instead of focusing on resource utilization that may maximize local productivity at the expense of throughput.

How? The Economic View - Value Visualization

We need to associate a set of economic information with each requirement or story that we want to flow through our SDLC. We propose the minimum set in Figure 2 should be added to each requirement/story in an easily visible (see Figure 3) or electronically accessible way.

Figure 2: The Value Visualization Economic Metrics set for every Requirement/Story

Economic Metrics

As we have learned from Kanban boards in Agile, visualization is very powerful for team decision-making, and so it makes sense to associate the economic value data of Figure 2 with a visual model for which we provide a template in Figure 3 and an example in Figure 4.
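Because Figure 2 is not reproduced here, the exact metric set is not shown. The sketch below simply illustrates one way a set of economic fields (the field names are assumptions for illustration, not the Figure 2 set) could be attached to every requirement/story so that boards and tools can display and sort by them.

    from dataclasses import dataclass

    # Hedged sketch: the economic fields below are illustrative assumptions;
    # an organization would attach whatever metric set it adopts from Figure 2.
    @dataclass
    class StoryValueCard:
        story_id: str
        title: str
        estimated_value_usd: float       # expected business benefit
        estimated_cost_usd: float        # expected cost to deliver
        cost_of_delay_per_week: float    # value lost for each week of delay

        @property
        def wsjf(self) -> float:
            """Weighted Shortest Job First: cost of delay divided by cost (a proxy for size)."""
            return self.cost_of_delay_per_week / self.estimated_cost_usd

    card = StoryValueCard("ST-101", "One-click reorder", 250_000, 40_000, 5_000)
    print(f"{card.story_id}: WSJF = {card.wsjf:.3f}")

Making a number like WSJF visible on the card is what allows the pull decisions described later (see “The Use of Lean-Kanban” below) to be made on economic grounds rather than on gut feel.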

Value Visualization Trains

There are some challenges in carrying value visualization data through requirement or story decomposition, because the business cases that drive the requirements and stories often map to quite high-level requirements and epics, rather than to the “small batch” level requirements and stories that we want to see flowing through development. We have learned from at least one client that it does not make sense to hold economic value data at the lowest requirement/story level because of the difficulty of breaking up the high-level economic information into ever smaller units. Hence, at some level of decomposition it will be necessary to stop breaking up the economic data and follow three simple rules:

  • Use T-shirt sizes for value (e.g. High, Medium, Low) at the lowest levels of story.
  • These T-shirt sizes should be inherited from, and the same as, the lowest-level parent requirement/story for which economic data is available.
  • Implement a control mechanism to ensure that, below the lowest level of economic data, all child requirements/stories are connected and prioritized together as much as possible.

There must be an associated process for the third rule. It is possible that a subset of the stories associated with a particular set of economic data could be deployed before all the stories associated with that data are ready for deployment. For example, in some cases, 90% of the value could be realized by deploying 75% of the child stories. In such cases, decisions need to be made and executed about the value of the remaining “orphaned stories.” In short, are they still needed, or can they be removed from or deprioritized in the SDLC flow?
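A minimal sketch of the second rule above follows; the parent/child structure and the value bands are assumptions for illustration. Each child story simply carries the value band of the lowest-level parent for which real economic data exists.

    # Hedged sketch of the inheritance rule: children carry the parent's T-shirt value band.
    epics = {"EPIC-7": "High"}          # lowest level that still has real economic data
    child_stories = {"ST-201": "EPIC-7", "ST-202": "EPIC-7", "ST-203": "EPIC-7"}

    def inherited_value(story_id: str) -> str:
        """Return the T-shirt value band inherited from the story's parent epic."""
        return epics[child_stories[story_id]]

    print(inherited_value("ST-202"))    # -> "High", the same band as its parent epic

Keeping the child stories linked to the same parent also supports the third rule: the parent's list of children is exactly the set that must be prioritized together, and anything left over once most of the value has shipped is an orphan candidate.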

The Use of Lean-Kanban

While Agile SDLC’s such as Scrum and SAFe are designed to embody many lean principles, waterfall was not originally designed with lean principles in mind. However, that does not mean that moving to Agile is the only course of action available. If there are good reasons for an organization to stick with waterfall for part of its operations, then the application of Lean-Kanban principles can help. After all, Lean principles originally emerged in manufacturing environments that tend to be waterfall in nature (a quick reminder/warning: lean manufacturing and lean software engineering are not the same thing, as we have discussed in earlier DCG Trusted Advisor reports).

To move from classic waterfall to a Lean-Kanban model, the following minimal steps need to be taken:

  • Create a product backlog of requirements/stories in priority order
  • [Ideally but not necessarily at first] Add the Value Visualization Data to each requirement/story
  • Create a “Ready” step as the first step in the flow to ensure that requirements/stories do not enter the SDLC until they are ready to be worked. Put a WIP limit of less than 10 on the Ready step
  • Create a “Ready to Deploy” step as the last step in the flow with no WIP limit
  • For each step in the waterfall SDLC (e.g. Analysis, Design, Code, Test), create two sub-steps: “In Progress” and “Done.” Each whole step should have a WIP limit of less than 10.
  • Allow the staff in each step to pull requirements from the preceding “Ready” or “Done” step if, and only if, bringing in that requirement/story does not exceed their WIP limit. Staff should pull by highest value or WSJF (a minimal sketch of this pull rule follows the list).
  • Use cumulative flow charts to track and predict bottlenecks, and shift resources or WIP limits to optimize value flow.
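The pull-with-WIP-limit rule is the heart of the change, so here is a minimal sketch of it. The step names, limits and value numbers are illustrative assumptions; a real board would also track the “Done” sub-steps and the Value Visualization data described earlier.

    # Hedged sketch of a single pull-with-WIP-limit step; names and figures are illustrative.
    class Step:
        def __init__(self, name: str, wip_limit: int):
            self.name = name
            self.wip_limit = wip_limit
            self.in_progress: list[dict] = []

        def pull_from(self, upstream_done: list[dict]) -> None:
            """Pull the highest-value ready items, but never exceed the WIP limit."""
            upstream_done.sort(key=lambda story: story["value"], reverse=True)
            while upstream_done and len(self.in_progress) < self.wip_limit:
                self.in_progress.append(upstream_done.pop(0))

    ready = [{"id": "ST-1", "value": 8}, {"id": "ST-2", "value": 5}, {"id": "ST-3", "value": 9}]
    design = Step("Design", wip_limit=2)
    design.pull_from(ready)
    print([story["id"] for story in design.in_progress])   # -> ['ST-3', 'ST-1']; ST-2 waits upstream

Because work is pulled rather than pushed, a downstream bottleneck automatically throttles the upstream steps, which is exactly what the cumulative flow chart in the final step makes visible.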

Conclusion

We are capable of building and running effective Waterfall SDLC’s but they are not necessarily efficient in optimizing value flow. Worse, waterfall SDLC’s are not good at visualizing the data needed to improve value flow and they tend to be poor at implementing lean principles. Agile SDLC’s are much better at implementing Lean principles and so improve value flow but even Agile SDLC’s are not optimal if they do not have a way to include some basic economic data in their tactical decision-making.

Sources

  • “The Business Value of IT,” Harris, Herron and Iwanicki, CRC Press, 2008.
  • DCG Trusted Advisor Report, November 2014, “What is meant by the term ‘Lean Software Development’?”, /insights/publications/ta-archives/lean-software-development/
  • “Implementing Lean Software Development – From Concept to Cash,” Mary and Tom Poppendieck, Addison Wesley, 2007. [Or, indeed, any book by Mary and Tom on this topic!]
  • “The Principles of Product Development Flow – Second Generation Lean Product Development,” Donald G. Reinertsen, Celeritas Publishing, 2009.

Appendix A


DCG Publishes Third Volume of Trusted Advisor Anthology

Trusted Advisor

We've got news! That's right, the latest Trusted Advisor anthology is now available!

The third volume of the popular Trusted Advisor book series, “DCG Trusted Advisor Anthology 2015 Edition,” features 12 reports, including:

  • What is Excellence in Software Development? And Why Do Some Benchmarkers Think There Is Only One Answer?
  • Is Calculating ROI Meaningful for Agile Projects?
  • How Do I Size My Non-Functional Software?

Trusted Advisor is DCG’s members-only research forum (membership is free and open to all IT professionals). Members can submit IT-related questions to the forum, and then, each month, members can vote on the question that they would like to have researched. The question with the most votes is then researched by the DCG team. DCG produces a short report answering the question. The reports are all available on the DCG website, to both members and non-members. 

Each anthology contains all of the reports generated from the previous year. All of the anthologies are available for purchase on Amazon. More information about Trusted Advisor, including how to join, is available here.


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!