If I am going to scale agile to most of my organization, what do we need to do at the portfolio level?

Scope of Report

In this report, we suggest some considerations for executives seeking to grow the number of agile teams in their organization. At some point, changes are needed at the top. In particular, the portfolio management team needs to reorganize the proposed software development work to allow it to be pulled by the programs and teams from a portfolio backlog prioritized by economic value.


In “Agile Software Requirements,” Dean Leffingwell (a founder of the Scaled Agile Framework®, or SAFe®) quotes Mikko Parkkola’s description of portfolio management:

Portfolio management is a top-level authority that makes long-term investment decisions on strategic areas that affect the business performance of the company. The main responsibility of portfolio management is to set the investment levels between business areas, product lines, different products, and strategic portfolio investment themes; these are a collection of related strategic initiatives.

Leffingwell goes on to list three sets of activities that should be under the control or influence of the organization’s software portfolio management team:

  • Investment funding: Determining the allocation of the company’s scarce R&D resources to various products and services.
  • Change management: Fact patterns change over time, and the business must react with new plans, budgets and expectations.
  • Governance and oversight: Assuring that the programs remain on track and that they follow the appropriate corporate rules, guidelines, and relevant standards.

As organizations start to establish Agile outside of a few isolated teams, introducing or refocusing a portfolio management team will be a big part of the change. To stand any chance of success in using business value as the driver for the improvement of software development, the leaders of the organization must first instill a culture of lean software development.

Lean Software Development

The first step in establishing software value strategies is to learn about and commit to a philosophy of lean software development. We do not have space to dive too deeply into the theory and history of lean manufacturing and the evolution of lean software engineering in this report. Interested readers should refer first to “Implementing Lean Software Development” by Mary and Tom Poppendieck. Instead, we focus on some important ideas from lean and lean software engineering, in particular the principles that we consider an organization should adopt before it attempts to drive Agile software development across the whole business.

Removal of waste

A key principle of lean is that activities that do not add value should be removed from the process. “Activities that do not add value” is the definition of “waste” in lean. Of course, in the spirit of “kaizen” or continuous improvement, the phrase “do not add value,” is interpreted relatively as in “Activity A adds less value than Activity B – how can we make an improvement?”

Pull not push

Historically, in software development the business side of the organization has almost always created more ideas for work than the software development group can handle within the desired time or budget. This has led to competition between business heads to get their work prioritized by the software development teams ahead of their colleagues. Also, it led to a mentality that it was important to “push” as much work as possible into the software development group such that every available drop of capacity was utilized, even if this meant overloading and burning out development teams. This still happens today.

However, flow theory based on ideas from lean manufacturing and telecoms routing in Reinertsen’s “The Principles of Product Development Flow,” suggests that the strategy of bringing resources to projects and optimizing their utilization is a poorer strategy for delivering economic value than applying lean principles to the flow of work through small teams of expert resources. The lean answer is to establish appropriately-sized teams for the desired work steps and the available budget and then let the teams “pull” work, from a backlog of tasks that is ready to be worked on, as and when they are ready to do the work. This approach combined with Work In Process (WIP) limits at each step avoids queues of “work in process” throughout the team. Queues represent waste because there is no value being added while work items are sitting in queues. These principles are fundamental elements of “kanban” systems.
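The pull mechanics described above can be sketched in a few lines. This is an illustrative model, not a real kanban tool; the class name, item names, and WIP limit are invented for the example. The point is that the team takes work only when it has capacity, so queues accumulate visibly upstream rather than silently overloading the team.

```python
from collections import deque

class PullTeam:
    """A team that pulls work from a ready backlog, never exceeding its WIP limit."""
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_process = []

    def pull(self, ready_backlog):
        # Pull items only while capacity remains below the WIP limit.
        while ready_backlog and len(self.in_process) < self.wip_limit:
            self.in_process.append(ready_backlog.popleft())
        return self.in_process

backlog = deque(["item-1", "item-2", "item-3", "item-4", "item-5"])
team = PullTeam(wip_limit=3)
team.pull(backlog)
print(len(team.in_process))  # 3 -- the WIP limit caps work in process
print(len(backlog))          # 2 -- remaining work queues upstream, visibly
```

Pushing all five items onto the team at once would hide the queue inside the team; here the two waiting items stay in the backlog, where they can still be reprioritized.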

Importantly, “pull not push” requires a change in understanding at the strategic level. Executives must be persuaded, and accept, that in organizing software development under lean principles, to maximize flow they need to organize their desired deliverables into “portfolio backlogs” in which the desired deliverables are prioritized.

We must acknowledge that this is a big cultural change because most executives’ experience (and ego?) tells them that the software development department is there to do what executives want, when they want it. This pushing of requirements and delivery dates onto software development results in projects that are short on scope, over budget and late. Many executives experience these results with disappointment and develop a lack of trust in software development. With “pull not push”, the executives get a promise of greater value flow, greater agility and some acknowledged uncertainty (grounded in reality). We believe that the change is worthwhile and the benefits are real, but we acknowledge that the cultural change is difficult and important.

Economic focus

This might seem obvious to a senior management team, and they are likely to push back on the suggestion that they do not prioritize everything with an economic focus. Further, they might jump quickly from the term “economic focus” to the concept of business cases. Many of us have experience of business cases as “box checking” in the bureaucratic culture of some organizations. Such business cases do not add value and should be eradicated as waste from a lean perspective. The economic focus we want to engender in the executive team is that of maximizing the flow of value through software development. This means assigning relative business value to projects or epics at the strategic level. This means being aware that, as Reinertsen tells us, “If you only measure one thing then measure cost of delay.” Cost of delay is a useful default metric that can be easily evaluated as a relative metric which includes business value, timeliness constraints and risk reduction or opportunity enhancement. Further, when cost of delay is incorporated into the “Weighted Shortest Job First” metric, we have a more refined metric for prioritizing by business value that can still be easily and quickly assessed on a relative basis by a knowledgeable group of business and IT stakeholders.

A Portfolio Management Kanban System

Figure 1: Portfolio Management Kanban Board


Figure 1 illustrates the sort of Portfolio kanban system that can be used at the portfolio level to prioritize new software development investment. It is important to emphasize here that the goal is prioritization of new software development projects (or epics) to maximize the flow of business value. This implies that we have a way to measure business value at this level and we cover a simple, default technique, “Weighted Shortest Job First”, below. It also implies that some overarching strategic directions have been set for the organization against which competing investments with similar value (to different business units) can be prioritized.

The portfolio management system in Figure 1 is just one example and is not intended to be prescriptive. The process steps are as follows:

Ideas Backlog: Any and all ideas from the business go into this backlog. At this point, the ideas are high-level with limited descriptions and a simple value statement because there is no point in wasting effort (good lean practice!) in preparing detailed descriptions or value statements if these ideas are never going to make it to the next step. There is no WIP limit because there is benefit in considering all ideas and letting the best float to the top. In practice, weak ideas will “age out” of the ideas backlog as better ideas continue to be prioritized above them.

Ready for Review: The key activity at this process step is for the portfolio management team to pull “the best” ideas from the ideas backlog for further analysis. Some sort of simple algorithm is required for establishing “the best.” Probably, it should include inspection of the ideas for certain minimum information, such as a business value statement, and then a ranking based on that business value statement and the current strategic priorities of the organization. Some interaction between the portfolio management team and the business will be required to develop and document a refined understanding of the idea. There is a WIP limit on this step because the organization has limited funds to invest in software development, the portfolio management team has limited capacity and the WIP limit acts as a filter to accept only the best ideas. Ideas that are processed in this step may be rejected, returned to the ideas backlog or passed on for Preliminary Analysis.

Preliminary Analysis: In preliminary analysis, it is necessary to make a first pass at measuring relative business value. Note that this is a prioritization process, so relative business value is more important than actual, specific business value. Further, it is unlikely that future business value flows can be predicted with any real accuracy, so there is little point wasting effort to try. For relative business value at this point, we recommend using Cost of Delay and Weighted Shortest Job First (WSJF) – the relative values of these can be estimated by a workshop involving business, portfolio management and software development representatives. More details are provided below. The project with the highest relative WSJF should be done first. At this point, ideas become projects (or epics). Projects that are processed in this step may be rejected, returned to the ideas backlog or passed on for Detailed Analysis.

Detailed Analysis: At this stage, there is some certainty that the projects under consideration have real business value. Hence, it is appropriate for the portfolio management team to work with the business to develop a lightweight business case – a few pages of structured information – and work with the software development group to build some high level cost and duration estimates. Information from these documents should be used to refine the WSJF estimate. Projects that are processed in this step may be rejected, returned to the ideas backlog or moved into the portfolio backlog according to their relative WSJF priority versus the items already in the portfolio backlog.
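The four steps above can be sketched as a simple state machine. The stage names follow Figure 1; the WIP limits and item names are invented for illustration. The key behavior is that an item cannot advance when the downstream stage is at its WIP limit, so nothing is ever pushed into a full step.

```python
# Hypothetical sketch of the portfolio kanban stages described above,
# with a WIP limit on every stage except the ideas backlog.
STAGES = ["ideas", "ready_for_review", "preliminary_analysis",
          "detailed_analysis", "portfolio_backlog"]
WIP_LIMITS = {"ready_for_review": 5, "preliminary_analysis": 3,
              "detailed_analysis": 2}  # illustrative numbers only

def advance(board, item, from_stage):
    """Move an item to the next stage only if that stage has WIP capacity."""
    to_stage = STAGES[STAGES.index(from_stage) + 1]
    limit = WIP_LIMITS.get(to_stage)
    if limit is not None and len(board[to_stage]) >= limit:
        return False  # downstream is full: the item waits; nothing is pushed
    board[from_stage].remove(item)
    board[to_stage].append(item)
    return True

board = {stage: [] for stage in STAGES}
board["ideas"] = ["idea-A", "idea-B"]
advance(board, "idea-A", "ideas")
print(board["ready_for_review"])  # ['idea-A']
```

Rejection or return to the ideas backlog would be additional transitions on the same board; they are omitted here to keep the sketch short.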

To some extent, there is tension between strategy at the executive level being all about deciding on the big things that need to get done and Reinertsen telling us to reduce batch size which implies pushing small things through the portfolio backlog pipeline. Big ideas can certainly enter the ideas backlog but their chances of progressing quickly and smoothly through the portfolio management kanban system are limited by their size. Hence, savvy business heads will submit smaller ideas to the ideas backlog to increase their chances of getting investment quickly.

Cost of Delay (CoD) and Weighted Shortest Job First (WSJF)

Before we consider Cost of Delay and Weighted Shortest Job First, we must describe a technique for estimating that uses the shared expertise of a group of informed people to achieve a relative estimate. The technique is based upon the Delphi Method developed by the RAND Corporation in the 1950s, which was simplified as “Planning Poker” by Agile guru Mike Cohn. The relative estimates for a set of items are considered by a group of informed people. First, the group agrees on which item has the smallest estimate. This is arbitrarily assigned a value of 1. Next, each individual considers another item from the set and privately evaluates its estimate relative to the smallest unitary item, choosing from a specified, limited set of numbers. Typically, a modified Fibonacci sequence is used, such as the Cohn Scale popularized by Mike Cohn for use in Story Points. For portfolio management purposes, we recommend a simple set: 1, 2, 3, 5, 8, 13, 20. Having privately decided on an estimate, the members of the estimating group each share their estimates. The resulting reconciliation of individuals’ estimates, based on sharing of individual knowledge and justifications, is the true value of the process because all the assumptions upon which the individual estimates were made are made explicit and accepted or rejected by the group. The process continues with consideration of the next item to be estimated. It should be noted that at this point we have not specified what attribute of the items the group is estimating nor what units they are using. The technique can be applied to any attribute, and there are no units because the estimates are relative.
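As a rough sketch of how such relative estimates can be handled mechanically, the snippet below snaps raw values onto the recommended scale and applies one possible stopping rule (all votes within adjacent scale steps). Both the rule and the function names are our own illustrative choices, not part of Planning Poker proper; in practice the reconciliation is a conversation, not a calculation.

```python
SCALE = [1, 2, 3, 5, 8, 13, 20]  # the simple set recommended in the text

def nearest_on_scale(value):
    """Snap a raw estimate onto the agreed scale."""
    return min(SCALE, key=lambda s: abs(s - value))

def converged(votes):
    """One possible stopping rule: all votes within adjacent scale steps."""
    idx = sorted(SCALE.index(v) for v in votes)
    return idx[-1] - idx[0] <= 1

print(nearest_on_scale(4))    # 3 (ties between 3 and 5 resolve to the first)
print(converged([3, 5, 5]))   # True  - adjacent steps, accept
print(converged([2, 13, 3]))  # False - the outlier must explain their assumptions
```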

All projects, epics, stories, and tasks have a Cost of Delay. The Cost of Delay is the hourly, daily or monthly cost associated with NOT starting the project. For our prioritization purposes, we are interested in the cost of delay if we start Project A while setting aside Project B, i.e. the Cost of Delay of Project B. Consider two software projects, A and B, to be carried out by the same team over 60 days and similar in every way except that project A requires an external specialist in Oracle databases who can be hired at $100 per hour and project B requires an external specialist in Mumps databases who can be hired at $150 per hour. Both specialists are available and must be hired now. The team can only do one project at a time. Which should they do first? Well, they should do the project with the highest cost of delay first.

If project A is deferred, it has a cost of delay of 60 days * 8 hours * $100 = $48,000. If project B is deferred, it has a cost of delay of 60 days * 8 hours * $150 = $72,000. Project B has the highest cost of delay so it should be started first. For cost of delay in this example, we were able to use explicit financial values. More often, this sort of explicit financial information is not available or is, at best, fuzzy. So how would this have looked from a Planning Poker perspective? Probably, we would have been clear that project A had a smaller cost of delay than project B because Oracle databases are much more common than Mumps and therefore the supply of Oracle experts would probably be greater and their costs lower. We would assign project A an estimated Cost of Delay of ‘1’. The discussion around project B would revolve around how much more expensive Mumps experts are than Oracle experts. Probably, we would have assigned an estimated Cost of Delay of ‘2’ (although in similar circumstances we have seen groups reassign project A an estimate of ‘2’ so they can assign project B an estimate of ‘3’ because they don’t believe B is twice the size of A!). The key lesson here is we are seeking to prioritize; so the absolute dollars are not as important as their relative size in the set under consideration for prioritization.
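The arithmetic in this example is simple enough to verify directly; a minimal sketch, using the same figures as the text:

```python
# Cost of delay of the deferred project = delay in days * 8 hours * specialist rate.
def cost_of_delay(delay_days, hourly_rate, hours_per_day=8):
    return delay_days * hours_per_day * hourly_rate

# Both projects run 60 days, so deferring either delays it by 60 days.
cod_a = cost_of_delay(60, 100)  # Oracle specialist at $100/hr
cod_b = cost_of_delay(60, 150)  # Mumps specialist at $150/hr
print(cod_a, cod_b)  # 48000 72000 -> project B has the higher cost of delay
```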

In the above example, we were careful to use the same duration for both projects, i.e. 60 days. What if the durations had been different? Let’s say that project A, the Oracle project, had a duration of just 10 days. The cost of delay for project A would stay the same at $48,000 because deferring it behind project B still delays it by the 60 days of project B. However, the cost of delay for project B is now 10 days * 8 hours * $150 = $12,000 because it only waits for the 10 days of project A. In this case, project A has the highest cost of delay and should be started first. Clearly, duration has an impact on cost of delay and must be considered in prioritization decisions. Hence, we use Weighted Shortest Job First, where:

Weighted Shortest Job First = Cost of Delay / Duration

Now, we prioritize highest those projects with the highest WSJF. Typically, we do not use Planning Poker to estimate relative WSJF. Instead, we use Planning Poker to estimate a relative Cost of Delay and a relative Duration for each project and then apply the formula for WSJF. In SAFe, it is recommended to break down Cost of Delay into three constituent parts which are estimated using Planning Poker and then summed to give the total relative Cost of Delay:

Cost of Delay = User or Business Value + Time Criticality + Risk Reduction or Opportunity Enablement Value

Since the same Cohn Scale numbers are used, this might effectively weight Cost of Delay at three times more than Duration in the WSJF formula but, in our experience, one of the three sub-components of Cost of Delay usually dominates in a given business case so the potential distortion is rarely realized in practice.
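Putting the pieces together, a hypothetical WSJF ranking might look like the sketch below. The epic names and relative estimates are invented; each component would in practice come from a Planning Poker session on the Cohn-style scale described above.

```python
# SAFe-style WSJF: Cost of Delay is the sum of three relative estimates,
# divided by a relative duration estimate.
def wsjf(business_value, time_criticality, risk_opportunity, duration):
    cost_of_delay = business_value + time_criticality + risk_opportunity
    return cost_of_delay / duration

projects = {
    "epic-A": wsjf(8, 3, 2, 5),   # CoD 13 over duration 5 -> 2.6
    "epic-B": wsjf(5, 8, 1, 2),   # CoD 14 over duration 2 -> 7.0
    "epic-C": wsjf(13, 1, 1, 8),  # CoD 15 over duration 8 -> 1.875
}
ranked = sorted(projects, key=projects.get, reverse=True)  # highest WSJF first
print(ranked)  # ['epic-B', 'epic-A', 'epic-C']
```

Note how epic-B wins despite the smallest raw Cost of Delay sum being close to the others: its short duration dominates, which is exactly the "shortest job first" weighting the metric is named for.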

SAFe Portfolio Layer

In a pull organization, informed decisions about the prioritization of the items on the portfolio backlog are vital and add to the relevance and workload of traditional portfolio management teams. The activities associated with preparing items for the portfolio backlog justify the introduction of a software portfolio management team in those organizations that do not already have one. Hence, in SAFe, there is a Program Portfolio Management team at the Portfolio level.

The portfolio management team needs an algorithm for deciding which ideas are the best candidates for the portfolio backlog, which should include a ranking based on the current strategic priorities of the organization. At the Portfolio level, SAFe includes the concept of “Strategic Themes,” derived from the Enterprise goals, to address this need.

In addition to driving prioritization decisions in the portfolio management kanban system, strategic themes should also drive the prioritization of budget allocation at the portfolio level. For example, if a bank decides that it wants or needs to build its business banking capabilities as a priority then that becomes a strategic theme. When the time comes to allocate annual budgets to software development value streams, it could be appropriate to allocate more of the budget to business banking (more software development teams) at the expense of, say, consumer banking (fewer software development teams).

SAFe Value Stream Layer

The organizational structure in large enterprises most often consists of business units organized around functionality (e.g. HR, Design, IT, Manufacturing, etc.), business units organized around products and services (e.g. Ford trucks division, Ford cars division, etc.), or business units organized around customers/geographies (e.g. Consumer Division, B2B Division, North America, Europe, etc.). In many, if not most, large organizations, some combination of all of these is in place.

SAFe applies lean principles by seeking to orient and organize software development teams around the stream of business value “from concept to cash.” Clearly, this “concept to cash” model is most easily understood in a for-profit context but, restated as “concept to value,” it can work equally well for not-for-profits. Also, this means that sometimes value streams are defined by individual business units in large organizations, if the business units are organized around products/services or customers. More often, SAFe value streams involve parts of several business units.

SAFe Program Layer

The next layer below the Value Stream layer in the end-to-end business value flow (and also the layer below Portfolio for smaller businesses or smaller SAFe implementations) is the Program layer, which is also sometimes thought of as the Product layer. Again, the flow of business value is via epics being pulled into a Program Backlog, where they may be decomposed into smaller units and organized in a kanban system in which WSJF is reviewed and updated.

SAFe Team Layer

From the Program Backlog, in Agile software development groups, epics are decomposed into stories at the Team level and pulled into the sprint or iteration backlogs of individual teams. This is a key step because at this point, teams control the prioritization of work flow. How the teams do the prioritization is important for the flow of business value and we have suggested some approaches in our work on value visualization (see Sources).

We have reached the point where requirements are turned into working code, but we have not yet delivered any business value. In SAFe, integrated, working code is a deliverable of each two-week iteration. The deliverables from these two-week iterations are aggregated and demonstrated as working code in Program Increments, which may contain 4-6 iterations. Developing and delivering working code in structured time boxes like this is called “Develop on Cadence” in SAFe. Still, we have not delivered business value into the hands of the users and the business. In SAFe, this last step is decoupled from the delivery of working code. A release management team prioritizes the working code that best suits the immediate needs of the business and releases it into production, usually with the assistance of a DevOps team.

End-to-End Business Value

Finally, we have delivered business value and the end-to-end system is complete. But let’s return to the question that opened this report: How do we know how much business value we have delivered? Does this represent the maximum that we could have delivered?

In SAFe, we have applied prioritization by WSJF at each level except the team level. We could have applied other business value metrics and prioritized using them but, by using WSJF, at least we can be sure that we have a basis of prioritization by economic value. To ensure that this is the maximum that we could have delivered, we need metrics – the WSJF values are a good start – and monitoring. We also need a culture of lean thinking and continuous improvement.


In this report, we have introduced some of the main challenges that organizations will face when applying executive-level strategic decision-making to a business value-driven software development group. Primarily, this report has been about a new approach to portfolio management. Lean thinking and scaled Agile techniques will help, even if the predominant methodology is Waterfall, and there are techniques to help with prioritizing work by business value.


1. Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Pearson Education, Inc. 2011.
2. Excerpt from Mikko Parkkola’s master’s thesis: Product Management and Product Owner Role in Large Scale Agile Software Development.
3. Poppendieck, Mary and Tom. Implementing Lean Software Development. Pearson Education, Inc. 2007.
4. Reinertsen, Donald. The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing. 2009.
5. http://www.rand.org/topics/delphi-method.html
6. http://store.mountaingoatsoftware.com/pages/planning-poker-in-detail
7. http://www.scaledagileframework.com/wsjf/
8. http://www.valuevisualization.com

Written by Default at 05:00

The Origins of Function Point Measures

Trusted Advisor

Note of Thanks

We’re pleased to share this month’s Trusted Advisor, which was written by Capers Jones. Capers is a well-known author and speaker on topics related to software estimation. He is the co-founder, Vice President, and Chief Technology Officer of Namcook Analytics LLC, which builds patent-pending advanced risk, quality, and cost estimation tools. Many thanks to Capers for participating in Trusted Advisor and allowing us to publish his report!

Scope of Report

The 30th anniversary of the International Function Point Users Group (IFPUG) is approaching. As such, this report addresses a brief history of the origin of function points. The author, Capers Jones, was working at IBM in the 1960s and 1970s, observing the origins of several IBM technologies, such as inspections, parametric estimation tools, and function point metrics. This report discusses the origins and evolution of function point metrics.


In the 1960s and 1970s, IBM was developing new programming languages, such as APL, PL/I, PL/S, etc. IBM executives wanted to attract customers to these new languages by showing clients higher productivity rates. As it happens, the compilers for various languages were identical in scope and had the same features. Some older compilers were coded in assembly language, while newer compilers were coded in PL/S, which was a new IBM language for systems software. When we measured productivity of assembly-language compilers versus PL/S compilers using “lines of code (LOC),” we found that even though PL/S took less effort, LOC per month favored assembly language. This problem is easiest to see when comparing products that are almost identical but merely coded in different languages. Compilers, of course, are very similar. Other products, besides compilers, that are close enough in feature sets to have their productivity negatively impacted by LOC metrics are PBX switches, ATM banking controls, insurance claims handling, and sorts.

A Better Metric

To show the value of higher-level languages, the first IBM approach was to convert high-level languages into “equivalent assembly language.” In other words, we measured productivity against a synthetic size based on assembly language instead of against true LOC size in the actual higher level languages. This method was used by IBM from around 1968 through 1972.

An IBM Vice President, Ted Climis, said that IBM was investing a lot of money into new and better programming languages. Neither he nor clients could understand why we had to use the old assembly language as the metric to show productivity gains for new languages. This was counter-productive to the IBM strategy of moving customers to better programming languages. He wanted a better metric that was language independent and could be used to show the value of all IBM high-level languages. This led to the IBM investment in function point metrics and to the creation of a function-point development team under Al Albrecht at IBM White Plains. 

The Origin of Function Points

Function point metrics were developed by the IBM team by around 1975 and used internally and successfully. In 1978, IBM placed function point metrics in the public domain and announced them via a technical paper given by Al Albrecht at a joint IBM/SHARE/Guide conference in Monterey, California. Table 1 shows the underlying reason for the IBM function point invention based on the early comparison of assembly language and PL/S for IBM compilers. Table 1 shows productivity in four separate flavors:

1. Actual lines of code in the true languages.
2. Productivity based on “equivalent assembly code.”
3. Productivity based on “function points per month.”
4. Productivity based on “work hours per function point.”

Function Point Evolution

The creation and evolution of function point metrics was based on a need to show IBM clients the value of IBM’s emerging family of high-level programming languages, such as PL/I, APL, and others. This is still a valuable use of function points, since there are more than 2,500 programming languages in 2016 and new languages are being created at a rate of more than one per month.

Another advantage of function point metrics vis a vis LOC metrics is that function points can measure the productivity of non-coding tasks, such as creation of requirements and design documents. In fact, function points can measure all software activities, while LOC can only measure coding.

Up until the explosion of higher-level programming languages occurred, assembly language was the only language used for systems software (the author programmed in assembly for several years when starting out as a young programmer).

With only one programming language, LOC metrics worked reasonably well. It was only when higher-level programming languages appeared that the LOC problems became apparent. It was soon realized that the essential problem with the LOC metric is really nothing more than a basic issue of manufacturing economics that had been understood by other industries for over 200 years.

This is a fundamental law of manufacturing economics: “When a manufacturing process has a high percentage of fixed costs and there is a decline in the number of units produced, the cost per unit will go up.”

The software non-coding work of requirements, design, and documentation acts like fixed costs. When there is a move from a low-level language, such as assembly, to a higher-level language, such as PL/S, the cost-per-unit will go up, assuming that LOC is the “unit” selected for measuring the product. This is because of the fixed costs of the non-code work and the reduction of code “units” for higher-level programming languages.
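A toy calculation makes the economics concrete. The dollar figures below are invented, but they follow the pattern in the text: the non-code (fixed) cost is unchanged, while the higher-level language needs less coding effort and produces far fewer LOC "units", so the cheaper project looks worse per LOC.

```python
def cost_per_loc(fixed_noncode_cost, coding_cost, loc):
    """Total project cost divided by the LOC 'units' produced."""
    return (fixed_noncode_cost + coding_cost) / loc

# Same application, two languages (all numbers hypothetical):
assembly   = cost_per_loc(fixed_noncode_cost=50_000, coding_cost=100_000, loc=50_000)
high_level = cost_per_loc(fixed_noncode_cost=50_000, coding_cost=40_000,  loc=10_000)
print(assembly, high_level)  # 3.0 9.0
```

The high-level project costs $90,000 against $150,000 for assembly, yet its cost per LOC is triple, which is exactly the distortion that motivated a size measure independent of code.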

Function point metrics are not based on code at all, but are an abstract metric that defines the essence of the features that the software provides to users. This means that applications with the same feature sets will be the same size in terms of function points no matter what languages they are coded in. Productivity and quality can go up and down, of course, but they change in response to team skills.

The Expansion of Function Points

Function points were released by IBM in 1978, and other companies began to use them; soon the International Function Point Users Group (IFPUG) was formed in Canada.

Today, in 2016, there are hundreds of thousands of function point users and hundreds of thousands of benchmarks based on function points. There are also several other varieties of function points, such as COSMIC, FISMA, NESMA, etc.

Overall, function points have proven to be a successful metric and are now widely used for productivity studies, quality studies, and economic analysis of software trends. Function point metrics are supported by parametric estimation tools and also by benchmark studies. There are several flavors of automatic function point tools. There are function point associations in most industrialized countries. There are also ISO standards for functional size measurement.

(There was never an ISO standard for code counting, and counting methods vary widely from company to company and project to project. In a benchmark study performed for a “LOC” shop, we found four sets of counting rules for LOC that varied by over 500%).

Table 2 shows countries with increasing function point usage circa 2016, and it also shows the countries where function point metrics are now required for government software projects.

Expanding Use of Function Points

Several other countries will probably also mandate function points for government software contracts by 2017. Eventually most countries will do this. In retrospect, function point metrics have proven to be a powerful tool for software economic and quality analysis.

Trusted Advisor Guest Author

Again, our sincere thanks are extended to our guest author, Capers Jones.



Are Function Points Still Relevant?

Let's start with a quick overview of Function Point Analysis:

Function Point Analysis is a technique for measuring the functionality that is meaningful to a user, independent of technology. It was invented by Allan Albrecht of IBM in 1979. Several standards exist in the industry, but the International Function Point Users Group (IFPUG) standard is the most widely used. IFPUG produces the Function Point Counting Practices Manual, used by Certified Function Point Specialists (CFPS) to conduct function point counts. The IFPUG method is one of the ISO standards for software sizing (ISO/IEC 20926:2009).

Function Point Analysis considers five major components of an application or project: External Inputs, External Outputs, External Inquiries, Internal Logical Files and External Interface Files. The analyst evaluates the functional complexity of each component and assigns an unadjusted function point value. The analyst can also analyze the application against 14 general system characteristics to further refine the sizing and determine a final adjusted function point count.
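As a rough illustration of the arithmetic only (a real count requires a trained analyst applying the IFPUG Counting Practices Manual), the unadjusted count weights each of the five component types by complexity, and the 14 general system characteristics feed a Value Adjustment Factor:

```python
# Sketch of an IFPUG-style function point calculation (illustrative only;
# real counts require a certified analyst applying the counting rules).

# Low/Average/High weights per component type, per the IFPUG manual.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4, "high": 6},   # External Inputs
    "EO":  {"low": 4, "avg": 5, "high": 7},   # External Outputs
    "EQ":  {"low": 3, "avg": 4, "high": 6},   # External Inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15}, # Internal Logical Files
    "EIF": {"low": 5, "avg": 7, "high": 10},  # External Interface Files
}

def unadjusted_fp(counts):
    """counts maps component type -> {complexity: number of components}."""
    return sum(WEIGHTS[ctype][cplx] * n
               for ctype, by_cplx in counts.items()
               for cplx, n in by_cplx.items())

def adjusted_fp(ufp, gsc_degrees):
    """Apply the Value Adjustment Factor from the 14 general system
    characteristics, each rated 0-5: VAF = 0.65 + 0.01 * sum(degrees)."""
    vaf = 0.65 + 0.01 * sum(gsc_degrees)
    return ufp * vaf

# Hypothetical application profile, purely for demonstration.
counts = {
    "EI":  {"low": 5, "avg": 3},
    "EO":  {"avg": 4},
    "EQ":  {"low": 2},
    "ILF": {"avg": 2},
    "EIF": {"low": 1},
}
ufp = unadjusted_fp(counts)        # 15 + 12 + 20 + 6 + 20 + 5 = 78
afp = adjusted_fp(ufp, [3] * 14)   # VAF = 0.65 + 0.42 = 1.07 -> 83.46
```

The judgment in a real count lies in classifying each component's complexity and rating the characteristics, not in the arithmetic itself.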


“The effective use of function points centers around three primary functions: estimation, benchmarking and identifying service-level measures.” [i] 

More and more organizations are adopting some form of Agile framework for application development and enhancement.  The most recent VersionOne State of Agile Survey reveals that 94% of organizations practice Agile.[ii]  Hot technologies such as big data, analytics, cloud computing, portlets and APIs are becoming ever more popular in the industry.

Let's explore each of the three primary functions of function points and their relevance in today's Agile-dominated IT world and with new technologies.


Estimation:

Whether it is a move from traditional waterfall to Agile or from mainstream technologies to new innovations, project teams still have a responsibility to the business to deliver on time and within budget.  Estimates of the overall project spend and duration are critical for financial and business planning.

Parametric estimation is the use of statistical models, along with parameters that describe a project, to derive cost and duration estimates.  These models use historical data to make predictions.  The key parameters necessary to describe a project are size, complexity and team experience.  Many other parameters can be used to further calibrate the estimate and increase its accuracy, including whether the project is using an Agile framework.  Several tools can be used to perform parametric estimation, including SEER, SLIM and COCOMO.
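The core of most parametric models is a power-law relationship between size and effort, calibrated from historical data. The coefficients below are purely illustrative assumptions, not values taken from SEER, SLIM or COCOMO:

```python
# Illustrative power-law parametric model: effort grows super-linearly
# with size. Coefficients here are made up for demonstration; real tools
# calibrate them from historical project data.

def estimate_effort(size_fp, a=0.5, b=1.1, experience_factor=1.0):
    """Return estimated effort in person-months for a project of
    size_fp function points. experience_factor < 1.0 models a
    seasoned team; > 1.0 an inexperienced one."""
    return a * (size_fp ** b) * experience_factor

# A hypothetical 350 FP project staffed by an experienced team.
effort = estimate_effort(350, experience_factor=0.9)
```

With an exponent greater than 1.0, doubling project size more than doubles estimated effort, which matches the diseconomies of scale these models are built to capture.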

Project size can be described in several ways, with software lines of code (SLOC) and function points being the most common.  SLOC has some inherent problems: inefficient coding produces more lines of code, and determining the SLOC size of a project before it is coded is itself an estimate.  That's where function point analysis provides real value as a sizing tool.  Even in software developed using the latest innovations in technology, the five components of function point analysis still exist, so function point counting remains a valuable way to measure software size.  Because a function point count can be done from a requirements document or user stories, and the expected variance between counts by two certified function point analysts is between 5% and 10%, an accurate and consistent measure of project size can be derived.  And because function point analysis is based on the user's view and is independent of technology, it works just as well as technology evolves.

The function point size, along with the other parameters described above, is then used by the parametric estimation tool to provide a range of cost and duration estimates for the entire project within a cone of uncertainty.  This information can be used for financial budgeting and business planning.

Projects in an Agile framework can create estimates for the individual user stories with techniques like planning poker, t-shirt sizing or relative mass valuation.  These estimates are used for sprint planning and are refined through the backlog grooming process.  As the team measures and refines its velocity, the estimates are further updated.  Ultimately, all of these estimates should converge on the overall estimate created using parametric estimation.
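The convergence check is simple arithmetic: a stabilized velocity projects the sprints remaining, which can then be compared against the parametric duration estimate. A minimal sketch, with hypothetical numbers:

```python
# Sketch: as velocity stabilizes, sprint-level estimates project a
# completion horizon that should converge with the parametric estimate.

def sprints_remaining(backlog_points, recent_velocities):
    """Project remaining sprints from the average of recent velocities."""
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    return backlog_points / avg_velocity

# 240 story points remaining; last three sprint velocities were 28, 32, 30.
remaining = sprints_remaining(240, [28, 32, 30])  # 8.0 sprints
```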

In this way, regardless of the technologies used for development, overall project estimates from parametric estimation and Agile estimation techniques can coexist and complement each other in support of the business's need for financial and business planning.


Benchmarking:

Whatever technology or development framework is being used, constant improvement is essential to an organization's ability to survive and thrive in a competitive environment.  Baselining an organization's performance relative to productivity, quality and timeliness is the starting point for benchmarking and the first step toward an IT organization's delivery improvement.

Function points are a common currency for metrics equations.  They provide a consistent measure of the functionality delivered, allowing benchmark comparison of performance over time, of one technology against another, internally across various departments or vendors, and externally against the industry in which a company competes.  Benchmarking is also used in outsourcing governance models as a way to ensure a vendor is providing value with respect to contractual commitments and competitors in the marketplace.

A large amount of function point based industry benchmark data is available from many suppliers, including The Gartner Group, Rubin Systems Inc., META Group, Software Productivity Research, the International Software Benchmarking Standards Group (ISBSG) and DCG Software Value.

To execute a benchmark, data is collected for the target projects, including function point size, effort and duration.  The data is analyzed and functional metrics are created and baselined for the target projects.  Quantitative comparison of these baselines is done against suitable industry benchmarks.  Qualitative assessment is done to further analyze the target projects and determine contributing factors to performance differences with the benchmark.
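For instance, a baseline productivity figure can be compared quantitatively against a benchmark value. The numbers below are placeholders, not real industry data:

```python
# Sketch of a baseline-vs-benchmark comparison. Productivity is
# expressed as function points delivered per person-month; the
# benchmark value is a made-up placeholder.

def productivity(fp_delivered, person_months):
    return fp_delivered / person_months

def benchmark_gap(baseline, benchmark):
    """Percentage gap; positive means the baseline outperforms the benchmark."""
    return (baseline - benchmark) / benchmark * 100.0

base = productivity(fp_delivered=900, person_months=100)  # 9.0 FP/PM
gap = benchmark_gap(base, benchmark=12.0)                 # -25.0%
```

A gap like this quantifies the performance difference; the qualitative assessment then explains it.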

Regardless of the development framework or technology used, function points are the basis for baselining and benchmarking an organization's performance relative to the industry, enabling the improvements that move it toward best-in-class performance.

Service-Level Measures:

Service-level metrics are most commonly used in outsourcing governance to measure the performance of the outsourcer to ensure contract compliance.  With IT’s increased alignment with the business, service-level metrics are increasingly used internally as well.  Delivery framework and technology don’t change the need for this kind of oversight. 

Outsourcing is typically done at the individual project or application level, for application maintenance, or for the entire application development and maintenance (ADM) environment.  Let's examine each of these outsourcing models and how function point based service-level metrics can be used to monitor them.

Individual project or application:

In the case of individual project or application outsourcing, service-level definition is based on the provider's responsibility, the standards required by the customer and how success is defined.  Function point analysis has a role in all three of these areas.

Definition of the outsourcer’s responsibilities helps identify the hand-off points.  Function point sizing at requirements hand-off provides an initial baseline of the project size for all metrics to be built upon.  As requirements change throughout the project the baseline can be updated through change control. 

The standards and development practices lead to establishment of compliance measures and targets for the outsourcer to meet.  Function point sizing can be used here as the basis of measures like productivity.

Success can be measured with function point based measures of delivery rate, duration and quality against contractual requirements or internal standards.
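A sketch of how such function point based service levels might be checked against contractual targets (the target values here are invented for illustration):

```python
# Sketch of function point based service-level checks against
# contractual targets. Targets and actuals are hypothetical.

def check_sla(actual, target, higher_is_better=True):
    """Return True when the measured value meets the contractual target."""
    return actual >= target if higher_is_better else actual <= target

delivery_rate = 850 / 95     # FP delivered per person-month of effort
defect_density = 42 / 850    # delivered defects per FP

ok_rate = check_sla(delivery_rate, target=8.0)            # met
ok_quality = check_sla(defect_density, target=0.06,
                       higher_is_better=False)            # met
```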

Application maintenance:

Measurement of maintenance in an outsourcing arrangement covers customer expectations, response time, defect repair, portfolio size, application expertise and other factors.  Let's explore those that involve function point analysis.

Customer expectations can be thought of as the size of the portfolio being maintained, as well as the cost of maintaining it.  The portfolio size can be measured with function points to establish the maintenance baseline and its growth over time can be monitored. 

Support efficiency can be measured as the size of the support staff needed to maintain the maintenance baseline.  This can also be measured over time to show trends.
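This measure, function points maintained per support person, is sometimes called a maintenance assignment scope, and tracking it over time shows the efficiency trend. A sketch with hypothetical figures:

```python
# Sketch: support efficiency as function points maintained per
# support person, tracked over time. All figures are hypothetical.

def assignment_scope(portfolio_fp, support_staff):
    return portfolio_fp / support_staff

# Portfolio grows from 12,000 to 13,500 FP while staffing holds at 10.
year1 = assignment_scope(12000, 10)  # 1200.0 FP per person
year2 = assignment_scope(13500, 10)  # 1350.0 FP per person
```

A rising scope with constant staff suggests improving support efficiency (or mounting risk, if quality measures decline alongside it).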

Entire ADM environment:

The measurement needs for ADM outsourcing are different from those of the previous two scenarios.  A multi-year outsourcing arrangement requires more complex measures to ensure the services provided by the outsourcer meet contractual commitments.  To do this, more complex metrics dashboards are often built to allow a wide range of measurements to be analyzed.

To build a metrics dashboard that provides the level of monitoring required, many factors must be considered, including contractual requirements, end-customer expectations and organizational standards and goals.

The table below describes metrics derived from performance considerations and business drivers. [iii]


Many of these metrics are based on functional size, so function point analysis can be used to build the measurements.

For outsourcing and internal IT alike, effective measurement is critical to monitoring performance and improvement, and it should be linked to the organization's goals and objectives.  Metrics based on functional size are key to a service-level measurement program, regardless of the delivery framework or technology used.


We have seen above that function point analysis is versatile and adaptable to changing technology and processes.  All technologies still exhibit the five basic components of function point analysis, and organizations are still asking "When will it be done?", "How much will it cost?" and "What will I get?".  It is for these reasons that function point analysis remains relevant in today's IT world.


  1. Garmus, D., Herron, D., Function Point Analysis: Measurement Practices for Successful Projects, Addison-Wesley, 2001
  2. 9th Annual State of Agile Survey, VersionOne Inc., 2015
  3. IFPUG Metrics View, February 2016, International Function Point Users Group

Fourth Volume of Trusted Advisor Anthology Now Available


We're excited to announce the release of the fourth Trusted Advisor anthology!

As you know, every month we publish a Trusted Advisor report. We research and draft this report based on IT-related questions that are submitted by members of Trusted Advisor. This helps us to keep up with IT trends and issues plaguing those in the field - and it means you can spend your time working instead of looking for the answers to your problems. In essence, we do the research for you!

At the end of the year we package the reports, 12 in total, into an anthology, making it easy to have the research available at your fingertips.

The fourth edition of the book features reports written throughout 2015, such as:

  • Why Should I Have More Than One Technique for Retrospectives?
  • Our Software is Full of Bugs. What Can We Do About It?
  • Story Points or Function Points or Both?

Buy the Book

All the reports are individually available to download from our website. But, if you're interested in the full anthology of reports, it can be purchased on Amazon.

Join Trusted Advisor

Do you have a question you'd like to submit to our research team? Membership to Trusted Advisor is open to all IT professionals at no cost. Registration details, and more information about Trusted Advisor, are available here.


How Do I Calculate Estimates for Budget Deliverables on Agile Projects this Year?

Scope of this Report

This report discusses the tension between the organizational need for budgetary data on planned Agile deliverables and traditional project cost accounting. Agile lean-budgeting best practices at the portfolio level are highlighted to illuminate the importance of estimating and budgeting as Agile scales in an organization. The Scaled Agile Framework (SAFe) portfolio and value stream levels, as presented in SAFe 4.0, provide the backdrop for this discussion.

Budgetary Needs of the Organization at Scale

Small to medium sized businesses with 100 or fewer developers organized to develop, enhance or maintain a small group of software products can account for the labor and material needed with straightforward software cost accounting methods. This is true whether they are using traditional waterfall methods or Agile methods such as Scrum because, typically, such businesses are small enough to ignore or work around the differences between the project perspective of waterfall and the product perspective of Agile.

Larger organizations with hundreds to thousands of developers, organized to develop and maintain a portfolio of software products, face greater challenges in software cost accounting and in budgetary planning or estimating early in the planning cycle for a given year. Estimating and budgeting early presents challenges to Agile credibility as well as governance.

Software leaders in the trenches deal with day-to-day change, while executives and senior management seek predictability and certainty. This contributes to preliminary rough order of magnitude estimates becoming memorialized as commitments instead of work-in-process numbers meant to drive conversations and decision-making. The memorializing is driven by the financial reporting needs of the executive suite rather than the workflow-optimizing needs of the development activity.

Financial Reporting Tensions Rise as Agile Scales

Public accounting standards guide CPAs and CFOs shaping financial information to quantify business-created value. That value can be in the form of a hard good, such as a car, or a soft good, such as software or music. You can consult the Financial Accounting Standards Board (FASB) repository of standards1 and statements on software cost accounting for the specifics but, in general, you can either expense or capitalize your costs when developing software. Choosing when to do so is left up to the organization (and its reporting needs) as long as it can defend the decision per the standards and it is below a certain size. Consequently, initial budgetary estimates are important inputs into the process because they allow the reporting professionals to partition or plan the expense vs. capitalize decision based on schedule.

In the traditional waterfall world of software development, it is conceptually easier to decide when to expense and when to capitalize as the diagram below illustrates.

Figure 1: Waterfall Project Stages


A waterfall software project has a distinct beginning and end, with clear 'phases' identifying activity versus value created. Code, integration and test produce actual 'capital' value. Requirements, design and maintenance are 'expenses' required to produce that 'capital' value. Also, substantial waterfall projects run longer and involve more resources, both labor and material, and the longer term makes it seem "easier" to estimate and schedule reporting events.

Agile projects, most using the Scrum method, are inherently different in structure and execution, employing an incremental value creation model as illustrated in Figures 2 and 3 below. This is a cyclical model where short bursts of activity create immediately available value (a shippable product increment). Multiple cycles, iterations or sprints are strung together over time to create releases of software product. Estimates can be made as to how much time will be spent designing, coding, testing, etc., in Agile, and the work of people like Evan Leybourne suggests that it is not difficult to deal with the CAPEX/OPEX distinction. However, auditors like to see timesheet records to support estimates of the amount of time spent, say, designing versus coding, and splitting time recording like this for Agile teams within a sprint is hard to the point of being impossible. DCG clients use approaches similar to those described by Leybourne to deal with this issue with little friction, but only after they have convinced their auditors.

Figure 2: Agile (Scrum) Sprint Cycle


Figure 3: Multiple Sprints to Produce a Release


Estimating within an Agile cycle is a real-time, small-bore activity focused on small pieces of potential value; in the Agile context it is more often called "planning." These are not estimates that cover the beginning, middle and end of an Agile initiative (or project, if you prefer that term). These real-time estimates, done with relative methods and unanchored to any financial or engineering reality, are unsuitable for aggregation into a budget. You cannot aggregate bottom-up estimates into a budget in Agile.

However, top-down needs persist for Agile budgets and early estimates to support not only prioritization but also the expense vs. capitalize allocation process. As Agile scales, so do the reporting demands. Furthermore, these early estimates or budgets must be as reliable and consistent as possible, be based on a repeatable, verifiable process to increase confidence, and accurately represent the value creation activity. If you cannot aggregate Agile estimates bottom-up, what about top-down?

Epics, Story Points and Releases

Epics are the large, high-level stories that can usually be broken down into many smaller stories. Epics drive Agile development, from the top, to create business value in the form of shippable product. Epics form the contents of portfolio and product backlogs rather than sprint backlogs, and as such they are the level of stories most familiar to executive and technical leadership. Despite being high-level, they are brief, concise and unelaborated, just like normal stories, and hence almost impervious to early estimation due to the relative lack of information.

The Agile method, at the team level, is designed to decompose Epics into smaller units such as user stories. These user stories are estimated using relative methods such as story points, t-shirt sizing and other techniques. The Agile team continuously elaborates these user stories, in each sprint or iteration, leveraging end-user involvement and team dynamics to create incremental value. Discovery of requirements is an intimate process conducted by the team, not the organization.

At some point in the Agile development, a Release is designated ready and complete for shipment to the marketplace. All of this costs money, so how does the organization decide to fund one or more Agile teams when initial Epics are hard to estimate?

Portfolio Level Value Creation and Reporting

The term portfolio is common in financial and investment conversation, while program is common in planning and organization. For example, Federal contractors building large military systems use the term program to cover a planned series of events, projects and other activities, as in the F-22 Raptor Stealth Fighter Program. When the methodologists at Scaled Agile, Inc. (SAI) constructed their Scaled Agile Framework (SAFe) approach, they recognized that "Program" is a valid term to organize multiple subordinate activities (within multiple Agile teams) and applied it to their lexicon2. While there are several different established approaches to scaling Agile, the SAI methodologists seem to have paid the most attention to high-level budget challenges, so we will spend some time on their approach in this report.

Using “Portfolios” to group multiple Programs seemed a common sense next step because of the financial implications rising from larger scale activity in an Enterprise. Agile is driven at the team level to produce software that has value but the Enterprise is driven by financial considerations to fund, extract that value and report accurately along the way.

In the SAFe 4.0 Big Picture3, which illustrates the SAFe framework, the Portfolio level is well articulated and includes Strategic Themes and the constructs of Value Streams that are budgeted or funded. Epics are represented as children of Strategic Themes that guide Value Streams toward the larger aims of the Portfolio.

Consequently, early budget estimates, and good ones at that, are important to the Enterprise in order to decide on funding priorities and trigger strategic and tactical initiatives. But if Epics at the top are not suitable for early estimation and you cannot aggregate bottom-up, how do you estimate or budget at all?

Estimating Early Means Uncertainty

Let's assume an existing organization, say a large healthcare insurer, has recently acquired a smaller, complementary company. Both companies' systems have to work together in the first few years to give the combined company time to consolidate, merge or sunset the systems. Both companies employ Agile methods, so it is decided to launch a strategic Agile initiative to create application program interfaces (APIs) that will make the two systems function as one. This has great value for the organization.

Let's assume for our example that an integration working group made up of representatives of the two companies presents a high-level design outlining the, say, thirteen APIs presumed to be needed. Executive management then asks for budget estimates (hardware and software) and a more specific schedule to begin the approval process.

If the organization, through its annual planning process, has already allocated or budgeted a total overall IT spend with some software component, then the question is how much to set aside for this particular initiative, given little information and lots of uncertainty.

Funding Value Streams Not Projects

The 13 APIs, when delivered by the Agile teams, will provide real value to the organization. The total spend to cover all of the teams for the time period needed will be governed by the organization's fiduciary authorities. Depending on the size and scope of the functionality needed, you could conceivably have 13 separate Agile teams, each working on an API. Traditional project cost accounting is challenged by this model.

In the SAFe® 4.0 framework, strategies of Lean-Agile Budgeting5 are described to address these challenges and the tensions discussed above. The takeaway from the strategies is simple: continue the fiduciary authorities' traditional role of overall budgeting and spend reporting while empowering the Agile teams to own content creation using the organizing construct of one or more Value Streams.

The Agile teams, and implicitly the Agile method, are trusted to build the right things on a day-to-day, week-to-week basis within an overall approved budget. The traditional project cost-accounting methods that seek command-and-control assurance are replaced by a dynamic budgeting5 approach within the Value Stream.

Going back to our example, the 13 APIs could have a natural affinity or grouping and drive the association of 3 separate Value Streams as illustrated in Figure 4.

Figure 4: Value Stream Example


Each of these Value Streams would be funded from the annual allocated software spend by the fiduciary authorities, but how big do you make the allocation for API Group 2?

Anchoring Reality to Functionality

At this point the Integration workgroup has only two choices: Estimate by experience or estimate by functionality.

Estimating by experience can work if the right set of circumstances occurs: the estimators involved are experts in the proposed work, there is a rich history of prior work from which analogies can be drawn, re-estimation is done by the same group, and the technology is familiar and stable. When these factors do not exist, risk rises as to the quality of the current estimate and of future estimates.

Estimating by functionality means quantifying the proposed work using industry standard sizing methods and leveraging parametric or model-based estimating4. This approach draws on historical repositories of industry-similar, analogous projects to create estimates that include success vs. failure probability (risk) profiles. The organization is borrowing the past experience of others to help it predict its future. When this method is used, it increases confidence because of the statistical and mathematical nature of the process, and it is suitable for internal adoption as a repeatable and tunable method. Management overhead costs do increase when this method is adopted.
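A minimal sketch of the idea, expressing an early model-based estimate as a range rather than a point value (the cost rate per function point and the spread factors are assumptions for illustration, not industry figures):

```python
# Sketch: a model-based estimate expressed as a range, reflecting the
# cone of uncertainty at initiation. The rate and spread factors are
# hypothetical; real tools derive them from calibrated historical data.

def estimate_range(size_fp, rate_per_fp, low=0.75, high=1.5):
    """Early estimates carry wide bounds; the range narrows as the
    requirements are elaborated and re-estimation occurs."""
    nominal = size_fp * rate_per_fp
    return nominal * low, nominal, nominal * high

# A hypothetical 250 FP value stream at an assumed $3,000 per FP.
lo, nominal, hi = estimate_range(250, rate_per_fp=3000)
# (562500.0, 750000.0, 1125000.0)
```

Reporting the low and high bounds alongside the nominal figure keeps the estimate an input to conversation rather than a memorialized commitment.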

Whatever method an organization chooses, the result should always be considered a starting point and a work-in-process number, not memorialized as a commitment.

Budgeting for Value

The answer to the question, “How do I calculate estimates for Agile budget deliverables this year?” is to define the value (stream) desired and estimate using your method of choice to define the portion of the overall software spend required.
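As a toy illustration of that portioning, assuming each value stream has been sized in function points, the overall software spend can be apportioned proportionally (all names and figures are hypothetical):

```python
# Sketch: allocating an annual software spend across value streams in
# proportion to their estimated functional size. Sizes and the total
# spend are hypothetical.

def allocate(total_spend, stream_sizes_fp):
    total_fp = sum(stream_sizes_fp.values())
    return {name: round(total_spend * fp / total_fp, 2)
            for name, fp in stream_sizes_fp.items()}

budgets = allocate(3_000_000, {
    "API Group 1": 400,   # estimated function points per group
    "API Group 2": 250,
    "API Group 3": 350,
})
# {'API Group 1': 1200000.0, 'API Group 2': 750000.0, 'API Group 3': 1050000.0}
```

These allocations are starting points; the dynamic budgeting described below adjusts them as the teams elaborate content.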

As the budget is spent within this API Group 2 Value Stream's allocation, multiple deliverables (Releases) would be created and their individual allocations dynamically adjusted, as needed, by the team as content (Epics and derived user stories) is elaborated, understood and converted into working software.

Figure 5: Fund Value Streams not Projects or Releases


One or more Releases will be created and shipped, supporting the expense or capitalize decision, and the dynamic budget changes within each release activity update the overall budget, governed by the fiduciary authorities.


This lean-Agile budgeting approach relieves the organization from using traditional, command-and-control project cost accounting methods, which are challenged by the Agile method. It allows the fiduciary authorities to own what they should, the overall software spend divided by value streams, while the content authorities, the Agile teams, own the day-to-day, week-to-week spending. This is a big step away from the past for many organizations, but a good step forward to a more Agile organization.

1. FASB http://www.fasb.org/summary/stsum86.shtml ; http://www.gasb.org/cs/ContentServer?c=Document_C&pagename=FASB%2FDocument_C%2FDocumentPage&cid=1176156442651;
2. Scaled Agile Framework, http://scaledAgileframework.com/glossary/#P
3. SAFe 4.0 Big Picture, http://scaledAgileframework.com/
4. International Society of Parametric Analysts, Parametric Estimating Handbook, http://www.galorath.com/images/uploads/ISPA_PEH_4th_ed_Final.pdf
5. SAFe® 4.0 Lean-Agile Budgeting, http://scaledAgileframework.com/budgets/


Download this report here.


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
