Capability Counts 2016

The CMMI framework has been around for a while now, and its use in the industry persists. The framework's focus on quality improvement through the use of best practices makes it valuable to almost any organization.

While the framework is still in use, the CMMI Institute has expanded its annual conference beyond a singular focus on the framework itself to a broader focus on capability. Branding it as the "Capability Counts" conference makes sense - all organizations want to build and capitalize on their capability, and this includes more than just the implementation of CMMI.

We were excited to attend this year's Capability Counts conference in Annapolis, where the wider scope of the conference lent itself to an interesting agenda of speakers on topics from risk management to product quality measurement - and yes, CMMI.

Tom Cagley, our Vice President of Consulting, also spoke at the conference. His presentation, "Budgeting, Estimation, Planning, #NoEstimates, and the Agile Planning Onion - They ALL Make Sense," discussed the many levels of software estimation, including budgeting, high-level estimation, and task planning. He explained why all of these methods are useful, when they make sense, and in what combination.

You can download the presentation below. More information about our CMMI offerings is available here - and we're already looking forward to next year's conference!

Download

Written by Default at 05:00

A Customized Sizing Model

We work with a lot of clients, and they vary in size, industry, and location. They also, of course, vary in the reason they come to us for help. Sometimes they're in need of training, sometimes they're looking for help in one specific area, and sometimes they need help identifying what their actual problem even is. The common theme between all of our engagements is that our focus is on value: What value can we provide to our clients that will truly impact their organization, beyond even IT?

In a recent engagement, a business came to us with a problem. They were bidding on a Navy contract. The contract required the use of function points. Their experience with sizing was minimal. Could we help?

Yes.

But, we believed that the company needed more than just one simple size for the entire project. The value we provided was in leveraging our experience to build a customized, flexible sizing model to most effectively meet the needs of the client - and for less than the cost of our competitors.

Read the case study to find out more about the engagement.

Download.

Written by Default at 05:00

The Mathematical Value of Function Points

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all." – Gilb's Law (1)

To assess the value of function points (of any variety), it is important to step back and address two questions. The first is "What are function points (in a macro sense)?" and the second is "Why do we measure?"

Function points are a measure of the functional size of software. IFPUG Function Points (there are several non-IFPUG variants) are a measure of the functionality delivered by a project or application. The measure is generated by counting the features and functions of the project or application based on a set of rules; the rules for counting IFPUG Function Points are documented in the IFPUG Counting Practices Manual. Using the published rules, the measure of IFPUG Function Points is a consistent and repeatable proxy for size, and consistency and repeatability increase the usefulness of estimating and measurement.

An analogy for the function point size of a project is the number of square feet of a house when building or renovating. Knowing the number of square feet provides one view of the house, but not other attributes, such as the number of bedrooms. A project function point count is a measure of the functional size of a project, while an application count is a measure of the functional size of the application.

The question of why we measure is more esoteric. The stated reasons for measuring often include:

  • To measure performance,
  • To ensure our processes are efficient,
  • To provide input for managing,
  • To estimate,
  • To pass a CMMI appraisal,
  • To control specific behavior, and
  • To predict the future.

Douglas Hubbard (2) summarizes the myriad reasons for measuring into three basic categories:

1. Measure to satisfy a curiosity.
2. Measure to collect data that has external economic value (selling of data).
3. Measure in order to make a decision.

The final reason, to make a decision, is the crux of why measurement has value in most organizations. The decision drives the value of counting function points. The requirements for making a decision are uncertainty (lack of complete knowledge), risk (a consequence of making the wrong decision), and a decision maker (someone to make the decision).

Uncertainty reflects the fact that more than one outcome of a decision is possible. The measurement of uncertainty is a set of probabilities assigned to those possible outcomes. For example, there are two possibilities for the weather tomorrow: precipitation or no precipitation. The measurement of uncertainty might be expressed as a 60% chance of rain (from which we can infer a 40% chance of no rain). Risk is the uncertainty that a loss or some other "bad thing" will occur. In this case, the risk might be that we intend to go on a picnic if it does not rain and must spend $30 the day before on food that will perish if we can't go.

The measurement of risk quantifies the set of possibilities by combining each outcome's probability of occurrence with its quantified impact. We would express the risk as a 60% chance of rain tomorrow with a potential loss of $30 for the picnic lunch that won't be eaten. In simplest terms, we measure so we can reduce the risk of a negative outcome. In our picnic example, a measure has value if it reduces the chance that we spend $30 for a picnic on a rainy day.
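The arithmetic behind this risk statement is small enough to sketch directly; the probabilities and dollar amounts below are simply the ones from the picnic example.

```python
# Expected loss: combine each outcome's probability with its quantified impact.
# Figures are from the picnic example: 60% chance of rain, $30 of food at risk.
outcomes = [
    ("rain, picnic cancelled", 0.60, 30.00),   # (label, probability, loss in $)
    ("no rain, picnic happens", 0.40, 0.00),
]

expected_loss = sum(p * loss for _, p, loss in outcomes)
print(f"Expected loss: ${expected_loss:.2f}")  # Expected loss: $18.00
```

Any measurement that reduces the 60% probability (or lets us avoid committing the $30) is worth up to that $18 expected loss.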

A simple framework, hybridized from Hubbard's How to Measure Anything, for determining the value of counting function points to support decision making is:

  • Define the decision.
  • Determine what you already know (it may be sufficient).
  • Determine if knowing functional size will reduce uncertainty.
  • Compute the value of knowing functional size (or other additional information).
  • Count the function points if they have economic value.
  • Make the decision!

The Process and an Example:

1. Define the decision.

Function points provide useful information when making some types of decisions. Knowing the size of the software delivered or maintained would address the following questions:

  • How much effort will be required to deliver a set of functionality?
  • Given a potential staffing level, is a date or budget possible?
  • Given a required level of support, is staffing sufficient?

Summarizing the myriad uses of function points into four primary areas is useful for understanding where knowing size reduces uncertainty.

a) Estimation: Size is a partial predictor of effort or duration, and estimating projects is an important use of software size. Mathematically, effort is a function of size, behavior, and technical complexity. All parametric estimation tools, home-grown or commercial, require project size as one of the primary inputs. The simple parametric model that equates effort to size, behavior, and complexity is an example of how knowing size reduces uncertainty.
b) Denominator: Size is a descriptor that is generally used to add interpretive information to other attributes or to normalize them. When used to normalize other measures or attributes, size usually serves as a denominator; effort per function point is an example. Using size as a denominator helps organizations make performance comparisons between projects of differing sizes. For example, if two projects each discovered ten defects after implementation, which had better quality? The size of the delivered functionality would have to be factored into the discussion of quality.
c) Reporting: Size supports collecting the measures needed to paint a picture of project performance, progress, or success. Measurement data can be leveraged for organizational report cards and performance comparisons, with functional metrics serving as a denominator that synchronizes many disparate measures for comparison and reporting.
d) Control: Understanding performance allows project managers, team leaders, and project team members to understand where they are in an overall project or piece of work and, therefore, take action to change the trajectory of the work. This knowledge allows the organization to control the flow of work in order to influence the delivery of functionality and value in a predictable and controlled manner.
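The parametric idea in (a), effort as a function of size and complexity, can be sketched as a toy model. The coefficients below are illustrative assumptions, not calibrated industry values; real parametric tools derive them from historical data.

```python
def estimate_effort(size_fp: float, productivity_hours_per_fp: float = 8.0,
                    complexity_factor: float = 1.0) -> float:
    """Toy parametric model: effort = size * productivity rate * complexity.

    size_fp: functional size in function points.
    productivity_hours_per_fp: hours per function point (illustrative default).
    complexity_factor: multiplier above 1.0 for technically complex work.
    """
    return size_fp * productivity_hours_per_fp * complexity_factor

# A hypothetical 250-function-point project at average complexity:
print(estimate_effort(250))                          # 2000.0 (hours)
# The same size with a 1.2x technical complexity penalty:
print(estimate_effort(250, complexity_factor=1.2))   # 2400.0 (hours)
```

The point of the sketch is only that size enters the model as a primary input: once functional size is known, the uncertainty in the effort estimate narrows to the uncertainty in the rate and complexity terms.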

2. Determine what you already know (it may be enough).

Based on the decision needs, the organization may have sufficient information to reduce uncertainty and make the decision. For example, if a table update is made every month and takes 10 hours to build and test, then no additional information is needed to predict how much effort next month's change will require. However, when asked to predict a release of a fixed but un-sized backlog, more data must be collected.

3. Determine if knowing functional size will reduce uncertainty.

Not all software development decisions will be improved by counting function points (at least in their purest form). Function point counting for work that is technical in nature (hardware- and platform-related), non-functional in nature (changing the color of a screen), or an effort to correct defects rarely provides significant economic value.

4. Compute the value of knowing the functional size (or other additional information).

One approach to determining whether measurement will provide economic value is to calculate the expected opportunity loss. As a simple example, assume a high-profile $10M project, estimated to have a 50% chance of being on budget (or below) and a 50% chance of being 20% over budget.
In table form:

Outcome                Probability   Opportunity Loss
On budget (or below)   50%           $0
20% over budget        50%           $2M

The expected opportunity loss is $1M (50% * $2M, very similar to the concept of Weighted Shortest Job First used in SAFe®). In this simple example, perfect information would let us avoid the $2M over-budget scenario entirely, so the expected value of perfect information equals the expected opportunity loss, $1M. If counting function points and modeling the estimate improves the probability of meeting the budget to 75%, then the expected opportunity loss falls to $500K (a 50% reduction).
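The opportunity-loss arithmetic can be checked directly; the figures below are the ones from the example above.

```python
def expected_opportunity_loss(p_over_budget: float, overrun_cost: float) -> float:
    """Expected opportunity loss = probability of the bad outcome * its cost."""
    return p_over_budget * overrun_cost

overrun = 0.20 * 10_000_000  # 20% over on a $10M project = $2M at risk

# Before measurement: 50/50 chance of being on budget.
before = expected_opportunity_loss(0.50, overrun)
# After counting function points and modeling: 75% chance of meeting budget.
after = expected_opportunity_loss(0.25, overrun)

print(f"EOL before: ${before:,.0f}")            # EOL before: $1,000,000
print(f"EOL after:  ${after:,.0f}")             # EOL after:  $500,000
print(f"Reduction:  {1 - after / before:.0%}")  # Reduction:  50%
```

If the function point count and the modeling cost less than the $500K reduction, the measurement pays for itself.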

5. Count the function points if there is economic value.

Assuming that the cost of the function point count and the estimate is less than the improvement in the opportunity loss, there is value in counting function points. The same basic thought process is valid for deciding whether to collect any measure.

6. Make the decision!

Using the reduction in uncertainty, make the decision. For example, if the function point count and the estimate based on that count reduce our uncertainty about meeting the budget by 50%, we would be more apt to decide to do the project and to worry less about the potential ramifications to our career.

Conclusion

While the scenario used to illustrate the process is simple, the basic process can be used to evaluate the value of any measurement process. The difference between the expected opportunity loss before and after measurement, less the cost of the measurement itself, is the value of the function point count. Modeling techniques such as Monte Carlo analysis and calibrated estimates, along with historical data, are useful for addressing more robust scenarios. Counting function points reduces uncertainty so that we can make better decisions; if this simple statement is true, we can measure the economic value of counting function points.
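For those more robust scenarios, a minimal Monte Carlo sketch shows how simulated cost outcomes turn a point estimate into a probability of meeting a budget. The distributions and their parameters below are illustrative assumptions, not client or industry data.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def simulate_cost(n_trials: int = 100_000) -> float:
    """Monte Carlo sketch: probability the project comes in at or under budget.

    Assumptions (illustrative only): cost per function point is normally
    distributed, and the function point count itself carries roughly +/-10%
    uncertainty, modeled here as a uniform range.
    """
    budget = 10_000_000
    over_budget = 0
    for _ in range(n_trials):
        fp_count = random.uniform(9_000, 11_000)       # counted size +/- ~10%
        cost_per_fp = random.gauss(mu=950, sigma=150)  # $/FP, assumed distribution
        if fp_count * cost_per_fp > budget:
            over_budget += 1
    return 1 - over_budget / n_trials

print(f"P(on or under budget): {simulate_cost():.1%}")
```

Instead of a single yes/no estimate, the decision maker gets a probability of staying within budget, which feeds directly into the expected-opportunity-loss calculation described above.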

Sources

1. DeMarco, Tom and Lister, Tim. Peopleware: Productive Projects and Teams (3rd Edition). 2013. Addison-Wesley.
2. Hubbard, Douglas. How to Measure Anything. (Third Edition). 2014. Wiley. 

Download

The report can be downloaded here.

Written by Default at 05:00

Software Vendor Savings

Do you engage with software vendors to do your development? If so, then you know the drill. You provide your vendors with detailed requirements and they come back with a price for the project. But how do you really know whether you’re paying a fair price, and what’s your basis for negotiation?

If you could quantify the unit value created by the project, and know the fair market rate for those units, then you would know what you should pay for the project. You would be in a strong position to negotiate a fair price with your vendor.

Function Point Analysis can provide that information. It is a technique for measuring the functionality that is meaningful to a user, independent of technology. Function Point Analysis is governed by IFPUG, which produces the Function Point Counting Practices Manual; this manual is used by all IFPUG-certified Function Point Analysts to conduct function point counts. The IFPUG method is an ISO standard (ISO/IEC 20926) for software measurement.

Industry standard rates for the development of function points are available. Armed with the function point count for the project, along with the market rate, you’re ready for a win-win negotiation with your vendor.
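The negotiation math reduces to a single multiplication. The function point count and market rate below are placeholder figures for illustration, not actual industry benchmarks.

```python
def fair_price(function_points: int, market_rate_per_fp: float) -> float:
    """Fair market price for a project = functional size * unit market rate."""
    return function_points * market_rate_per_fp

# Hypothetical figures: a 400-FP project at a $900-per-FP market rate.
target = fair_price(400, 900.0)
vendor_quote = 450_000.0

print(f"Market-based target price: ${target:,.0f}")                     # $360,000
print(f"Vendor quote is {vendor_quote / target - 1:.0%} above target")  # 25%
```

With a unit measure and a unit rate in hand, the conversation with the vendor shifts from "is this price reasonable?" to "why is your rate 25% above market?"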

Additional Resources:

Looking for more information? Check out these publications:

  1. An introduction to Function Point Analysis, including what it is and who would benefit from it. Download.
  2. DCG’s Function Point Analysis services. Download.
  3. DCG’s Software Vendor Savings offering. Learn more.

If you need more information on how to use Function Point Analysis for evaluation and negotiation of vendor pricing, or if you have general questions about function points, don’t hesitate to reach out! I’m always up for a discussion!

Anthony Manno, III
Vice President, Outsourced Services

t.manno@softwarevalue.com

Written by Tony Manno at 05:00

How Do I Calculate Estimates for Budget Deliverables on Agile Projects this Year?

Scope of this Report

This report discusses the tension between the organizational need for budgetary data on planned Agile deliverables and traditional project cost accounting. Agile lean-budgeting best practices at the portfolio level are highlighted to illuminate the importance of estimating and budgeting as Agile scales in an organization. The Scaled Agile Framework (SAFe) portfolio and value stream levels, as presented in SAFe 4.0, provide the backdrop for this discussion.

Budgetary Needs of the Organization at Scale

Small to medium-sized businesses with 100 or fewer developers organized to develop, enhance, or maintain a small group of software products can account for the labor and material needed with straightforward software cost accounting methods. This is true whether they are using traditional waterfall methods or Agile methods such as Scrum because, typically, such businesses are small enough to ignore or work around the differences between the project perspective of waterfall and the product perspective of Agile.

Larger organizations with hundreds to thousands of developers, organized to develop and maintain a portfolio of software products, are more challenged in software cost accounting and budgetary planning or estimating early in the planning cycle for a given year. Estimating and budgeting early presents challenges to Agile credibility as well as governance.

Software leaders in the trenches deal with day-to-day change while executives and management higher up seek predictability and certainty. This contributes to preliminary rough order of magnitude estimates becoming memorialized as commitments instead of work-in-process numbers that drive conversations and decision-making. It is driven by the financial reporting needs of the executive suite rather than the workflow-optimizing needs of the development activity.

Financial Reporting Tensions Rise as Agile Scales

Public accounting standards guide CPAs and CFOs in shaping financial information to quantify business-created value. That value can be in the form of a hard good, such as a car, or a soft good, such as software or music. You can consult the Financial Accounting Standards Board (FASB) repository of standards and statements on software cost accounting (1) for the specifics, but in general, you can either expense or capitalize your costs when developing software. Choosing when to do so is left up to the organization (and its reporting needs) as long as it can defend the decision per the standards and it is below a certain size. Consequently, initial budgetary estimates are important inputs into the process because they allow the reporting professionals to partition or plan the expense-versus-capitalize decision based on schedule.

In the traditional waterfall world of software development, it is conceptually easier to decide when to expense and when to capitalize as the diagram below illustrates.

Figure 1: Waterfall Project Stages

Waterfall Project Stages

A waterfall software project has a distinct beginning and end, with clear ‘phases’ identifying activity versus value created. Code, integration, and test produce actual ‘capital’ value; requirements, design, and maintenance are ‘expenses’ required to produce that ‘capital’ value. Substantial waterfall projects also run longer and involve more resources, both labor and material, which makes it seem “easier” to estimate and schedule reporting events.

Agile projects, most using the Scrum method, are inherently different in structure and execution, employing an incremental value creation model as illustrated in Figures 2 and 3 below. This is a cyclical model where short bursts of activity create immediately available value (a shippable product increment). Multiple cycles, iterations, or sprints are crafted together over time to create releases of software product. Estimates can be made as to how much time will be spent designing, coding, testing, etc., in Agile, and the work of people like Evan Leybourne suggests that it is not difficult to deal with the CAPEX/OPEX distinction. However, auditors like to see timesheet records to support estimates of the amount of time spent, say, designing versus coding, and splitting time recording like this for Agile teams within a sprint is hard to the point of being impossible. DCG clients use approaches similar to those described by Leybourne to deal with this issue with little friction, but only after they have convinced their auditors.

Figure 2: Agile (Scrum) Sprint Cycle

Agile Sprint Cycle

Figure 3: Multiple Sprints to Produce a Release

Multiple Sprints to Produce a Release

Estimating within an Agile cycle is a real-time, small-bore activity focused on small pieces of potential value; in the Agile context it is more often called “planning.” These are not estimates that cover the beginning, middle, and end of an Agile initiative (or project, if you prefer that term). These real-time estimates, done with relative methods and unanchored to any financial or engineering reality, are unsuitable for aggregation into a budget. You cannot aggregate bottom-up estimates into a budget in Agile.

However, top-down needs persist for Agile budgets and early estimates to support not only prioritization but also the expense-versus-capitalize allocation process. As Agile scales, so do the reporting demands. Furthermore, these early estimates or budgets must be as reliable and consistent as possible, be based on a repeatable, verifiable process to increase confidence, and accurately represent the value creation activity. If you cannot aggregate Agile estimates bottom-up, what about top-down?

Epics, Story Points and Releases

Epics are the large, high-level stories that can usually be broken down into many smaller stories. Epics drive Agile development, from the top, to create business value in the form of shippable product. Epics form the contents of portfolio and product backlogs rather than sprint backlogs, and as such they are the level of stories most familiar to executive and technical leadership. Despite being high-level, they are brief, concise, and unelaborated, just like normal stories, and hence almost impervious to early estimation due to the relative lack of information.

The Agile method, at the team level, is designed to decompose Epics into smaller units such as user stories. These user stories are estimated using relative methods such as story points, t-shirt sizing, and other techniques. The Agile team continuously elaborates these user stories, in each sprint or iteration, leveraging end-user involvement and team dynamics to create incremental value. Discovery of requirements is an intimate process conducted by the team, not the organization.

At some endpoint in the Agile development, a Release is designated ready and complete for shipment to the marketplace. All of this costs money, so how does the organization decide to fund one or more Agile teams when initial Epics are hard to estimate?

Portfolio Level Value Creation and Reporting

The term portfolio is common in financial and investment conversation, while program is a term common in planning and organization. For example, Federal contractors building large military systems use the term program to cover a planned series of events, projects, and other activities, as in the F-22 Raptor Stealth Fighter Program. When the methodologists at Scaled Agile, Inc. (SAI) constructed their Scaled Agile Framework (SAFe) approach, they recognized that “Program” is a valid term to organize multiple subordinate activities (within multiple Agile teams) and applied it to their lexicon (2). While there are several established approaches to scaling Agile, the SAI methodologists seem to have paid the most attention to high-level budget challenges, so we will spend some time on their approach in this report.

Using “Portfolios” to group multiple Programs seemed a common sense next step because of the financial implications rising from larger scale activity in an Enterprise. Agile is driven at the team level to produce software that has value but the Enterprise is driven by financial considerations to fund, extract that value and report accurately along the way.

In the SAFe 4.0 Big Picture (3), which illustrates the SAFe framework, the Portfolio level is well articulated and includes Strategic Themes and the construct of Value Streams that are budgeted or funded. Epics are represented as children of Strategic Themes that guide Value Streams toward the larger aims of the Portfolio.

Consequently, early budget estimates, good ones at that, are important to the Enterprise in order to decide on funding priorities and trigger strategic and tactical initiatives. But if Epics at the top are not suitable for early estimation and you cannot aggregate bottom-up, how do you estimate or budget at all?

Estimating Early Means Uncertainty

Let’s assume an existing organization, say a large healthcare insurer, has recently acquired a smaller, complementary company. Both companies’ systems have to work together for the first few years to give the combined company time to consolidate, merge, or sunset the systems. Both companies employ Agile methods, so it is decided to launch a strategic Agile initiative to create application program interfaces (APIs) that will make the two systems function as one. This has great value for the organization.

Let’s assume for our example that an integration working group made up of representatives of the two companies presents a high-level design outlining the, say, thirteen APIs presumed needed. Executive management then asks for budget estimates (hardware and software) and a more specific schedule to begin the approval process.

If the organization, through its annual planning process, has already allocated or budgeted a total overall IT spend with some software component, then the question is: how much should be set aside for this particular initiative, with little information and lots of uncertainty?

Funding Value Streams Not Projects

The 13 APIs, when delivered by the Agile teams, will provide real value to the organization. The total spend to cover all of the teams for the time period needed will be governed by the organization’s fiduciary authorities. Depending on the size and scope of the functionality needed, you could conceivably have 13 separate Agile teams, each working on an API. Traditional project cost accounting is challenged by this model.

In the SAFe® 4.0 framework, strategies of Lean-Agile Budgeting (5) are described to address these challenges and the tensions discussed above. The takeaway from the strategies is simple: continue the fiduciary authorities’ traditional role of overall budgeting and spend reporting while empowering the Agile teams to own content creation using the organizing construct of one or more Value Streams.

The Agile teams, and implicitly the Agile method, are trusted to build the right things on a day-to-day, week-to-week basis within an overall approved budget. The traditional project cost-accounting methods, which seek command-and-control assurance, are replaced by a dynamic budgeting approach (5) within the Value Stream.

Going back to our example, the 13 APIs could have a natural affinity or grouping and drive the association of 3 separate Value Streams as illustrated in Figure 4.

Figure 4: Value Stream Example

Value Stream Example

Each of these Value Streams would be funded from the annual allocated software spend by the fiduciary authorities, but how big do you make the allocation for API Group 2?

Anchoring Reality to Functionality

At this point the integration workgroup has only two choices: estimate by experience or estimate by functionality.

Estimating by experience can work if the right set of circumstances occurs: the estimators involved are experts in the proposed work, there is a rich history of prior work against which analogies can be drawn, re-estimation is done by the same group, and the technology is familiar and stable. When these factors are absent, the risk to the quality of the estimate, and of future estimates, rises.

Estimating by functionality means quantifying the proposed work using industry-standard sizing methods and leveraging parametric or model-based estimating (4). This approach leverages historical repositories of industry-similar, analogous projects to create estimates that include success-versus-failure probability (risk) profiles. The organization is borrowing the past experience of others to help predict its future. This method increases confidence because of the statistical and mathematical nature of the process, and it is suitable for internal adoption as a repeatable and tunable method. Management overhead costs do increase when it is adopted.

Whatever method an organization chooses, the result should always be considered a starting point and a work-in-process number, not memorialized.

Budgeting for Value

The answer to the question, “How do I calculate estimates for Agile budget deliverables this year?” is to define the value (stream) desired and estimate using your method of choice to define the portion of the overall software spend required.

As the budget is spent within this API Group 2 Value Stream’s allocation, multiple deliverables (Releases) would be created and their individual allocations dynamically adjusted, as needed, by the team as content (Epics and derived user stories) is elaborated, understood, and converted into working software.

Figure 5: Fund Value Streams not Projects or Releases

Fund Value Streams

One or more Releases will be created and shipped, supporting the expense-or-capitalize decision, and the dynamic budget changes within each release activity update the overall budget, governed by the fiduciary authorities.

Conclusion

This lean-Agile budgeting approach relieves the organization from using traditional, command-and-control project cost accounting methods, which are challenged by the Agile method. It allows the fiduciary authorities to own what they should, the overall software spend divided by value streams, while the content authorities, the Agile teams, own the day-to-day, week-to-week spending. This is a big step away from the past for many organizations, but a good step forward to a more Agile organization.

Sources:
1. FASB http://www.fasb.org/summary/stsum86.shtml ; http://www.gasb.org/cs/ContentServer?c=Document_C&pagename=FASB%2FDocument_C%2FDocumentPage&cid=1176156442651;
2. Scaled Agile Framework, http://scaledAgileframework.com/glossary/#P
3. SAFe 4.0 Big Picture, http://scaledAgileframework.com/
4. International Society of Parametric Analysts, Parametric Estimating Handbook, http://www.galorath.com/images/uploads/ISPA_PEH_4th_ed_Final.pdf
5. SAFe® 4.0 Lean-Agile Budgeting, http://scaledAgileframework.com/budgets/

Download this report here.

Written by Default at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
