How can I use SNAP to improve my estimation practices?

Scope of Report

This month’s report focuses on how to improve estimation practices by incorporating the Software Non-functional Assessment Process (SNAP), developed by the International Function Point Users Group (IFPUG), into the estimation process.

Software Estimation

The Issue

Software development estimation is not an easy or straightforward activity. Software development is not like making widgets where every deliverable is the same and every time the process is executed it is the same. Software development varies from project to project in requirements definition and what needs to be delivered. In addition, projects can also vary in what processes and methodologies are used as well as the technology itself. Given these variations it can be difficult to come up with a standard, efficient, and accurate way of estimating all software projects.

The Partial Solution

Software estimation approaches have improved, but the improvements have not been widely adopted. For many years, development organizations have relied on a bottom-up approach to estimation based on expert knowledge. This technique involves listing all of the tasks that need to be completed and asking Subject Matter Experts (SMEs) to determine how much time each activity will require. Organizations often gather this input from each expert separately, but a Delphi method is also common. The Delphi method was developed in the 1950s by the Rand Corporation. Per Rand, “The Delphi method solicits the opinions of experts through a series of carefully designed questionnaires interspersed with information and feedback in order to establish a convergence of opinion”. As the group converges, the theory is that the estimate range will narrow and become more accurate. This technique, like Agile planning poker, is still widely used, but it often relies on expert opinion rather than data.

As software estimation became more critical, other techniques began to emerge. In addition to the bottom-up method, organizations began to utilize a top-down approach, which involves identifying the total cost and allocating it across the various activities that need to be completed. Initially this approach, too, was based more on opinion than fact.

In both of the above cases the estimates were based on tasks and costs rather than on the deliverable. Most industries quantify what needs to be built/created and then based on historical data determine how long it will take to reproduce. For example, it took one day to build a desk yesterday so the estimate for building the same desk today will also be one day.

The software industry needed a way to quantify deliverables in a consistent manner across different types of projects that could be used along with historical data to obtain more accurate estimates. The invention of Function Points (FPs) made this possible. Per the International Function Point Users Group (IFPUG), FPs are defined as a unit of measure that quantifies the functional work product of software development. FPs are expressed in terms of the functionality seen by the user and are measured independently of technology. That means FPs can be used to quantify software deliverables independently of the tools, methods, and personnel used on the project, providing a consistent measure that allows data to be collected, analyzed, and used to estimate future projects.

With FPs available the top-down methodologies were improved. This technique involves quantifying the FPs for the intended project and then looking at historical data for projects of similar size to identify the average productivity rate (FP/Hour) and determine the estimate for the new project. However, as mentioned above, not every software development project is the same, so additional information is required to determine the most accurate estimate.
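The arithmetic behind this top-down technique is straightforward. The sketch below is illustrative only; the size bands and FP/Hour rates are invented stand-ins for an organization’s real historical data:

```python
# Top-down estimate sketch: derive effort from FP size and historical productivity.
# The size bands and productivity rates below are illustrative, not industry figures.

# Hypothetical historical averages: (min_fp, max_fp) -> average FP per hour
historical_rates = {
    (0, 100): 0.15,
    (100, 500): 0.12,
    (500, 5000): 0.08,
}

def estimate_effort_hours(project_fp: float) -> float:
    """Estimate effort by dividing FP size by the productivity rate
    observed for historical projects of similar size."""
    for (low, high), fp_per_hour in historical_rates.items():
        if low <= project_fp < high:
            return project_fp / fp_per_hour
    raise ValueError("No historical data for this size range")

print(round(estimate_effort_hours(250)))  # 250 FP / 0.12 FP/hr ≈ 2083 hours
```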

Although FPs provide an important missing piece of data to assist in estimation, they do not magically make estimation simple. In addition to FP size, the type of project (Enhancement or New Development) and the technology (Web, Client Server, etc.) have a strong influence on productivity. It is important to segment historical productivity data by FP size, type, and technology to ensure that the correct comparisons are being made. Beyond the deliverable itself, the methodology (waterfall, agile), the experience of personnel, the tools used, and the organizational environment can all influence the effort estimate. Most estimation tools have developed a series of questions surrounding these ‘soft’ attributes that raise or lower the estimate based on the answers. For example, if highly productive tools and reuse are available, then the productivity rate should be higher than average and thus require less effort. However, if the staff are new to the tools, then the full benefit may not be realized. Most estimation tools adjust for these variances, and the adjustments are intrinsic to the organization’s historical data.
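One simple way tools model these ‘soft’ attributes is as multiplicative adjustments to a baseline productivity rate. The factor values below are invented for illustration, not calibrated data:

```python
# Sketch of 'soft' attribute adjustments to a baseline productivity rate.
# Factor values are purely illustrative assumptions, not calibrated data.
baseline_fp_per_hour = 0.10

adjustments = {
    "high_reuse": 1.20,          # productive tools/reuse raise productivity
    "staff_new_to_tools": 0.90,  # unfamiliarity erodes part of that benefit
}

def adjusted_rate(rate: float, answers: list) -> float:
    """Apply each selected attribute factor to the baseline rate."""
    for answer in answers:
        rate *= adjustments[answer]
    return rate

# Reuse is available but the team is new to the tooling:
rate = adjusted_rate(baseline_fp_per_hour, ["high_reuse", "staff_new_to_tools"])
print(round(rate, 3))  # 0.10 * 1.20 * 0.90 = 0.108 FP/hr
```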

At this point we have accounted for the functional deliverables and the tools, methods, and personnel involved. So what else is needed?

The Rest of the Story

Although FPs are a good measure of the functionality that is added, changed, or removed in a software development or enhancement project, there is often project work separate from the FP measurement functionality that cannot be counted under the IFPUG rules. These are typically items that are defined as Non-Functional requirements. As stated in the IFPUG SNAP Assessment Practices Manual (APM), ISO/IEC 24765, Systems and Software Engineering Vocabulary defines non-functional requirements as “a software requirement that describes not what the software will do but how the software will do it. Examples include software performance requirements, software external interface requirements, software design constraints, and software quality constraints. Non-functional requirements are sometimes difficult to test, so they are usually evaluated subjectively.”

IFPUG saw an opportunity to fill this estimation gap and developed the Software Non-functional Assessment Process (SNAP) as a method to quantify non-functional requirements.

SNAP

History

IFPUG began the SNAP project in 2008 by initially developing an overall framework for measuring non-functional requirements. Beginning in 2009 a team began to define rules for counting SNAP, and in 2011 it published the first release of the APM. Various organizations beta tested the methodology and provided data and feedback to the IFPUG team to begin statistical analysis. The current version is APM 2.3, which includes definitions, rules, and examples. As with the early days of FPs, adjustments to the rules will be needed as more SNAP data is collected, improving accuracy and consistency.

SNAP Methodology

The SNAP methodology is a standalone process; however, rather than re-invent the wheel, the IFPUG team utilized common definitions and terminology from the IFPUG FP Counting Practices Manual within the SNAP process. This also allows for an easier understanding of SNAP for those that are already familiar with FPs.

The SNAP framework is comprised of non-functional categories that are divided into sub-categories and evaluated using specific criteria. Although SNAP is a standalone process it can be used in conjunction with FPs to enhance a software project estimate.

The following are the SNAP categories and subcategories assessed (per APM 2.3):

  1. Data Operations: Data Entry Validations; Logical and Mathematical Operations; Data Formatting; Internal Data Movements; Delivering Added Value to Users by Data Configuration
  2. Interface Design: User Interfaces; Help Methods; Multiple Input Methods; Multiple Output Methods
  3. Technical Environment: Multiple Platforms; Database Technology; Batch Processes
  4. Architecture: Component-Based Software; Multiple Input/Output Interfaces

Each subcategory has its own definition and assessment calculation, which means each subcategory is assessed independently of the others to determine its SNAP points. After all relevant subcategories have been assessed, the SNAP points are added together to obtain the total SNAP points for the project.

Keep in mind that a non-functional requirement may be implemented using one or more subcategories and a subcategory can be used for many types of non-functional requirements. So the first step in the process is to examine the non-functional requirements and determine which categories/subcategories apply. Then only those categories/subcategories are assessed for the project.
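The bookkeeping is simple once the applicable subcategories are identified. A minimal sketch, with invented point values (the subcategory names follow APM 2.3):

```python
# Sketch: sum SNAP points over only the subcategories that apply to the
# project's non-functional requirements. Point values are invented.
assessed = {
    "3.3 Batch Processes": 250,
    "2.3 Multiple Input Methods": 40,
}

total_snap = sum(assessed.values())
print(total_snap)  # 290 total SNAP points for the project
```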

With different assessment criteria for each subcategory it is impossible to review them all in this report; however, the following is an example of how to assess subcategory 3.3 Batch Processes:

Definition: Batch jobs that are not considered as functional requirements (they do not qualify as transactional functions) can be considered in SNAP. This subcategory allows for the sizing of batch processes which are triggered within the boundary of the application, not resulting in any data crossing the boundary.

SNAP Counting Unit (SCU): User-identified batch job

Complexity Parameters:
  1. The number of Data Elements (DETs) processed by the job
  2. The number of Logical Files (FTRs) referenced or updated by the job

SNAP Points calculation: The complexity rating (Low, Average, or High) is determined from the number of FTRs, and the SNAP Points (SP) for the SCU are calculated by multiplying the number of DETs by the factor for that rating in the APM 2.3 table; for High complexity the factor is 10.

Result: The scheduling batch job uses 2 FTRs, so it is High complexity: 10 × 25 DETs = 250 SP.
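The assessment can be expressed as a small function. The High-complexity factor of 10 comes from the example above; the Low and Average factors and the FTR thresholds shown are assumptions standing in for the actual APM 2.3 table:

```python
def batch_process_sp(dets: int, ftrs: int) -> int:
    """SNAP points for one SCU of subcategory 3.3 Batch Processes.
    The factor of 10 for High complexity matches the example in the text;
    the other thresholds and factors are placeholders for the APM 2.3 table."""
    if ftrs >= 2:      # High complexity (per the example: 2 FTRs -> High)
        factor = 10
    elif ftrs == 1:    # assumed Average complexity
        factor = 6
    else:              # assumed Low complexity
        factor = 2
    return factor * dets

print(batch_process_sp(dets=25, ftrs=2))  # 10 * 25 = 250 SP
```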

Each non-functional requirement is assessed in this manner for the applicable subcategories and the SP results are added together for the total project SNAP points.

SNAP and Estimation

Once the SNAP points have been determined, they are ready to be used in the software project estimation model. SNAP is used in the historical top-down method of estimating, similar to FPs. The estimator should take the total SNAP points for the project and review historical organizational data if available, or industry data, for projects with similar SNAP points to determine the average productivity rate for non-functional requirements (SNAP/Hour). Once the SNAP/Hour rate is selected, the non-functional effort is calculated by dividing the SNAP points by the SNAP/Hour productivity rate. It is important to note that this figure covers only the effort for developing and implementing the non-functional requirements. The estimator still needs an effort estimate for the functional requirements, calculated by dividing the FPs by the selected FP/Hour productivity rate. These two figures are then added together for the total effort estimate for the project.

Estimate example:
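A minimal worked illustration, using invented sizes and productivity rates:

```python
# Worked estimate combining functional and non-functional effort.
# All figures are invented for illustration.
fp = 300              # functional size in Function Points
snap = 150            # non-functional size in SNAP points
fp_per_hour = 0.10    # historical functional productivity (FP/Hour)
snap_per_hour = 0.50  # historical non-functional productivity (SNAP/Hour)

functional_hours = fp / fp_per_hour          # 300 / 0.10 = 3000 hours
nonfunctional_hours = snap / snap_per_hour   # 150 / 0.50 = 300 hours

# Only the effort hours are combined, never the FP and SNAP sizes themselves.
total_hours = functional_hours + nonfunctional_hours
print(total_hours)  # 3300.0 hours
```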

Note that the SNAP points and the FPs are not added together, just the effort hours. SNAP and FP are two separate metrics and should never be added together. It is also important to make sure that the same functionality is not counted multiple times between SNAP and FPs as that would be ‘double counting’. So, for example, if multiple input/output methods are counted in FPs they should not be counted in SNAP.

This initial estimate is a good place to start; however, it is also good to understand the details behind the SNAP points and FPs to determine if the productivity rate should be adjusted. For instance, with FPs, an enhancement project that is mostly adding functionality would be more productive than a project that is mostly changing existing functionality. Similarly, with SNAP, different categories/subcategories may achieve higher or lower productivity rates. For example, a non-functional requirement for adding Multiple Input Methods would probably be more productive than non-functional requirements related to Data Entry Validations. These are the types of analyses that an organization should conduct with their historical data so that it can be used in future project estimations.

FPs have been around for over 30 years so there has been plenty of time for data collection and analysis by organizations and consultants to develop industry trends; but it had to start somewhere. SNAP is a relatively new methodology and therefore has limited industry data that can be used by organizations. As more companies implement SNAP more data will become available to the industry to develop trends. However, that doesn’t mean that an organization needs to wait for industry data. An individual company can start implementing SNAP today and collecting their own historical data, conducting their own analyses, and improving their estimates. Organizational historical data is typically more useful for estimating projects anyway.

Conclusion:

An estimate is only as good as the information and data available at the time of the estimate. Given this, it is always recommended to use multiple estimation methods (e.g. bottom-up, top-down, Delphi, Historical/Industry data based) to find a consensus for a reasonable estimate. Having historical and/or industry data to base an estimate upon is a huge advantage as opposed to ‘guessing’ what a result may be. Both FP/Hour and SNAP/Hour productivity rates can be used in this fashion to enhance the estimation process. Although the estimation process still isn’t automatic and requires some analysis, having data is always better than not having data. Also, being able to document an estimate with supporting data is always useful when managing projects throughout the life cycle and assessing results after implementation.

Sources:

  • Rand Corporation, The Delphi Method, http://www.rand.org/topics/delphi-method

  • Counting Practices Manual (CPM), Release 4.3.1; International Function Point Users Group (IFPUG), https://www.ifpug.org/

  • SNAP Assessment Practices Manual (APM), Release 2.3; International Function Point Users Group (IFPUG), https://www.ifpug.org/

Written by Default at 05:00

Function Points in the Philippines?

I know what you are thinking – and I was thinking the same thing. Is there really a company using function points in the Philippines? Yes, there is. In fact, I recently traveled to the Philippines to train a lean development team on the use of function points as a measure for their metrics program.

I’d never been to the Philippines before, so it was an interesting experience to be in a new place, but it was also interesting to see how function points are being used around the world.

This particular team is way above the curve as far as their processes and documentation are concerned. They were looking for a standardized process to size their change requests in order to measure productivity and quality and help forecast future projects. Their intent is to start sizing those small change requests and establish some benchmarks for their applications. Once those benchmarks are in place and they have gathered some valuable data, the team intends to start using the data to help size their larger projects. The sizing for those projects will allow them to better manage staffing, cost, and quality.

This, of course, is something we encourage and espouse as a best practice for development teams. So, it was great to see that this mentality has really taken hold in this organization, and that they truly understand the benefits of sizing.

I also learned while I was there that several larger technology companies are starting to use the Philippines as a location for their infrastructure and development teams. So you may not think of the Philippines when it comes to the IT domain, but the country is starting to make strides toward closing the gap on the rest of the IT world. I know for sure that one lean development team has closed the gap, and function points have allowed them to standardize their process even further.

I look forward to seeing how the use of function points continues to develop in the country – and beyond. Is your organization using function points? If not, it’s time to catch up!

 

David Lambert
Managing Consultant

Written by David Lambert at 05:00

VIDEO: Estimation Center of Excellence

One of our key differentiators here at DCG is our Centers of Excellence. A Center of Excellence (CoE) is a fully packaged and successfully tested solution to address common software issues that plague organizations - we offer a number of them, which you can read more about here.

All CoEs are put in place using our Build, Operate, Transfer method, which means:

  1. We customize the CoE processes to suit your needs, while establishing and educating your internal resources.
  2. We operate the CoE using our staff to perform some, if not all, of the functions.
  3. We transfer full operation of the CoE to your team when/if desired.

Our most popular CoE is the Estimation CoE, which helps to optimize software estimation processes to reduce project risk, increase delivery confidence and set IT investment priorities to meet business needs and goals. We have these in place at major organizations, such as a well-known global financial solutions provider (Case Study).

Our CEO recently recorded a video about the Estimation CoE. It's a quick showcase to help explain why the CoE is so invaluable to an organization.

If you don't have time for the video, here's the takeaway:

Inaccurate software project estimates are the cause of a lot of waste in IT departments and increased project risk. An Estimation CoE can help your organization to measure, manage and accurately predict cost, schedule, performance and value for your software projects. This CoE is often used to validate estimates from third-party vendors to better manage outsourcing relationships and to improve the quality of delivered work and associated timelines.

Need more information? Here you go:

 

  • Brochure: This provides further detail about DCG's Center of Excellence solutions.
  • Business Case: Learn why this solution is necessary for your organization.
  • Article: 6 Steps to Creating an Estimation CoE.

 

Written by Default at 05:00

Capability Counts 2016


The CMMI framework has been around for a while now, but its use in the industry persists. The framework's focus on quality improvement through the use of best practices makes it valuable to almost any organization.

While the framework is still in use, the CMMI Institute has expanded its annual conference beyond a singular focus on the framework itself to a broader focus on capability. Branding it as the "Capability Counts" conference makes sense - all organizations want to build and capitalize on their capability - and this includes more than just the implementation of CMMI.

We were excited to attend this year's Capability Counts conference in Annapolis, where the wider scope of the conference lent itself to an interesting agenda of speakers on topics from risk management to product quality measurement - and yes, CMMI.

Tom Cagley, our Vice President of Consulting, also spoke at the conference. His presentation, "Budgeting, Estimation, Planning, #NoEstimates, and the Agile Planning Onion - They ALL Make Sense," discussed the many levels of software estimation, including budgeting, high-level estimation, and task planning. He explained why all of these methods are useful, when they make sense, and in what combination.

You can download the presentation below. More information about our CMMI offerings is here - and we're already looking forward to next year's conference!

Download

 

Written by Default at 05:00

Are Function Points Still Relevant?

Let's start with a quick overview of Function Point Analysis:

Function Point Analysis is a technique for measuring the functionality that is meaningful to a user, independent of technology. It was invented by Allan Albrecht of IBM in 1979. Several standards exist in the industry, but the International Function Point Users Group (IFPUG) standard is the most widely used. IFPUG produces the Function Point Counting Practices Manual, used by Certified Function Point Specialists (CFPS) to conduct function point counts. The IFPUG method is one of the ISO standards for software sizing (ISO/IEC 20926:2009).

Function Point Analysis considers five major components of an application or project: External Inputs, External Outputs, External Inquiries, Internal Logical Files and External Interface Files. The analyst evaluates the functional complexity of each component and assigns an unadjusted function point value. The analyst can also analyze the application against 14 general system characteristics to further refine the sizing and determine a final adjusted function point count.

Function Point Analysis

“The effective use of function points centers around three primary functions: estimation, benchmarking and identifying service-level measures.” [i] 

More and more organizations are adopting some form of Agile framework for application development and enhancement. The most recent VersionOne State of Agile Survey reveals that 94% of organizations practice Agile.[ii] Hot technologies such as big data, analytics, cloud computing, portlets and APIs are becoming ever more popular in the industry.

Let’s explore each of the three primary functions of function points and their relevance in today’s Agile-dominated IT world and with new technologies.

Estimation:

Whether it is a move from traditional waterfall to Agile or from mainstream technologies to new innovations, project teams still have a responsibility to the business to deliver on time and within budget.  Estimates of the overall project spend and duration are critical for financial and business planning.

Parametric estimation is the use of statistical models, along with parameters that describe a project to derive cost and duration estimates.  These models use historical data to make predictions.  The key parameters necessary to describe a project are size, complexity and team experience.   Many other parameters can be used to further calibrate the estimate and increase its accuracy, including whether the project is using an Agile framework.  Several tools can be used to perform parametric estimation, including SEER, SLIM and COCOMO. 
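As a flavor of how parametric models work, here is Basic COCOMO in its "organic" mode, using Boehm's published coefficients; commercial tools such as SEER and SLIM use far richer, calibrated models with many more parameters:

```python
# Basic COCOMO (organic mode) as a minimal parametric model.
# Coefficients 2.4 and 1.05 are Boehm's published organic-mode values;
# size is expressed in thousands of lines of code (KLOC).
def cocomo_organic_effort(kloc: float) -> float:
    """Effort in person-months for an 'organic' project of the given size."""
    return 2.4 * kloc ** 1.05

print(round(cocomo_organic_effort(32), 1))  # ≈ 91.3 person-months
```

The superlinear exponent captures the diseconomy of scale that historical data shows: larger projects cost proportionally more per unit of size.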

Project size can be described in several ways, with source lines of code (SLOC) and function points being the most common. SLOC has some inherent problems: inefficient coding produces more lines of code, and determining the SLOC size of a project before it is coded is itself an estimate. That’s where function point analysis provides real value as a sizing tool. Even in software developed using the latest innovations in technology, the five components of function point analysis still exist, so function point counting remains a valuable way to measure software size. Because a function point count can be done from a requirements document or user stories, and the expected variance between two certified function point analysts is between 5% and 10%, an accurate and consistent measure of project size can be derived. And because function point analysis is based on the user’s view and independent of technology, it works just as well as technology evolves.

The function point size, along with the other parameters described above are then used by the parametric estimation tool to provide a range of cost and duration estimates for the entire project within a cone of uncertainty.  This information can be used for financial budgeting and business planning.  

Projects in an Agile framework can create estimates for the individual user stories with techniques like planning poker, t-shirt size or relative mass valuation.  These estimates are used for sprint planning and are refined through the backlog grooming process.  As the team measures and refines its velocity the estimates are further updated.   Ultimately all of these estimates should converge on the overall projected estimate created using parametric estimation.
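Velocity-based projection, against which the parametric estimate can be cross-checked, can be sketched in a few lines; the story-point figures are invented:

```python
import math

# Sketch: project remaining sprints from measured velocity.
# Story-point figures are invented for illustration.
backlog_points = 240
velocities = [28, 31, 33]  # points delivered in recent sprints

avg_velocity = sum(velocities) / len(velocities)  # refined as sprints complete
sprints_remaining = math.ceil(backlog_points / avg_velocity)
print(sprints_remaining)  # ceil(240 / 30.67) = 8 sprints
```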

Regardless of the technologies used for development, in this way estimates of the overall project through parametric estimation and Agile estimation techniques can coexist and complement each other in support of the business’s need for financial and business planning.

Benchmarking:

Whatever technology or development framework is being used, constant improvement is essential to an organization’s ability to survive and thrive in a competitive environment. Baselining an organization’s performance relative to productivity, quality and timeliness is the starting point for benchmarking and the first step toward an IT organization’s delivery improvement.

Function points are a common currency for metrics equations.  They provide a consistent measure of the functionality delivered, allowing benchmark comparison of performance over time, of one technology against another, internally across various departments or vendors, and externally against the industry in which a company competes.  Benchmarking is also used in outsourcing governance models as a way to ensure a vendor is providing value with respect to contractual commitments and competitors in the marketplace.

A large amount of function point based industry benchmark data is available from many suppliers, including The Gartner Group, Rubin Systems Inc., META Group, Software Productivity Research, the International Software Benchmarking Standards Group (ISBSG) and DCG Software Value.

To execute a benchmark, data is collected for the target projects, including function point size, effort and duration.  The data is analyzed and functional metrics are created and baselined for the target projects.  Quantitative comparison of these baselines is done against suitable industry benchmarks.  Qualitative assessment is done to further analyze the target projects and determine contributing factors to performance differences with the benchmark.
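The quantitative comparison step might look like the following sketch, where all project figures and the benchmark rate are invented:

```python
# Sketch: baseline functional metrics for target projects and compare them
# to an industry benchmark. All numbers are invented for illustration.
projects = [
    {"name": "A", "fp": 400, "hours": 3600, "defects": 12},
    {"name": "B", "fp": 250, "hours": 2000, "defects": 5},
]
industry_fp_per_hour = 0.11  # hypothetical benchmark figure

def baseline_metrics(project: dict, benchmark_fp_per_hour: float):
    """Compute productivity, defect density, and delta versus the benchmark."""
    productivity = project["fp"] / project["hours"]      # FP per hour
    defect_density = project["defects"] / project["fp"]  # defects per FP
    delta = (productivity - benchmark_fp_per_hour) / benchmark_fp_per_hour
    return productivity, defect_density, delta

for p in projects:
    prod, dd, delta = baseline_metrics(p, industry_fp_per_hour)
    print(f"{p['name']}: {prod:.3f} FP/hr, {dd:.3f} defects/FP, "
          f"{delta:+.0%} vs benchmark")
```

The qualitative assessment then explains why a given project sits above or below the benchmark.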

Regardless of the development framework or technology used, function points are the basis for baselining and benchmarking an organization to determine its performance relative to the industry, allowing for improvements that move toward best-in-class performance.

Service-Level Measures:

Service-level metrics are most commonly used in outsourcing governance to measure the performance of the outsourcer to ensure contract compliance.  With IT’s increased alignment with the business, service-level metrics are increasingly used internally as well.  Delivery framework and technology don’t change the need for this kind of oversight. 

Outsourcing is typically done at the individual project or application level, for application maintenance, or the entire ADM environment.  Let’s examine each of these outsourcing models and how function point based service-level metrics can be used to monitor them.

Individual project or application:

In the case of individual project or application outsourcing service-level definition is based on the provider’s responsibility, the standards required by the customer and how success is defined.  Function point analysis has a role in all three of these areas. 

Definition of the outsourcer’s responsibilities helps identify the hand-off points.  Function point sizing at requirements hand-off provides an initial baseline of the project size for all metrics to be built upon.  As requirements change throughout the project the baseline can be updated through change control. 

The standards and development practices lead to establishment of compliance measures and targets for the outsourcer to meet.  Function point sizing can be used here as the basis of measures like productivity.

Success can be measured with function point based measures of delivery rate, duration and quality against contractual requirements or internal standards.

Application maintenance:

Measurement of maintenance in an outsourcing arrangement includes customer expectations, response time, defect repair, portfolio size, application expertise and others. Let’s explore those that involve function point analysis.

Customer expectations can be thought of as the size of the portfolio being maintained, as well as the cost of maintaining it.  The portfolio size can be measured with function points to establish the maintenance baseline and its growth over time can be monitored. 

Support efficiency can be measured as the size of the support staff needed to maintain the maintenance baseline.  This can also be measured over time to show trends.
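As a trivial sketch of such a measure (figures invented):

```python
# Sketch: support efficiency as portfolio FP maintained per support person.
# Both figures are invented for illustration.
portfolio_fp = 12_000  # functional size of the maintained portfolio
support_staff = 8

fp_per_person = portfolio_fp / support_staff
print(fp_per_person)  # 1500.0 FP maintained per support person
```

Tracking this ratio period over period shows whether support efficiency is improving as the portfolio grows.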

Entire ADM environment:

The measurement needs for ADM outsourcing are different from those of the previous two scenarios. A multi-year outsourcing arrangement requires more complex measures to ensure the services provided by the outsourcer meet contractual commitments. To do this, more complex metrics dashboards are often built to allow a wide range of measurements to be analyzed.

To build a metrics dashboard that provides the level of monitoring required, many factors must be considered including contractual requirements, end customer expectations and organizational standards and goals. 

The table below describes metrics derived from performance considerations and business drivers. [iii]

Function Points

Many of these metrics are based on functional size so function point analysis can be used to build the measurements.

For outsourcing and internal IT alike, effective measurement is critical to monitoring performance and improvements and should be linked to the organization’s goals and objectives. Metrics based on functional size are key to a service-level measurement program, without regard to the delivery framework or technology used.

Conclusion:

We have seen above that function point analysis is versatile and adaptable to changing technology and processes. All technologies still exhibit the five basic components of function point analysis, and organizations are still asking “when will it be done?”, “how much will it cost?” and “what will I get?”. It is for these reasons that function point analysis remains relevant in today’s IT world.

References

  1. Garmus, D. Herron, D., Function Point Analysis, Measurement Practices for Successful Projects, Addison-Wesley, 2001
  2. IFPUG Metrics View, February 2016, International Function Point Users Group
  3. 9th Annual State of Agile Survey, VersionOne Inc., 2015
Written by Default at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
