Top Blog Posts of 2015

Every December we like to share our top blog posts from the past year. This year we thought, "Why wait until December?" Of course, we encourage you to follow the pack and see why these posts are so popular! They cover the range of our focus areas (Agile, function points, TMMi, estimation and more!), so there's a little something for everyone! Without further ado, here are the top 5 blog posts (the ones with the most views this year) from January through June of 2015:

1. Estimating Software Maintenance - Learn more about a unique and proven approach for estimating maintenance and support activities using a new type of "sizing" model.

2. Agile Transformation of the Organization - The key to successfully implementing enterprise Agile is to implement strategic change. Learn how!

3. How to Manage Vendor Performance - Learn how you can use Function Point Analysis to measure your vendor's performance.

4. Scaling Agile Testing Using the TMMi - The Test Maturity Model integration (TMMi) is a framework for effective testing in an Agile environment. Learn how to put it to use.

5. Exploratory Testing and Technical Debt - Exploratory testing (ET) is a type of manual testing. Learn more about the type of technical debt it creates.

Be sure to check back in December to see how that list compares to this one!

Written by Default at 05:00

Why Do We Never Have Time to Test?

(You can download this report here.)

Scope of this Report

This paper discusses the time constraints of testing, its impact on several testing stakeholders, and possible ways to mitigate this problem. It includes:

  • Statistics on testing length.
  • Who are some of the stakeholders for software testing?
  • What kinds of delays do testers frequently face?
  • Making more time to test.

Testing Length

The following estimate of an average testing length is drawn from The Economics of Software Quality, by Capers Jones and Olivier Bonsignour, and is based on the authors' clients' average test cases per function point and time per test case. The calculation covers the types of tests used by 70% or more of projects, and provides the following average for a 100 function point project. This assumes a thorough test of the system using an appropriate number of test cases for the size of the project.

Testing Times

Therefore, between one and two months might be spent on the testing of the sample project. Note that a 100 FP project is relatively small. Projects of ten thousand function points or more, especially for new software, are not uncommon. Testing these larger projects to the same degree could take a year or more, assuming testing time increased in a linear fashion. In fact, testing time increases faster than linearly as the project grows larger. Completely testing all combinations of possible processes for very large projects would take far more time than is available. For practical purposes, exhaustive testing is impossible for such projects.
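The arithmetic behind this kind of estimate can be sketched in a few lines. Note that the per-type figures below (test cases per function point, hours per case, the four test types chosen) are illustrative placeholders, not the published values from Jones and Bonsignour:

```python
# Back-of-the-envelope sketch of the testing-time estimate described above.
# All per-type figures are assumed for illustration only.

project_size_fp = 100  # a relatively small project, as in the text

# Four common test types: (assumed test cases per FP, assumed hours per case,
# covering design, execution, and defect logging)
test_types = {
    "subroutine":   (0.40, 1.0),
    "unit":         (0.45, 1.0),
    "new function": (0.40, 1.5),
    "regression":   (0.30, 1.0),
}

total_hours = sum(project_size_fp * cases_per_fp * hours_per_case
                  for cases_per_fp, hours_per_case in test_types.values())

work_months = total_hours / 132.0  # ~132 productive work hours per month

print(f"{total_hours:.0f} hours, roughly {work_months:.1f} months")
```

With these assumed inputs the total lands between one and two months, consistent with the range above; doubling the project size or adding more test types pushes the figure up quickly.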

The above calculation uses only a tenth of the total number of testing types. For example, integration and system testing were not included in the list of the four most common testing types but are still quite common (used in 60% and 40% of projects, respectively). The more testing methods applied, the better the final software quality, but each adds still more time. Again, the potential testing time starts to exceed practical reasonableness. As a result, risk analysis and other measures must be applied to determine how much testing is enough.

Stakeholders

Any software project will have a number of stakeholders, each with potentially very different needs that can actively conflict with one another. Several stakeholders, with example goals, are presented here.

Testers

A goal of a software tester is to ensure that the application being tested is as free of defects as possible. As noted above, for any large project this is at best a lengthy and expensive goal, at worst an unobtainable one. At a certain point, further testing will not be practical. Some testers may naturally want to be more thorough than is necessary while others may fear reprisal if a defect is missed. Either way, the testing can easily become overlong because of these goals.

Developers

Developers want the project to be done and out the door so they can move on to the next one or, in Agile, to finish the current sprint. Testing, or having to return to the program to remove defects, delays this goal. As with testers, this can become an even greater problem for developers if penalties are imposed for defects in the code.

Customers

Customers want the application to be perfect out of the box for a low cost. The software must work without defect on all hardware combinations (especially for home users) and fulfill all requirements. Most customers realize that this is not going to happen but that is the goal. Software that does not meet the ideal will be well known in the community very quickly. This can put pressure on the business, which puts pressure on the manager, and finally on the development and testing teams.

Managers

Like the customer, the manager wants the program to be good quality and low cost, and most likely also wants a short development time. Improving any one of these goals (reducing cost, increasing quality, reducing time) requires a sacrifice in one or both of the other two. To decrease time and cost, testing may be cut or reduced. The process of solving a problem (development) is often, from the manager’s point of view, more clearly important than the process of preventing one (testing). Ultimately, management must make the decisions on how much time can be expended on any part of a project, and testing is often sacrificed in favor of the more visible development.

Delays, Delays

There is always the potential for any work to run into delays. Unforeseen problems with the software, changing personnel, and damaged equipment are all potential problems. There are too many possibilities to list here, but two will be presented: human factors and changing requirements.

Human Factors

However well-planned the software testing portion of a project might be, there is always the possibility that attitudes and habits of the development team can get in the way of completion. Distractions, attitudes towards testing, and politics can all cause delays.

Software teams, clearly, must work on computers much of their time, and computers are rife with potential distractions: social media, games, e-mail, and so on. These pursuits can sometimes improve efficiency in various ways, but they are still a lure to waste more time than is appropriate.

A number of testing types, including subroutine and unit testing (included in the time estimate above), are often most appropriately performed by the developers. Additionally, pre-test defect removal will also involve the developer. Sometimes, developers do not believe that their time is properly spent on such activities. Further, even if the developers do relatively little testing themselves, a separate group of testers sending back the work due to defects, especially if this happens multiple times, can cause friction and further delays.

Changing Requirements

Most projects are going to have requirements evolve over time. Except with very small applications, as the work progresses, new features will be desired, a better view of current features will emerge, and some requirements may actually be removed as unnecessary. Priorities will also shift, even if the requirements remain relatively stable. This increases both the development time and the testing time as adaptations are made, but in this case testing is more likely to be sacrificed than development.

Making More Time for Testing

Defect Prevention

Traditionally, testing has been done after all development tasks are finished but before deployment. This means that for a small project an extra two months (according to the earlier testing time estimate) would be added to the end of the project, increasing the likelihood that the testing will be cut short. Finding defects throughout the development process (as in Figure 1) may increase the efficiency of removing defects, making the testing needed after the coding phase shorter.

Figure 1: Finding Defects

Test Driven Development

Test driven development (TDD) comes from Agile practices. It is a discipline in which development does not start with code, but with the development of a test case, which in turn is based on a requirement. The code is then written to pass the test case. If the code does not initially pass, it is returned to the backlog and attempted once again until it succeeds. This means that the testing is spread throughout the development process and the tests are ready at hand. However, studies of the technique show inconsistent benefits to costs: the process often costs more than other testing methods.
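The test-first rhythm described above can be shown with a minimal sketch. The requirement here ("slugify a page title") and the function name are hypothetical, chosen purely for illustration:

```python
# Minimal TDD-style illustration. In TDD, the test below is written FIRST,
# based on a (hypothetical) requirement: turn a page title into a URL slug.
# It fails until the production code exists and satisfies it.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile  Testing ") == "agile-testing"

# Only then is the code written, just enough to make the test pass:
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()  # passes; the test now lives alongside the code permanently
```

Because every requirement arrives with a ready-made test, testing effort is spread across development rather than stacked at the end, which is the benefit (and the cost) the paragraph above describes.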

Test Maturity Model integration

TMMi is similar to Capability Maturity Model Integration (CMMI). It is a set of criteria used to determine the fitness of an organization’s testing processes. It does not dictate how the testing is done; rather, it gives guidance in making improvements. It has five levels: Initial, Managed, Defined, Measured, and Optimization. Each level except Initial has a set of requirements for an organization to be certified to it. The model is growing in popularity, as it gives a map to continuous improvement of testing systems. While not all organizations will need to obtain the highest level of TMMi, even the lower levels can lend insight to testing. Indeed, under TMMi, it is perfectly acceptable to be operating at several different levels simultaneously if those levels reflect the goals of the organization.

Conclusions

So, why is there never enough time for testing? Part of this is perception. All stakeholders want, to greater or lesser extents, an error-free application. Unfortunately, the exhaustive testing that this would require takes too much time to be possible in all but the smallest projects. As long as the goal is finding and fixing all defects, there can never be enough time to test. Proper risk assessment and prioritization is necessary before testing to reduce this problem.


How Do We Know If We Are Getting Value from Our Software Vendors?

(You can download this report here.)

Scope of this Report

This report discusses what is meant by value, the process of sizing and estimating the software deliverable, and the benefits of those results. It includes:

  • What is “Value”?
  • Functional Value
  • More on the estimation process
  • Case study example
  • Conclusion

What is “Value”?

We can look at value for software development from vendors in terms of how much user functionality is being delivered by the software vendor. In other words, how many user features and functions are impacted as a result of a project. We can also consider whether the software deliverables were completed on time and on budget to capture “value for money” and the monetary implications of timeliness. Finally, we can see if the software project delivered what was expected from a user requirements’ perspective and if it meets the users’ needs.

This last, more subjective, assessment of value gets into issues of clarity of requirements and the difficulties of responding to emergent requirements if the initial requirements are set in stone. It is outside the scope of this report but we believe the best way to address this issue is through Agile software development which is covered in several other DCG Trusted Advisor reports.

Functional Value

To quantify the software, we must first size the project. Of course, there are several ways to do this, with varying degrees of rigor and cross-project comparability. Function Points and Story Points both take a user’s perspective of the delivered software. Since Function Point Analysis is an industry-standard best practice sizing technique, we find that it is used more often for sizing at this Client-Vendor interface.

Function point analysis considers the functionality that has been requested by and provided to an end user. The functionality is categorized as pertaining to one of five key components: inputs, outputs, inquiries, interfaces and internal data. Each of the components is evaluated and given a prescribed weighting, resulting in a specific function point value. When complete, all functional values are added together for a total functional size of the software deliverable.

After you have established the size of the software project, the result can be used as a key input to an estimating model to help derive several other metrics, including but not limited to cost, delivery rate, schedule and defects. A good estimating model will include industry data that can be used to compare the resulting output metrics to benchmarks, allowing the client to judge the value of the current software deliverable under consideration. Of course, there are always mitigating circumstances, but at least this approach allows for an informed value conversation (which may result in refinement of the input data to the estimating model).
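The weight-and-sum mechanics can be sketched briefly. The weights below are the standard IFPUG values for average-complexity components (low- and high-complexity weights are omitted for brevity), but the component counts themselves are made up for illustration:

```python
# Sketch of an unadjusted function point count.
# Weights: standard IFPUG average-complexity values per component type.
# Component counts: hypothetical, for illustration only.

AVERAGE_WEIGHTS = {
    "external inputs": 4,
    "external outputs": 5,
    "external inquiries": 4,
    "internal logical files": 10,
    "external interface files": 7,
}

component_counts = {
    "external inputs": 12,
    "external outputs": 8,
    "external inquiries": 6,
    "internal logical files": 4,
    "external interface files": 2,
}

unadjusted_fp = sum(AVERAGE_WEIGHTS[c] * n for c, n in component_counts.items())
print(unadjusted_fp)  # total functional size in function points
```

A full IFPUG count also grades each component as low, average or high complexity before weighting; the sketch uses average weights throughout to keep the arithmetic visible.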

5 Key Components of Function Point Analysis

Of course, if you can base your vendor contract even partially on a cost per function point metric, this provides an excellent focus on the delivery of functional value, although it is wise to have an agreed independent third party available to conduct a function point count in the event of disputes.

More on the Estimation Process

We have mentioned the importance of the estimation model and the input data in achieving a fair assessment of the functional value of the delivered software. We have also hinted that these will be issues to be discussed if there is disagreement between client and vendor about the delivered value. Hence, it is worth digging a little deeper into the estimation process.

The process for completing an estimate involves gathering key data that is related to the practices, processes and technologies used during the development lifecycle of the software deliverable. DCG analyzes the various project attributes using a commercial software tool (e.g. SEER-SEM from Galorath), assessing the expected level of effort that would be required to build the features and functions that had to be coded and tested for the software deliverable. The major areas for those technical or non-functional aspects are:

  • Platform involved (Client-server, Web based development, etc.)
  • Application Type (Financial transactions, Graphical user interface, etc.)
  • Development Method (Agile, Waterfall, etc.)
  • Current Phase (Design, Development, etc.)
  • Language (Java, C++, etc.)

Sophisticated estimating models, such as those built into the commercial tools, also consider numerous other potential inputs, too many to list in full, including parameters related to personnel capabilities, the development environment and the target environment.

Given the size of the software deliverable and its complexity, represented by some or all of the available input parameters, we also need to know the productivity of the software development team that is developing the software. This can be a sensitive topic between Client and Vendor. We have often seen that the actual productivity of a team might differ from the reported productivity, as the Vendor throws extra people onto the team to make a delivery date (bad) or adds trainees to the team to learn (good) – mostly, for value purposes, the Client only cares about the productivity that they will be billed for!

Once we have established the development team’s rate of delivery, or function points per effort month, we can then use that information along with all the previous information (size, complexity) to deliver the completed estimate.

The end result of the sizing and estimating process shows how long the project will take to complete (Schedule), how many resources will be needed to complete it (Effort) and the overall cost (Cost) of the software deliverable.
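The core arithmetic behind that final step can be sketched as follows. Every input value here (size, delivery rate, team size, labor rate) is an assumption chosen for illustration, not benchmark data, and real estimating tools adjust these for the complexity parameters discussed above:

```python
# Sketch of the size -> effort -> schedule -> cost arithmetic.
# All input values are illustrative assumptions.

size_fp = 500          # functional size from the function point count
delivery_rate = 10.0   # function points per person-month (assumed)
team_size = 5          # full-time equivalent staff (assumed)
cost_per_pm = 12_000   # blended labor cost per person-month (assumed)

effort_pm = size_fp / delivery_rate      # Effort: person-months of work
schedule_months = effort_pm / team_size  # Schedule: calendar duration
total_cost = effort_pm * cost_per_pm     # Cost: overall spend

print(effort_pm, schedule_months, total_cost)
```

Comparing the vendor's billed effort against a figure derived this way is exactly the check applied in the case study below.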

Sizing and Estimating Process

Case Study Example

DCG recently completed an engagement with a large global banking corporation who had an ongoing engagement with a particular vendor for various IT projects. One such project involved a migration effort to port functionality from one application platform to another new platform. The company and the vendor developed and agreed on a project timeline and associated budget. However, at the end of the allocated timeline, the vendor reported that the migration could not be completed without additional time and money.

The company was reasonably concerned about the success of the project and wanted more information as to why the vendor was unable to complete the project within the agreed-upon parameters. As a result, the company brought David Consulting Group on board to size and evaluate the work that had been completed to date, resulting in an estimate of how long that piece of work should have taken.

The objectives of the engagement were to:

  • Provide a detailed accounting of all features and functions that were included in the software being evaluated
  • Calculate the expected labor hours by activity, along with a probability report (risk analysis) for the selected releases

DCG’s initial estimate was significantly lower than what the vendor was billing for that same set of development work. With such a significant difference in the totals, it was clear that something was off. DCG investigated the issue with the vendor to explore what data could be missing from the estimate, including a review of the assumptions made in the estimate regarding:

  • Size of the job
  • Degree of complexity
  • Team’s ability to perform

In the end, the company and the vendor accepted the analysis and used the information internally to resolve issues relevant to the project. As a result, the company also decided to use another software vendor for future software projects, resulting in a significant cost saving.

Conclusions

This case study highlights a typical business problem wherein projects are not meeting agreed-upon parameters. In cases such as these, Function Point Analysis proves to be a useful tool in measuring and evaluating the software deliverables, providing a quantitative measure of the project being developed. The resulting function point count can also be used to track other metrics such as defects per function point, cost per function point and effort hours per function point. These metrics along with several others can be used to negotiate price points with current and future software development vendors to ensure that the company is receiving the best value for their IT investment.

The estimation process helps in keeping vendors accountable for the work they are producing by providing solid data on the realistic length of a project as well as the relative cost of the project. Quantitative estimates on project length allow companies to better manage their vendor relationships with increased oversight and an enhanced understanding of the expected outcome for their software deliverables.


20 Years of IFPUG Excellence

Thank You from DCG

We're excited to extend our congratulations to the CFPS Fellows recognized today by the International Function Point Users Group (IFPUG) for 20 years of service within the software measurement community!

We thank the Fellows for their dedication to the community and to the software industry!

Read the announcement from IFPUG and see the full list of honorees here.


What is the Size of an Average Project?

We're often asked about the average size of a software development project. This is a reasonable question - of course most organizations want to know how what they're doing compares to others in the industry. But, it should be no surprise that the answer is that it depends.

Not sufficient? We understand. So, we did some rough calculations to find an answer. Download our article for the full details. We discuss size in terms of function points and cost. Take a look and see how you compare!

Download, "What is the Size of an Average Project?"

