
Agile and Test-Driven Development: A Path to Improved Software Quality


Agile is growing in popularity - we all know this to be true. And honestly, we're happy about it. When implemented properly, Agile offers great benefits to organizations. However, even with those benefits, there is one major drawback to the framework: it does not offer a strict definition of testing. 

With no definition, it's up to each organization to decide how to proceed with testing. The problem is that most organizations tend not to prioritize testing the way they should. Instead of early detection and prevention, they leave testing until the end of a project, or even the end of a sprint, when it's more time-consuming and costly to address. 

The answer lies in Test-Driven Development (TDD). TDD naturally complements the Agile framework, and it directly leads to software that is higher in quality. It also increases communication between the developers and testers, and ultimately between IT and the business (who are writing the requirements). 

To learn more about what TDD is and how it can work in tandem with Agile, read our latest publication, "Test-Driven Development and Agile."

Questions? Comments? Let us know!



Written by Default at 05:00

Test-Driven Development

Testing has always been a bit of a thorn in the side of software development, necessary as it is. It costs money and time, ties up resources, and does not result in easily tracked returns on investment. In a typical organization, the testing process begins after the development project is complete, in order to ferret out defects or make sure the software is fit for service. If there are problems, development needs to fix them, and then the testing process begins again. Do you see the problem with this cycle?

An alternative to the usual testing process is continuous testing, such as Test-Driven Development (TDD). TDD is not a new concept; it has been around since 2003, but it is still rarely used. The process is straightforward:

  1. Write a test for a single item under development.
  2. Run the test, which will fail.
  3. Write the code to enable the test to pass.
  4. Re-run the test.
  5. Improve the code and retest.
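The five steps above can be sketched in a few lines of Python. The function and its test are invented here purely for illustration; they are not from the post:

```python
# Step 1: write the test first, before any implementation exists.
def test_boiling_point():
    assert to_fahrenheit(100) == 212

# Step 2: running test_boiling_point() now would fail with a NameError,
# because to_fahrenheit has not been written yet (the "red" run).

# Step 3: write just enough code to make the test pass.
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Step 4: re-run the test -- it now passes (the "green" run).
test_boiling_point()

# Step 5: refactor as needed and re-test; the test guards against regressions.
```

The key discipline is step 2: seeing the test fail first confirms that the test actually exercises the code it claims to.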

What is the point of this process? At first glance, it may appear that it would make the testing process more complicated, not less. However, it ensures that the testing process is continuous throughout the project. This means that testing is preventing defects rather than finding and repairing them after the fact, thus improving the quality of the final project. TDD provides a suite of tests for the software, almost as a side effect. The tests themselves, when well structured, can effectively provide documentation, as they show what a piece of code is intended to do. Finally, when combined with the user or product owner’s input, the process can be used to perform acceptance testing ahead of the coding, a process known as Acceptance Test-Driven Development, which can help to develop and refine requirements.

Each of these items will ultimately need to be done, regardless of whether they are a part of the traditional testing and re-coding process. TDD allows them to be done in a manner that reduces the number of times steps need to be repeated. Does your organization use TDD? Leave a comment and share your thoughts!


Written by James Jandebeur at 05:00

Why Do We Never Have Time to Test?


Scope of this Report

This paper discusses the time constraints of testing, their impact on several testing stakeholders, and possible ways to mitigate this problem. It includes:

  • Statistics on testing length.
  • Some of the stakeholders for software testing.
  • Kinds of delays testers frequently face.
  • Ways to make more time to test.

Testing Length

The following estimate of an average testing length is drawn from The Economics of Software Quality, by Capers Jones and Olivier Bonsignour, and is based on the authors’ clients’ average test cases per function point and time per case. The calculation covers the types of tests used by 70% or more of projects, and provides the following average for a 100 function point project. This assumes a thorough test of the system using an appropriate number of test cases for the size of the project.

[Table: Testing Times]

Therefore, between one and two months might be spent testing the sample project. Note that a 100 FP project is relatively small. Projects of ten thousand function points or more, especially for new software, are not uncommon. Testing these larger projects to the same degree could take a year or more, assuming testing time increased linearly; in fact, testing time grows faster than that as the project gets larger. To completely test all combinations of possible processes for a very large project would take far more time than is available. For practical purposes, exhaustive testing of such projects is impossible.
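The shape of this estimate can be reproduced with a small calculation. The cases-per-function-point densities and hours-per-case figures below are illustrative stand-ins, not the actual client data from Jones and Bonsignour; only the structure of the arithmetic follows the book's approach:

```python
# Illustrative only: these densities and per-case times are assumptions,
# not the published client averages.
FUNCTION_POINTS = 100  # size of the sample project

# Four common test types, as (test cases per function point, hours per case).
test_types = {
    "subroutine": (2.0, 0.25),
    "unit":       (1.5, 0.50),
    "function":   (1.0, 0.75),
    "regression": (0.5, 0.50),
}

# Total effort: size * density * time-per-case, summed over test types.
total_hours = sum(FUNCTION_POINTS * density * hours
                  for density, hours in test_types.values())
months = total_hours / 160  # assuming ~160 working hours per month

print(f"{total_hours:.0f} hours, about {months:.1f} months")
```

With these assumed figures the total comes to roughly 225 hours, or about 1.4 months, which is consistent with the one-to-two-month range above; real projects would substitute their own measured densities and case times.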

The above calculation uses only a tenth of the total number of testing types. For example, integration and system testing were not included in the list of the four most common testing types but are still quite common (listed as used in 60% and 40% of projects, respectively). The more testing methods applied, the better the final software quality, but each additional method requires still more time. Again, the potential testing time starts to exceed practical limits. As a result, risk analysis and other measures must be applied to determine how much testing is enough.


Stakeholders

Any software project will have a number of stakeholders, each with potentially very different needs that can actively conflict with one another. Several stakeholders, with example goals, are presented here.


Testers

A goal of a software tester is to ensure that the application being tested is as free of defects as possible. As noted above, for any large project this is at best a lengthy and expensive goal, and at worst an unobtainable one. At a certain point, further testing will not be practical. Some testers may naturally want to be more thorough than necessary, while others may fear reprisal if a defect is missed. Either way, testing can easily run long because of these goals.


Developers

Developers want the project to be done and out the door so they can move on to the next one or, in Agile, to finish the current sprint. Testing, or having to return to the program to remove defects, delays this goal. As with the testers, this can become an even greater problem for developers if penalties are imposed for defects in the code.


Customers

Customers want the application to be perfect out of the box for a low cost. The software must work without defect on all hardware combinations (especially for home users) and fulfill all requirements. Most customers realize that this is not going to happen, but it remains the expectation. Software that falls short of the ideal will be well known in the community very quickly. This can put pressure on the business, which puts pressure on the manager, and finally on the development and testing teams.


Managers

Like the customer, the manager wants the program to be good quality and low cost, and most likely also wants a short development time. Improving any one of these goals (reducing cost, increasing quality, reducing time) requires a sacrifice in one or both of the other two. To decrease time and cost, testing may be cut or reduced. The process of solving a problem (development) is often, from the manager’s point of view, more clearly important than the process of preventing one (testing). Ultimately, management must decide how much time can be expended on any part of a project, and testing is often sacrificed in favor of the more visible development work.

Delays, Delays

There is always the potential for any work to run into delays. Unforeseen problems with the software, changing personnel, and damaged equipment are all potential problems. There are too many possibilities to list here, but two will be presented: human factors and changing requirements.

Human Factors

However well-planned the software testing portion of a project might be, there is always the possibility that attitudes and habits of the development team can get in the way of completion. Distractions, attitudes towards testing, and politics can all cause delays.

Software teams, clearly, must work on computers much of their time, and computers are rife with potential distractions: social media, games, e-mail, and so on. These pursuits can sometimes improve efficiency in various ways, but they are still a lure to waste more time than is appropriate.

A number of testing types, including subroutine and unit testing (included in the time estimate above), are often most appropriately performed by the developers. Additionally, pre-test defect removal will also involve the developer. Sometimes developers do not believe that their time is properly spent on such activities. Further, even if the developers do relatively little testing themselves, a separate group of testers sending the work back due to defects, especially if this happens multiple times, can cause friction and further delays.

Changing Requirements

Most projects will have requirements that evolve over time. Except with very small applications, as the work progresses, new features will be desired, a better view of current features will emerge, and some requirements may actually be removed as unnecessary. Priorities will also shift, even if the requirements remain relatively stable. This increases both development time and testing time as adaptations are made, but in this case testing is more likely to be sacrificed than development.

Making More Time for Testing

Defect Prevention

Traditionally, testing has been done after all development tasks are finished but before deployment. This means that for a small project an extra two months (according to the earlier testing time estimate) would be added to the end of the project, increasing the likelihood that the testing will be cut short. Finding defects throughout the development process (as in Figure 1) may increase the efficiency of removing defects, making the testing needed after the coding phase shorter.

[Figure 1: Finding Defects]

Test Driven Development

Test-driven development (TDD) comes from Agile practices. It is a discipline in which development does not start with code but with a test case, which in turn is based on a requirement. The code is then written to pass the test case. If the code does not initially pass, it is returned to the backlog and attempted again until it succeeds. This means that testing is spread throughout the development process and the tests are always ready at hand. However, studies of the technique show inconsistent cost-benefit results: the process often costs more than other testing methods.
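The fail-then-pass cycle described above can be sketched in Python. The function and its test are invented for illustration; the try/except makes the initial failing run visible in a single runnable script:

```python
# Write the test first, based on a requirement.
def test_is_palindrome():
    assert is_palindrome("level")
    assert not is_palindrome("spoon")

# First run is "red": the implementation does not exist yet,
# so calling the test raises NameError.
try:
    test_is_palindrome()
except NameError:
    print("red: no implementation yet")

# Write the code the test demands, then re-run until it passes.
def is_palindrome(text):
    return text == text[::-1]

test_is_palindrome()  # now passes silently
print("green: test passes")
```

Repeating this loop for each requirement is what spreads testing throughout development, and the accumulated tests remain ready at hand for later regression runs.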

Test Maturity Model integration

TMMi is similar to the Capability Maturity Model Integration (CMMI). It is a set of criteria used to determine the fitness of an organization’s testing processes. It does not dictate how testing is done; rather, it gives guidance in making improvements. It has five levels: Initial, Managed, Defined, Measured, and Optimization. Each level except Initial has a set of requirements an organization must meet to be certified at it. The model is growing in popularity, as it gives a map for continuous improvement of testing systems. While not all organizations will need to reach the highest level of TMMi, even the lower levels can lend insight into testing. Indeed, under TMMi it is perfectly acceptable to be operating at several different levels simultaneously if those levels reflect the goals of the organization.


Conclusion

So, why is there never enough time for testing? Part of this is perception. All stakeholders want, to a greater or lesser extent, an error-free application. Unfortunately, the exhaustive testing this would require takes too much time to be possible in all but the smallest projects. As long as the goal is finding and fixing all defects, there can never be enough time to test. Proper risk assessment and prioritization are necessary before testing to reduce this problem.

Written by Default at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
