Test-Driven Development

James Jandebeur

Testing has always been a necessary thorn in the side of software development. It costs money and time, ties up resources, and does not produce easily tracked returns on investment. In a typical organization, the testing process begins after development is complete, in order to ferret out defects and make sure the software is fit for service. If there are problems, development has to fix them, and then the testing process begins again. Do you see the problem with this cycle?

An alternative to the usual testing process is continuous testing, such as Test-Driven Development (TDD). TDD is not a new concept; it has been around since the early 2000s, but it is still rarely used. The process is straightforward:

  1. Write a test for a single item under development.
  2. Run the test, which will fail.
  3. Write the code to enable the test to pass.
  4. Re-run the test.
  5. Improve the code and retest.
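The cycle above can be sketched in Python using the standard `unittest` module. The `total_price` function and its behavior are illustrative, not from any particular project:

```python
import unittest

# Step 1: write the test first. Running it at this point fails ("red"),
# because total_price does not exist yet.
class TestTotalPrice(unittest.TestCase):
    def test_totals_item_prices(self):
        self.assertEqual(total_price([2.50, 3.00]), 5.50)

    def test_empty_order_costs_nothing(self):
        self.assertEqual(total_price([]), 0)

# Step 3: write just enough code to make the tests pass ("green").
def total_price(prices):
    return sum(prices)

# Steps 2, 4, 5: run with `python -m unittest`, watch the tests go from
# red to green, then refactor and run them again.
```

Note that the tests are written before the implementation; the failing run in step 2 confirms the tests actually exercise the new code.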

What is the point of this process? At first glance, it may appear to make testing more complicated, not less. However, it ensures that testing is continuous throughout the project. This means that testing is preventing defects rather than finding and repairing them after the fact, thus improving the quality of the final product. TDD also provides a suite of tests for the software, almost as a side effect. The tests themselves, when well structured, can effectively serve as documentation, as they show what a piece of code is intended to do. Finally, when combined with the user or product owner's input, the process can be used to perform acceptance testing ahead of the coding, a practice known as Acceptance Test-Driven Development, which can help to develop and refine requirements.

Each of these items will ultimately need to be done, regardless of whether they are a part of the traditional testing and re-coding process. TDD allows them to be done in a manner that reduces the number of times steps need to be repeated. Does your organization use TDD? Leave a comment and share your thoughts!

James Jandebeur
CFPS | CTFL

Written by James Jandebeur at 05:00

Agile Testing: Budgeting, Estimation, Planning and #NoEstimates

QUEST

We're a broken record when it comes to software testing. As we've made clear time and time again, testing is undervalued in IT. It is one of the most important steps in the software development process, yet it's often hurried in the rush to deliver a final product. As Agile adoption continues to increase (which it will), it's even more important to emphasize the value of testing.

But, testing is routinely overlooked, or organizations don't understand how to prioritize testing within the frameworks in use, like Agile. This is what Tom Cagley, our VP of Consulting and Agile Practice Manager, spoke about at this year's QUEST conference: How to utilize Agile in testing environments. He discussed the difference between budgeting, planning and estimation as applied to testing in an Agile environment - and when they make sense, when they don't and in what combination for testing. He also explained the #NoEstimates movement and its role in Agile testing.

You can download his presentation, "Budgeting, Estimation, Planning and #NoEstimates - They ALL Make Sense for Agile Testing!," here. If you have any thoughts, we'd love to hear them; just leave a comment below or contact Tom directly.


Written by Default at 05:00

Can Function Points Be Counted/Estimated From User Stories?

Trusted Advisor

Introduction

Since the invention of function points (FPs), the same questions arise whenever new development methods, techniques, or technologies are introduced:

  • Can we still use FPs?
  • Do FPs apply?
  • How do we approach FP counting?

These questions came up around middleware, real-time systems, web applications, component-based development, and object-oriented development, to name a few. With the increased use of Agile methodologies, and therefore the increased use of User Stories, these questions are being asked again. It is good to ask these questions and have these conversations to ensure that the use and application of FPs is consistent throughout the industry in all situations. The short answers are:

  • Can we still use FPs? YES. 
  • Do FPs apply? YES. 
  • How do we approach FP counting? The answer to this last question is what this article will address.

Getting Started – Determine the Purpose and Scope

As with any FP count, it is important to identify the purpose of the count and to fully understand how the resulting data will be used. This will ensure that the correct timing, scope, and approach are used for the FP count. The following are examples of situations where FPs can be useful.

Purpose: High-level estimate to determine feasibility

If the purpose is to determine the feasibility of moving forward with the project or to complete a proposal, then typically a high-level estimate in a range is adequate. For this count the timing would be "now," and the scope would be whatever functionality is going to be developed. At this point in the life cycle not all information may be available, so some assumptions may need to be made. It is important to document these assumptions so that if the project progresses differently than planned it can be explained. For example, a User Story may state that "As a User I want to have a Dashboard showing application statistics."  It may be too soon to know the exact details, so an assumption of five average complexity External Outputs (EOs) may need to be made.  
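The arithmetic behind such an assumption is simple. Using the standard IFPUG weight of 5 for an average-complexity EO, the Dashboard story above would be sized as follows (the five-EO figure is the documented assumption, not a counted result):

```python
# Assumption from the Dashboard User Story: five average-complexity
# External Outputs (EOs). The standard IFPUG weight for an average EO is 5.
eo_count = 5
average_eo_weight = 5
dashboard_estimate = eo_count * average_eo_weight
print(dashboard_estimate)  # 25 unadjusted function points
```

If the project later breaks the Dashboard into more (or fewer) reports, the documented assumption explains the difference.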

Purpose: Estimate for Project Planning

Once a detailed plan is required, then more detailed estimates for effort and cost are necessary. For this purpose, the FP sizing should be completed at the start of the life cycle and updated at each major development stage. For Waterfall it could be at Requirements, Logical Design, and Physical Design phases. For Agile, the timing could be at Program Increment (PI) planning, or Sprint planning or both.  This purpose will require more accurate and thorough data, which requires a more detailed FP count, so more detailed User Stories are typically available. For example, the above Dashboard User Story may be broken down into 5 separate User Stories each describing a specific report: "As a User I want to be able to see a pie chart showing customer complaints by type."  In this case, each report can be examined to determine uniqueness and counted accordingly.

Purpose: Manage Change of Scope

Once a project is underway, it is a good idea to track changes in scope to determine if the effort, cost, and schedule are going to be impacted by the change. These types of counts can be completed at different phases or at the time the scope change is identified. Once a change is sized using FPs, estimates can be developed to determine if the change should be incorporated into the current project and/or Sprint, or moved to another project and/or Sprint. A new User Story could be "As a User, I want to be able to search customer complaints by type." In this case, a new report would be identified. If the User Story was "As a User, I want to be able to choose the color of customer complaint types in the pie chart," this would be a change to the initial report we counted.

Purpose: Measure Quality and Productivity

If the purpose of the FP count is to support measuring the actual quality and productivity achieved for a project, PI, or Sprint, then typically User Stories wouldn't be the source document of choice. This type of count is completed once functionality has been delivered, so ideally one would want to use the "live" system or user manuals to identify the actual functionality delivered and obtain the most accurate measurement. However, if access to the system isn't available, User Stories may be the only source documentation available. Documentation often isn't updated after the fact to show what was and wasn't implemented, so for this purpose it is important to confirm with development staff and/or users what was actually delivered, in addition to referencing the User Stories.

Utilizing User Stories for FP Counting – Overall Approach

Once the purpose and scope have been determined, the actual FP counting can begin. Applying the International Function Point User Group (IFPUG) rules is the same regardless of the purpose; however, the level of detail and the inclusion of functionality may be different depending on how the data will be used.

Conducting FP counts from User Stories is a bit easier than from other documentation, since most User Stories focus on the User perspective of "what" functionality is desired and not on "how" the functionality will be developed and delivered. This perspective can be difficult to find when looking at Designs or even the flow of physical screens. User Stories by their nature keep the FP analyst focused on the User perspective.

The IFPUG counting process starts with defining scope and boundaries and then moves on to identifying data functions and transaction functions. With a list of User Stories, it is more likely that all of this will be decided together as the count develops.

When counting from User Stories, the best approach is to simply walk through them one by one. Oftentimes User Stories are grouped by categories (e.g., Order Entry, Validations, Reporting, Financials). If that is the case, it is best to focus on one category at a time. If it is early in the life cycle and the application boundaries are uncertain, it is best to take a first cut at counting the functions. Once the full scope of functionality is known, boundaries can be determined and the FP count adjusted as necessary.

In following the IFPUG rules, it is important to count the logical functions. This can be difficult depending on the level of User Stories. It would be wonderful if everyone followed the same format and wrote User Stories the same way, but unfortunately that is not the case. One organization may have one high-level User Story for a project, while another organization may write multiple User Stories for the same functionality. One of the benefits of using FPs for sizing is that the method is consistent across all methodologies and isn’t impacted by how the documentation is completed. For example:

High Level – One User Story

  • As a User, I want to be able to enter, update, delete and view orders in the system to avoid manual paperwork.

Lower Level – Multiple User Stories

  • As a User, I want to be able to enter new orders in the system to stop paperwork.
  • As a User, I want to be able to edit orders previously entered in the system to stop paperwork.
  • As a User, I want to be able to delete orders previously entered to avoid incorrect orders being processed.
  • As a User, I want to be able to enter selection criteria to view orders previously entered in the system to stop searching paperwork.
  • As a User, I want the system to use entered selection criteria to display the correct orders to stop searching paperwork.
  • As a User, I want the system to validate the data entered into the fields when an order is added or updated to ensure accurate data is entered.
  • As a User, I want the system to validate the ordered product is "on hand" before accepting the order.

In the above examples, the FP count would be the same. When a User Story seems to be at a high level, it is important to break it down into all of the Elementary Processes (EP). When User Stories are written at a lower level, it is important to look at all of the similar stories together to potentially combine them into the EPs.

The result of the example above is as follows:

[Table: User Stories and Function Points]

User Stories typically equate to the Transactional Functions (EIs, EOs, EQs); however, it is important for the FP Analyst to also keep Data Functions in mind while analyzing the User Stories. There may not be a list of tables or a data model available, so the FP Analyst may have to assume the ILFs based on the transaction functions.

Early in the life cycle, assumptions may need to be made, as documented above. Since the User Stories imply the project is automating a manual system, all functions would be new. That means that for orders to be edited or displayed, they must be stored somewhere; hence the ILF is counted.
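As a sketch of the arithmetic for the order-entry example, using the standard IFPUG unadjusted weights (the complexity ratings assigned below are illustrative assumptions; a real count derives them from DETs, FTRs, and RETs):

```python
# Standard IFPUG weights for unadjusted function points, by complexity.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # External Inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # External Outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7, "high": 10},   # External Interface Files
}

# Illustrative Elementary Processes and data functions for the order-entry
# stories: add/edit/delete orders (EIs), view orders via selection criteria
# (EQ), and the implied order data store (ILF).
functions = [
    ("EI", "average"),  # add order
    ("EI", "average"),  # edit order
    ("EI", "low"),      # delete order
    ("EQ", "average"),  # view orders by selection criteria
    ("ILF", "low"),     # order file (assumed from the transactions)
]

ufp = sum(WEIGHTS[ftype][complexity] for ftype, complexity in functions)
print(ufp)  # 4 + 4 + 3 + 4 + 7 = 22 unadjusted function points
```

Whether the stories are written at a high level or a low level, grouping them into the same Elementary Processes yields the same total.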

If at all possible, the FP Analyst should meet with Subject Matter Experts (SMEs) who understand the User Stories to get a full understanding and/or answer any questions. In addition, the FP Analyst should reference existing systems that may be comparable or past counts that may be relevant. FP Analysts usually have knowledge of many types of systems. It is okay to bring that knowledge and experience to the FP count to help identify potential functionality. Of course, everything still should be validated by the SMEs. 

If SME involvement is not possible, or if things are still not clear, then any assumptions that are made need to be documented fully. This will ensure that the FP count can be explained and updated correctly as the project progresses. In the example above, the assumptions document how the complexity was determined (e.g. Product file used for validation on Create and Edit EIs; Data Element Type (DET) assumptions). In addition, any further questions are documented (e.g. Need to check for multiple order Types that could impact the Record Element Types (RETs) and thus functional complexity of the ILF – this may also impact the number of Transactional functions).

Agile Development - Additional FP Counting Considerations

Since User Stories are typically associated with Agile development, it is worth mentioning a few items to consider for the FP counting in terms of timing and inclusion.

FP counting can be completed at the Program Increment (PI) level and/or the Sprint level. The PI usually encompasses the final delivered functionality, so the FP counting is completed normally. For an estimate, the count can be completed at PI planning. For quality and productivity measures, the counting occurs at delivery of the PI. Counting Sprints is handled a little differently.

Sprints can also be counted for estimation/planning, and at the end of the Sprint for productivity and quality measures. However, the sum of the Sprint counts is often greater than the PI count. The level at which the User Stories are written can be impacted by the time boxing of the Sprints. For example, an initial User Story may be, "As a User, I want to be able to create a new order." During Sprint planning, it may be determined that the entire function cannot be completed in one Sprint, so it may be split into two User Stories:

  • As a User, I want to be able to enter general information when creating an order.
  • As a User, I want to be able to enter order details when creating an order.

In this case, the FP count of the Transaction would be as follows:

[Table: FP count of user stories]

The Sprints cannot simply be added together to obtain the total FP count for the project. Counting at the Sprint level is usually for internal measures to ensure the PI goals will be attained. It can also point out inefficiencies in the development process: if too much "rework" is occurring, perhaps changes need to be made in how the project is being planned and managed. The ultimate goal is to complete an entire EP in one Sprint and only revisit it in a later Sprint if new requirements are discovered.
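A small numeric illustration of why Sprint counts exceed the PI count (the complexity ratings here are assumptions, chosen only to show the mechanism):

```python
# One Elementary Process ("create order") split across two Sprints.
# Sprint 1 delivers a partial EI; Sprint 2 completes it, and the whole
# EI is re-counted at its final complexity.
sprint_1_count = 4   # "enter general information" counted as an average EI
sprint_2_count = 6   # completed "create order" re-counted as a high EI

sprint_level_total = sprint_1_count + sprint_2_count  # 10 FP of Sprint-level work
pi_level_total = 6   # the PI delivers one high EI worth 6 FP

print(sprint_level_total, pi_level_total)  # 10 6
```

The 4 FP gap is the "rework" the Sprint-level measure surfaces; only the PI-level count reflects the functionality actually delivered.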

Conclusion

FPs are the best measure for "size" and can be used for all methodologies and technologies. FPs can be counted from any documentation or from just interviewing SMEs. The most efficient and accurate FP counting uses both supporting documentation and information from SMEs. User Stories are an excellent source of information for FP counting. User Stories represent the User perspective and are typically written in a way that describes the functionality required. So, “Can function points be counted/estimated from user stories?” Absolutely. “What level of granularity is required?” Any level can be used; however, as with any documentation used, the more detailed the User Story the more accurate the FP count.

Written by Default at 05:00

5000-1 Foxes in the Henhouse

Steve Kitching

In the past week, we have seen one of the most remarkable sporting achievements by an unlikely underdog here in the UK. Leicester City FC, of the English Premier League, won the league with two games to spare, beating illustrious teams such as Manchester United and Chelsea to the top.

This was a team with no stars; in fact, it had barely escaped relegation a year before, and the bookmakers made it a 5000-1 outsider to win the title.

How did they do it? Some say it was the discovery of Richard III's remains in a local car park, and their subsequent reburial in Leicester Cathedral, that brought the team this run of good fortune.

The truth is that the victory was due to an incredible display of teamwork and commitment. There were no egos, just a drive to perform and support each other to consistently deliver results time after time.

Other teams failed, including my local side, Newcastle, where egos and attitudes ruled and performances and results suffered.

Why am I talking about this on a software blog? We can all relate this situation to our experience of teamwork in the IT world. I’ve worked on teams where egos dominated. Whoever shouted the loudest would win, and inevitably the team would struggle to meet its goals. Compare this to teams who worked together harmoniously and delivered the goods time after time. The way a team works together directly affects the results.

This lesson can apply to any team in the IT space, but the mantra of teamwork should shine brightest in the Agile space, where the team(s) should pull together toward a common goal.

How do you improve team culture and habits? We suggest the AgilityHealth Radar, which is a strategic retrospective that focuses on the top areas that affect team performance and health. With the results, there is a clear path forward to improved team culture and thus improved results.

So, are you a Leicester or something else entirely?


Steve Kitching
Estimation Specialist

Written by Steve Kitching at 05:00

Model Maturity Versus Software Value Delivery

Tony Timbol

Our experience in helping organizations implement frameworks and maturity models (TMMi, CMMI) has been nothing short of positive. Maturity models are useful devices to guide understanding for a large, organized group of professionals working together. So, what about using maturity levels within a SAFe implementation? Why? To incentivize different release trains (groups of people) to improve.

On the surface, this seems reasonable and the intention sincere. However, the maturity level designation is inappropriate for the scaled Agile environment. A maturity level is a static point in time; it is a declaration of compliance or conformity. Conformity is not a core Agile value; adaptability to purpose is, and that is about getting work done as fast as possible within immediate, observable timeframes.

Now the intention to provide incentives to improve is commendable. Retrospectives built into the process (within Agile and SAFe) are designed to accomplish that. Immediate impediments are addressed on a short horizon. Enterprise-level obstacles are identified but may take some time to address. Having an appraisal, inspecting artifacts and declaring an achievement of compliance does not add value. Again, in the SAFe environment, maturity level compliance is incongruous.

Incentives, in Agile, can be tied to many things. In SAFe, tying them to value delivered ultimately respects the organizing principle of the Value Stream, elemental to SAFe. Creating and publishing value delivery metrics on a per-release-train basis calls back to factory production lines. Positive peer pressure works. It’s a core principle at the team level and it is scalable.

Designing the right value delivery metrics, that not only incentivize, but also inform the organization as to throughput, capability and potential issues is a worthy internal discussion to have. What do you think?


Tony Timbol
Vice President

Written by Tony Timbol at 05:00
