The 500lb Marshmallow!

Never give up and keep swinging!

I often describe challenges that seem to move only an inch at a time, despite a ton of effort, as “swinging at the 500-pound marshmallow!” Over my career in the software industry, I have found that many problems in our customers’ environments could also be described as 500-pound marshmallows. But, as an independent software assurance company, proServices often has both the perspective and the objectivity to recognize and assess these challenges more readily than our customers, who are frequently too exhausted by them to see what’s coming.

These challenges range across the software lifecycle and have been around since the days of the ENIAC.

Some notable challenges from the C-Suite of our customers include:

  • “Why are my projects always late?”
  • “Why are my projects always over budget?”
  • “We always find out about our defects too late and from our customers first!”
  • “What process should we be following?”
  • “How do I get more out of our testing efforts?”
  • “We need to do a better job of estimating cost, schedule and functionality upfront.”
  • “I wish we had more proactive visibility into our technical debt.”

Fantastic concerns, and exactly the ones you would expect from the C-suite. These challenges have not changed much over the years, but the technology underneath has changed dramatically: architectures, languages, hardware, data models. As a result, the approaches available to tackle these challenges continue to improve.

So, instead of throwing the same old series of punches at these marshmallows, why not try something new and see if we can move them a foot instead of an inch?

Design & Estimation

For starters, you need to put a proper estimation process in place at the front end, as well as a mechanism to measure results on the back end. This ensures that you are producing the promised functionality for the projected cost and schedule. Most of you would probably agree that quality is designed into a system starting at the beginning of the process. I am not an expert in software architecture, but I do understand the importance of a well-thought-out strategy that incorporates growth and flexibility to accommodate constant change. Addressing these items upfront will help mitigate substantial defects later in the lifecycle.

One of the biggest benefits of a solid estimation capability is being able to gauge how well tuned your engine (or process) is for production capacity. That constant feedback gives organizations the information they need to continually improve their process. There are companies that offer consultants in both of these areas, as well as a number of products on the market for you to do it yourself, perhaps with some outside expert guidance. Done.
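Before moving on, a quick illustration of what “measuring the result on the back end” can look like. This is a hypothetical sketch, not a prescription; the project names and figures are invented.

```python
# Hypothetical sketch: compare up-front estimates with measured results and
# derive a calibration factor for the next round of estimates.
# Project names and figures are invented for illustration.

projects = [
    # (name, estimated effort in hours, actual effort in hours)
    ("Billing rewrite", 1200, 1500),
    ("Customer portal", 800, 760),
    ("Reporting API", 500, 650),
]

for name, estimated, actual in projects:
    variance_pct = (actual - estimated) / estimated * 100
    print(f"{name}: estimated {estimated}h, actual {actual}h, "
          f"variance {variance_pct:+.0f}%")

# Average actual effort per estimated hour: a crude measure of how well
# tuned the estimation "engine" currently is.
calibration = sum(a for _, _, a in projects) / sum(e for _, e, _ in projects)
print(f"Historical calibration factor: {calibration:.2f}x")
```

Even a simple loop like this, run consistently, gives you the feedback needed to tune the next set of estimates.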

Build it Right the First Time

How? By implementing an Agile process built around continuous integration and DevOps. Once that’s in place, you should have a system that produces software risk analytics reports profiling quality, security and performance for every build. By providing early and iterative transparency into technical debt, you will significantly reduce the rate of defect injection into your software during the execution phase. That means leveraging technology platforms to collect the important data; correlating it to the risks your organization needs to understand and proactively mitigate; and, finally, having the collaborative mechanisms to socialize this data across the organization, from executives to engineers. There are also companies that offer this as a service and provide platforms that integrate these technologies into the DevOps environment. Done.
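To make the “risk analytics for every build” idea concrete, here is a minimal sketch of a per-build risk gate. It assumes your analysis tooling can export a JSON report; the file name, metric names and thresholds are illustrative and not tied to any particular product.

```python
# A minimal per-build "risk gate": fail the build when the analysis report
# shows more new risk than the organization has agreed to tolerate.
# The report format, metric names and thresholds here are hypothetical.
import json
import sys

THRESHOLDS = {
    "critical_security_findings": 0,   # no new critical security issues
    "blocker_quality_violations": 0,   # no new blocker-level code violations
    "performance_hotspots": 5,         # tolerate a handful, flag the rest
}

def main(report_path: str) -> int:
    with open(report_path) as f:
        report = json.load(f)

    failures = []
    for metric, limit in THRESHOLDS.items():
        value = report.get(metric, 0)
        if value > limit:
            failures.append(f"{metric}={value} (limit {limit})")

    if failures:
        print("Build risk gate FAILED:\n  " + "\n  ".join(failures))
        return 1
    print("Build risk gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "risk_report.json"))
```

Wired into the CI pipeline, a gate like this turns technical-debt visibility into something that happens on every build rather than at the end of the project.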

Testing has a Seat at the Table

Now, your testing team has to be integrated with your development team, driving a test-driven development process from the beginning. Key metrics, such as functional and performance testing success/failure ratios, should be mapped into your dashboards, as well as test coverage metrics to measure the effectiveness of your testing efforts. Then, you should use automation integrated into your DevOps and CI environment to streamline operations. By designing tests upfront and setting aggressive performance targets, you will significantly reduce defect injection rates at this phase of the lifecycle. Again, there are companies that offer this as a service and technologies to make this happen. Done.
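For illustration, the dashboard metrics above boil down to simple ratios. The sketch below assumes your CI system can hand you per-build test results and a coverage figure; the data structure and numbers are hypothetical.

```python
# Illustrative calculation of the testing dashboard metrics mentioned above.
# The per-build results and coverage figure are invented.

build_results = {
    "functional": {"passed": 482, "failed": 9},
    "performance": {"passed": 35, "failed": 2},
}
statement_coverage = 0.78  # fraction of statements exercised by the suite

for suite, counts in build_results.items():
    total = counts["passed"] + counts["failed"]
    success_ratio = counts["passed"] / total if total else 0.0
    print(f"{suite} tests: {success_ratio:.1%} passing ({counts['failed']} failures)")

print(f"Statement coverage: {statement_coverage:.0%}")
```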

What’s my point? One of the top five complaints we have heard over the years from the C-suite involves finding defects too late in the process. Yet, we continue to swing at the 500-pound marshmallow with the same strategies, expecting different results and finding only frustration. Everything I’ve outlined above can be tackled internally by most companies, but sometimes the solution requires an objective third party who can analyze the problem and then implement the solution on your behalf. Maybe moving the 500-pound marshmallow requires a new perspective and the help of someone else, but in the end, you need to keep swinging at these challenges to keep moving forward.

Keep Swinging! 

 

Rob Cross
PSC Vice President

The Mythical Accurate Estimate

I’ve been contributing to a stream on the ISBSG discussion forum on LinkedIn recently, and it amazes me, though it shouldn’t, that estimating continues to generate so much discussion. Part of it boils down to confusion between price and estimate. The myth persists that an estimate is a quote, but it cannot be, as an estimate will only be 100 percent accurate at delivery.

In reality, estimates are approximations based on models. That is true of expert estimating as much as it is of SLIM, SEER and COCOMO. The quality of the model depends on the quality and completeness of the input data.
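To show what “based on a model” means in the simplest possible terms, here is a sketch using Basic COCOMO, the simplest published form of Boehm’s model (the commercial tools named above are far more elaborate and calibrated against much larger datasets). The 50 KLOC input is just an example figure.

```python
# Basic COCOMO, "organic" project mode (Boehm, 1981):
#   effort (person-months) = 2.4 * KLOC^1.05
#   schedule (months)      = 2.5 * effort^0.38

def basic_cocomo_organic(kloc: float) -> tuple[float, float]:
    """Return (effort in person-months, schedule in months)."""
    effort = 2.4 * kloc ** 1.05
    schedule = 2.5 * effort ** 0.38
    return effort, schedule

effort, schedule = basic_cocomo_organic(50)  # an example 50 KLOC system
print(f"Effort: {effort:.0f} person-months, schedule: {schedule:.1f} months")
```

The point is not the particular coefficients but the transparency: a model makes its inputs and assumptions explicit, which is exactly what lets you improve it with better data.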

An estimate is the most accurate view of the likely effort needed to deliver a specified set of tasks in a defined period and to a set quality, given the quality of the information available at the time the estimate is created. With large projects, estimates should be refined over time.

So, why go with parametric models rather than expert estimates? The simple answer is that over time, parametric models have been proven to be much more accurate than expert estimates. Or, maybe I should say, they are much less inaccurate than expert estimates.

Expert estimates are done by highly skilled people. Studies show that they are as likely to be wrong as right – see our Trusted Advisor Report – though when experts create a model based on historical data, they can be better.

In EDS, we used to think of the estimating process as if it were a mini-project in its own right, and I think that is still a valid proposition. You gather the data, you process it, and doing so, you identify risks associated with the information you have been given. You have to surface and manage the risks, and your estimate must indicate where the risks lie. FPA allows you to size a requirement, but a poor requirement leads to an inaccurate size measure. You must record the potential for growth as part of the sizing process and identify the risks to the project.
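As a purely illustrative sketch of recording growth potential alongside the size (the multipliers below are invented and are not part of any FPA standard):

```python
# Hypothetical sketch: attach a growth allowance and a risk flag to a sized
# requirement based on how well-specified it is. Multipliers are invented.

GROWTH_ALLOWANCE = {
    "well_specified": 1.10,   # modest contingency
    "some_gaps": 1.35,
    "sketchy": 2.00,          # flag as a red risk; expect major growth
}

counted_size_fp = 1000            # size from the function point count
requirement_quality = "sketchy"   # assessed during the sizing exercise

planning_size = counted_size_fp * GROWTH_ALLOWANCE[requirement_quality]
red_risk = requirement_quality == "sketchy"

print(f"Counted size: {counted_size_fp} FP")
print(f"Risk-adjusted planning size: {planning_size:.0f} FP"
      f"{' (RED RISK: requirement lacks detail)' if red_risk else ''}")
```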

The worst case I ever saw was a company that had poor estimating skills and didn't recognise the risks associated with a wildly sketchy requirement. We were brought in as the programme was being canned. The original requirement was counted by my team at 7,000 FP, but we marked it as a red risk for lack of detail and potential for extreme growth. The final size was 30,000 FP and growing. They fixed the price based on the original requirement, didn't revisit the estimate, and then wondered why it all went wrong.

Many projects use multiple methods for delivery, so when you're creating your estimate you have to simplify the model so that the variables are manageable. Trying to take every variable into account means that by the time you deliver your estimate, you run the risk that the work will have already been done by someone else. In any case, if you have no size measure and no risk register associated with the measurement, you won't have a viable estimate.

An estimate is an approximation that enables you to manage your financial exposure, set a budget, construct a plan and build a team to deliver it. If the risks are too great and unmanageable, don't even start the project. Iterate the process until you can deliver a meaningful estimate; that way, you are more likely to deliver the project successfully.

 

Alan Cameron
Managing Director, DCG-SMS


Software Analytics Training

David Consulting Group is well-known throughout the industry for our expertise in Function Point Analysis and Software Estimation. As such, we offer a number of classes to share what we’ve learned over the years and to help your organization improve its Software Analytics practice, no matter your current level of implementation.

Here’s what we currently offer:

  • Function Point Fundamentals - A two-day course to teach you how to apply function point counting techniques. This class can be deployed on-site or via DCG University, our online learning platform.
  • Function Points: Advanced - A one-day course to accelerate your comprehension of the Function Point Analysis technique through intensive instruction and hands-on practical application.
  • CFPS Preparation - A one-day course to prepare participants for their certification exam. This class can be deployed on-site or via DCG University, our online learning platform.
  • Quick and Early Function Points - A one-day class that provides quick techniques for sizing units of work to support estimating and measuring progress.
  • Estimation Techniques Workshop - A two-day course to help you to understand estimation and sizing as it applies to your organization.

All of our classes can be customized to meet your needs, so if you have specific goals or questions, just ask! We’re happy to work with you in any way that we can to help you implement or improve a Function Point Analysis or Estimation program. If training isn’t what you’re looking for, we provide Advisory Services as well!

Contact:

David Herron, Vice President
d.herron@davidconsultinggroup.com

 

David Herron
Vice President, Software Performance Management


Are Estimates Based on Historical Data or Subject Matter Experts Better?

This month’s Trusted Advisor report, written by Tom Cagley, Vice President of Consulting, addresses a popular question regarding estimation: Are Estimates Based on Historical Data or Subject Matter Experts Better?

There are a number of different estimation techniques, from model-based to composite to learning-oriented. However, expertise-based methods are the only ones not built on a model or mathematical network generated from collections of historical data. Expertise-based estimation is, by far, the most popular form of estimation in use today.

This report examines the question at hand, to see if the popular choice (expertise-based estimation) is the “better” choice. Ultimately, we conclude that estimates generated from models leveraging historical data in calibrated tools are the only logical choice. Read the full report to see how we reached this conclusion.

Read the Report.


A Simpler Life Without Function Points?

We have been using function points for the past 18 months. I am responsible for our function point program, and I am beginning to think that life was much simpler without them. Function Point Analysis is a technique for measuring the functional size of a delivered piece of software. Simply stated, it quantifies the features and functions that my user has asked me to deliver. The methodology is pretty straightforward, using techniques and terms that are easy to understand and mathematical formulas that we learned in the third grade. I have been using function points for two purposes: to estimate and predict software delivery outcomes and to measure performance (i.e., productivity and quality).
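For a flavor of that third-grade arithmetic, here is a minimal sketch of an unadjusted function point count using the standard IFPUG average-complexity weights; the component counts are invented for illustration.

```python
# Unadjusted function point count with standard IFPUG average-complexity
# weights. The counted components below are invented for illustration.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

counted_components = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 6,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

unadjusted_fp = sum(
    AVERAGE_WEIGHTS[kind] * count for kind, count in counted_components.items()
)
print(f"Unadjusted function points: {unadjusted_fp}")
# 12*4 + 8*5 + 6*4 + 4*10 + 2*7 = 166
```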

In my shop, everyone is asked to estimate their projects. The project estimates are to include predicted levels of effort, cost and delivery schedules. We use function points as a size indicator because it makes sense that if you have to accurately predict how long and how much effort developing a piece of software is going to take, you should have some sense of its size. Of course, there are other variables you have to solve for, like complexity and how you plan to manage the development of a particular piece of software given its size and complexity. I actually find function points to be a good indicator of size and very useful in helping me articulate exactly which features and functions we are going to deliver to our end user. This ultimately makes it easier to discuss estimates with our end user and to manage their expectations. So, what's the problem?

In my shop, things are always changing. Customers frequently make last-minute changes to what they want delivered, or a project’s key resources are borrowed for other projects that are in trouble. That makes it very difficult to accurately predict outcomes, and ultimately, projects are occasionally not delivered on time or within budget. And every time one of these failures occurs, it seems like folks are quick to blame function points. Function points become the convenient focal point of all that is wrong with a failed project. The logic behind this thinking is that function points must not have accurately sized the project and, therefore, we could not properly predict the outcomes. I find myself constantly defending function points and pointing out the real culprits for these occasional failures: ambiguous requirements, changing priorities, resource constraints (take your pick!). Without function points to blame, everyone could go back to blaming each other for the project failures that occur, and my life would be much simpler.

As I mentioned earlier, we are also using function points to measure things like productivity and product quality. It is pretty cool. We measure the function points we deliver and use effort and cost to determine a rate of delivery and a cost per function point, two key performance indicators. We can also show a comparative value across projects by using function points as a normalizing factor when assessing the number of defects delivered for a piece of software, a measure known as defect density.
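As a worked illustration of those indicators (the project figures below are invented, and the staff-month conversion of roughly 160 hours is an assumption):

```python
# Illustrative arithmetic for the key performance indicators described above.

function_points = 250
effort_hours = 2000
cost_dollars = 300_000
defects_found = 15

staff_months = effort_hours / 160            # ~160 hours per staff-month assumed
delivery_rate = function_points / staff_months
cost_per_fp = cost_dollars / function_points
defect_density = defects_found / function_points

print(f"Delivery rate: {delivery_rate:.1f} FP per staff-month")
print(f"Cost per FP: ${cost_per_fp:,.0f}")
print(f"Defect density: {defect_density:.3f} defects per FP")
```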

Sounds pretty good, doesn't it!? You would think that these measures would be good information to have. Well, here's the issue. In my shop, people are not held accountable for properly recording their time; therefore, we don't do a very good job of accurately recording time spent on our projects.  And so what happens is, once people know that performance is being measured, they get a little crazy. They start changing their behavior. They find that they can “game” the system and start recording their time such that they can make their performance numbers look better. In addition, folks aren't held accountable for recording defects, and so getting any kind of consistent or accurate defect information is a challenge as well.  Obviously, all of this distorts the measures and impacts their credibility. 

So, once again, if we stopped using these darn function points, we could find any number of alternative excuses for our project mishaps – just like we used to. It would allow for greater creativity and resourcefulness on the part of project managers who are trying to save their rear-ends. Rather than use their fingers to count function points, they could use their fingers to point at and blame others. That way, they can't be held accountable (or learn from their failures). And it would make my life so much simpler.

 
David Herron
Vice President, Software Performance Management


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
