Benchmarking – A Slap in the Face or an Informed Discussion?

Talking Down to the Client

I’ve recently been talking to some clients about their experience of software development benchmarks, either as customer or supplier, and overwhelmingly they’re telling me that it’s not often a good experience.

“Oh, I’m fed up with the whole process,” said one outsourcer. “Clients engage the blue chip benchmark suppliers, and, after a tussle and a lot of ill-will, a report lands on a desk. The results are presented as a simple answer, which is ‘the right answer,’ and we are left to scrap with our client over the results. The method of comparison and the dataset used – size and characteristics – aren’t made clear, and the result is unsatisfactory. In the worst cases, it can be contract threatening.”

I have some sympathy. Poorly created benchmarks can be misleading and in the worst cases can lead to court action. I encountered a situation once where a major outsourcer was producing a complex financial regulatory system, and the client decided to benchmark the programme. The results indicated that the software development was vastly inefficient and much too costly.

There was a dispute between client and outsourcer, which nearly went to court, until someone on the client side asked the question, “How many data points are we looking at, and what sort of applications were included in the sample?”  There was a metaphorical shuffling of feet and a sheepish reply. Basically there was one data point in the same industry and it referred to a CRM system. I don’t know who was more embarrassed, the client or the benchmarker.

Trying to commoditise the outputs may seem sensible, but the resultant model can be a gross simplification, and that is “slap in the face” benchmarking. Essentially, it’s a combative, adversarial process and it doesn’t work.

Collaborative Benchmarking

A more rational approach to benchmarking should involve a three-way discussion involving the client, outsourcer and benchmarker. Developing and enhancing applications is a skilled task, and it’s multi-dimensional. Benchmarks should reflect the things that matter to the client’s business; the benchmarker needs to be open and honest about the data used and must offer a range of answers to facilitate discussion.

We often hear about time, cost and quality, and when we benchmark, all three aspects have to be taken into account; I would add agility and flexibility to that list. Benchmark reports should balance software development business drivers against what’s being delivered. In a waterfall or similar process, where time is at a premium, either costs or defects tend to go up. Where quality is the driver, unit costs may go up because higher-skilled staff are used, but effort productivity may be better. If cost is key, the speed of delivery and quality may be less than optimal.

Agile should be a game changer, which is where client agility and flexibility become as important as that of the supplier. Suppliers may be well versed in delivering working software of high quality in a short time, but if the client doesn’t understand the business goals and can’t adequately groom and prioritise the product backlog, then all the benefits can be lost. But that’s a story for another time.

Go in With Your Eyes Open

The result of a benchmark should be to identify any process and cost inefficiencies in both client and supplier processes against an informed backdrop.

When you ask for a benchmark:

  • be sure what you’re asking for;
  • demand clarity and transparency from the benchmarker;
  • be prepared for rational discussion with the benchmarker and supplier;
  • collaborate and understand - there is no right answer;
  • be prepared to look inward – client processes can be as much the cause of sub-optimal performance as those of the supplier;
  • don’t trust the man (or woman) with the simple answer to the complex question.

Unless, of course, you want a slap in the face.


Alan Cameron
Managing Director, DCG-SMS

Written by Alan Cameron at 05:00

Is Agile Working for You?

By any measure we find that Agile can be more productive in software development (see, for example, the latest ISBSG Report), but some conversations I’ve had recently make me realise that businesses only get out of Agile what they put in.

Continual “fail fast” can be a recipe for ever-decreasing circles leading nowhere – i.e. total failure.

The concept of “fail fast” is touted by many authorities as one of the strengths of Agile. One can see that this concept works fine in a controlled environment, but taken to extremes, fail fast just becomes fail. I was recently asked to look at a major programme where Agile techniques were used and yet value-for-money improvements hadn’t materialised. The client wanted to know the delivered size of the final application, and it was large (just under 3,000 function points). They compared that with the cost and were horrified – costs were no better than Waterfall. That’s why we were called in. We looked at the whole picture, and the developed size turned out to be approximately 7,500 function points. They had discarded fully 60 percent of what was developed. They failed fast all right; they just didn’t learn fast.
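The arithmetic behind that “horrified” reaction is worth making explicit. A minimal sketch, using the rounded figures quoted above and assuming cost is roughly proportional to the size actually developed:

```python
# Illustrative figures from the programme described above (rounded).
delivered_fp = 3_000   # function points in the final application
developed_fp = 7_500   # function points actually built over the programme

discarded_fp = developed_fp - delivered_fp
waste_ratio = discarded_fp / developed_fp
print(f"Discarded: {discarded_fp} FP ({waste_ratio:.0%} of everything built)")

# If cost tracks developed size, the effective cost per *delivered* FP
# is inflated by the ratio of developed to delivered size.
inflation = developed_fp / delivered_fp
print(f"Cost per delivered FP is {inflation:.1f}x the cost per developed FP")
```

At a 2.5x inflation factor, even a genuinely productive Agile team will look no cheaper than Waterfall when judged on delivered size alone.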

If you’re throwing away more than you keep, maybe you don’t know where your business is going.

In Agile you still need to know what you are planning to deliver.

A development company asked us to train their staff in functional sizing to assist with estimating and measurement of value. It is an Agile shop with two-week time-boxed developments. We trained the staff and then they tried to apply their knowledge at the planning stage, only to find that in many cases the input documents weren’t fit for purpose. Basically, time-boxed planning was a guess. The client was happy with the outcome of our analysis, as it made them tighten up their backlog grooming processes.

Time-boxed development works best when you don’t waste time working out what you’re supposed to be delivering.

If you don’t own your business product, you take what you’re given.

Another client has had difficulty getting engaged, empowered business owners, so even though they use Agile techniques and produce working software, the result is often greeted by the business with disappointment. “Well, it’s nearly what I wanted,” or “That’s not what I wanted at all,” are all too common refrains.

If you throw unclear ideas over the wall and avoid your responsibilities, you won’t get fit-for-purpose software, and the value that Agile brings is, at best, diluted.

Vision, communication and engagement are key.

Enterprises need at least a broad vision of what their business will look like when the developers have delivered their system. Effective prioritisation and refinement of the product backlog enables key components to be delivered and change to be absorbed.

Developers should be empowered to say, “No, we will not start this work because we don’t know what we’re being asked for.”

Clients who commission work have to stay engaged. Their business representatives in the team must be empowered, and the commissioning manager must accept responsibility for the consequences if those representatives don’t, or won’t, remain engaged.

So, know what you want your business to look like, manage and groom your product backlog, don’t expect developers to start work until you can see where you’re going, and remain engaged and responsive throughout the process.

Garbage in, garbage out hasn’t gone away. With Agile, you just get it more quickly.

Alan Cameron
Managing Director, DCG-SMS

Written by Alan Cameron at 04:00

Do Classic Project Metrics Focus on Techniques and Tactics Rather Than on Outcomes?


In this month’s Trusted Advisor report, I take a look at what so-called “classic” project metrics are, how they might be defined, the consequences of the definitions, and how measures can be used effectively as part of assessing the outcome of a software development project.

Classic software metrics can be key inputs for outcome-focused projects, but they must be used appropriately so that the data is not skewed. Properly used, metrics provide clarity about what is expected of project teams in order to meet the needs of end users.

The report is available for download on our website, but only for the next month for non-members! To read this report, and past reports, you can sign up for Trusted Advisor, our complimentary research service. Each month we produce a research report that answers a question submitted and selected by members of Trusted Advisor.

Do you agree with our assessment in the report? Does your IT organization utilize metrics?

Alan Cameron
Managing Director, DCG-SMS

Written by Alan Cameron at 09:08

Benchmarks and What They Tell Us

As ever, when I start to think about benchmarks, I like to remind myself of the definition. The Oxford English Dictionary states that a benchmark is:

  1. A standard or point of reference against which things may be compared or assessed.
  2. A surveyor's mark cut in a wall, pillar, or building and used as a reference point in measuring altitudes.

Synonyms include “norm, touchstone, criterion, specification, model, exemplar.”

Now, this leads us to ask: what do benchmarks show us? They give us a picture, an image of the current performance of an organization. The temptation is to see the result as a simple statement of fact, uncluttered by its surroundings.

I came across a notice on a trip in the States – a request not to fish from a bridge. On the face of it, it’s a perfectly reasonable request, but when you step back and take in the whole structure, you start to think again: the drop from that bridge is 1,000 feet! The notice is clearly a joke, but taken at face value it’s a strong prohibition. That’s why our benchmarking service is designed to help clients see the whole picture.

Our aim is to help clients to understand not only where they are, but how they are progressing toward their goals. Single snapshots give us an indication of where we are in relation to the industry – today. An effective picture takes time and a number of data-points to fill in the missing corners.

Benchmarks should influence discussions on productivity and cost per output; they should not simply drive the conversation. The big picture is multi-faceted; productivity is affected by many things – size, timescale, quality, process maturity, and resource skills and availability, to name but a few – and any benchmark service should take that into account when engaging with clients.
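One way to make a benchmark “influence the discussion” rather than drive it is to report a peer range instead of a single verdict. Here is a minimal sketch of that idea; the peer sample and the cost figures are entirely hypothetical, purely for illustration:

```python
# Compare a project's unit cost against the interquartile range of a
# comparable peer sample, rather than a single industry average.
def assess(cost_per_fp: float, peer_costs: list[float]) -> str:
    peers = sorted(peer_costs)
    n = len(peers)
    q1, q3 = peers[n // 4], peers[(3 * n) // 4]  # rough quartiles
    if cost_per_fp < q1:
        return f"below peer range ({q1}-{q3}): discuss scope and quality trade-offs"
    if cost_per_fp > q3:
        return f"above peer range ({q1}-{q3}): discuss process inefficiencies"
    return f"within peer range ({q1}-{q3}): discuss drivers, not the headline number"

# Hypothetical peer sample: cost per function point, same domain and size band.
peers = [450, 520, 560, 580, 610, 640, 700, 760]
print(assess(690, peers))
```

The point is that the answer invites a conversation about why a project sits where it does, instead of delivering a single “right answer.”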

We have recently set up an agreement with Quantimetrics, an established independent benchmarking company, because we believe that the depth and breadth of data held by Quantimetrics enables us to bring even more to the benchmarking party. With our partners on board we bring a global, non-U.S. database to add to our U.S. data, plus many years of experience in data analysis.

Going back to our fishing notice, once we have the full picture, we can see that trying to fish from there is pointless. In short, the 1,000 foot view doesn’t work – let’s get down to the river and try again.

By making use of extra data and by monitoring progress over time we enable clients to progress on a journey towards better value for software development. We enable them to catch the big fish and throw back the small fry. We look forward to helping you on this journey.

What other benefits do you think benchmarking holds?

Alan Cameron
Director, DCG-SMS

Written by Alan Cameron at 08:12

A Hans Christian Andersen Fairy Tale for the 21st Century

I’m a child of the Fifties and Sixties, so one of my favourite entertainers was Danny Kaye, who was undoubtedly a very nice and very funny man. At work recently I’ve been reminded of his take on the H.C. Andersen story, “The King’s New Clothes.”

I was asked by a client to examine productivity for a heavily customised ERP implementation. Suffice it to say productivity was low, 3GL-like, so I set off to find documentary evidence for what we all “know” – that once you customise a package beyond about 30 percent, you cannot consider it a package anymore, and development productivity falls as a result. The more you diverge from the original, the lower the productivity.

In this case the end client insisted on going way beyond the tipping point, to where the great majority of the code was handwritten, while trying to hold the developers to the notion that package development should be much less expensive than hand-cut code. In fact, the changes had gone so far that the software could not be updated to the latest version, creating support headaches to rival the development productivity issue.
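A back-of-envelope model shows why productivity collapses so quickly past the tipping point: overall productivity is the harmonic blend of package-configuration work and hand-cut (3GL-like) coding, weighted by the customised fraction. The two rates below are assumed figures, purely for illustration:

```python
# Blend two delivery rates, weighted by the fraction of the system
# that is hand-customised rather than configured.
def blended_fp_per_month(custom_fraction: float,
                         config_rate: float = 40.0,  # FP/person-month, pure config (assumed)
                         code_rate: float = 10.0     # FP/person-month, 3GL coding (assumed)
                         ) -> float:
    # Effort per FP is the weighted average of the two effort-per-FP rates.
    effort_per_fp = (1 - custom_fraction) / config_rate + custom_fraction / code_rate
    return 1 / effort_per_fp

for frac in (0.0, 0.3, 0.7, 0.9):
    print(f"{frac:.0%} customised -> {blended_fp_per_month(frac):.1f} FP/person-month")
```

Because slow hand-cut work dominates the blended rate, even 30 percent customisation roughly halves productivity under these assumptions, and by 90 percent the “package” delivers at essentially 3GL speed.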

So, it should be easy to find the evidence to show the effect of extreme customisation on development productivity. Ah, but it isn’t.

No one seems willing to stand up and say, “We customised an ERP package and got really poor development productivity.” Databases only contain the successes, but that’s hardly surprising – nobody likes to look like a failure. There’s a certain amount of indirect evidence, including papers that set out the consequences of customisation, but little or no hard data. So that got me to wondering why there was no data.

The only conclusion I can come to is that we are unwilling to look unwise. We are like the courtiers in the story, unwilling to look foolish by saying, “If your business is so complex that we need to heavily customise the code, the software will cost a lot.” The sales pitch is still, “Buy ERP and cut your development and support costs.” The caveats that should be applied are simply not stated, because honesty would scupper the sale.

Client technical experts must be honest with their management too. They should stop trying to tick the box, “We use packages to cut development costs.” Unless they too are realistic, they are guilty of sophistry at the very least.

ERP is supposed to solve all our business process needs, and it can – with provisos. Clients must either change their business processes to match the solution, or accept that the business benefit of heavy customisation comes at a price: the software will be as expensive as writing it yourself.

Is this a disaster for ERP suppliers like Siebel, SAP and others? No, of course not. Their solutions bring quantifiable business benefits associated with standardisation of process development, uniform front ends and powerful portals for business; it’s just that they are not cheap.

Time for the little boy to stand up and say, “The King is in the altogether… he’s altogether as naked as the day that he was born.” A bit of honesty with clients and the setting of correct expectations would save a lot of commercial battles later.

Alan Cameron
Managing Director, DCG-SMS

Written by Alan Cameron at 06:00
