What's Your Most Important Software Risk?

Throughout the nearly 30 years I have been in IT, risk management has been talked about and shown to be vital; yet I think the corporate focus follows the money too closely.

Whether it’s client CIOs or supplier account managers, when I suggest that they should focus on software development risk, I get patronising comments about focusing on a less important part of clients’ spend. Typically I hear that 90% of software spend goes to keeping current services alive and functioning, so risk in software development is not important. After all, it doesn’t threaten the existence of the company today.

Existentially, that assertion is not true. Just today I read about two UK supermarkets struggling to cope with the volume of orders, their websites crashing as a result. Ignoring the 10% is ignoring the activity that effectively primes the business pump.

My view is that such failures can arise when corners are cut in software development. For example, stress testing is not carried out sufficiently, and when loads exceed system capacity, trouble ensues. Poor risk management, or worse, ignoring risks altogether because there’s not much money at stake, contributes to these issues.
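To make that concrete, here is a minimal sketch of the kind of stress test that gets cut: ramp up concurrent load on an endpoint and watch the success rate as capacity is approached. The URL is a placeholder of mine, and a real exercise would use a dedicated load-testing tool (JMeter, k6, Locust) against a production-like environment.

```python
# A minimal stress-test sketch, not a production harness: ramp the
# number of concurrent users and report the fraction of successful
# responses at each step. The endpoint below is hypothetical.
import concurrent.futures
import urllib.request

URL = "https://example.com/checkout"   # placeholder endpoint

def hit(_):
    """Issue one request; True on HTTP 200, False on error or timeout."""
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

for users in (10, 50, 100, 200):       # ramp the concurrent load
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        ok = sum(pool.map(hit, range(users)))
    print(f"{users:>4} concurrent users: {ok / users:.0%} success")
```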

Agile is our mantra today, and it’s one I subscribe to in a big way. It enables fast business change, so clients can keep ahead of the competition by introducing unique services as differentiators. But that speed of change also increases the risk of failure. Risk management of that 10% of your service spend becomes more, not less, important.

Strategic, Agile risk management must focus on the front end of the service flow – development. Assessing risk management is a key part of our Project Triage Solution, through which we aim to assist you from commissioning to delivery by giving you an independent assessment of the state of your project.

Ignore that 10% at your peril. 

Here’s to a disaster-free 2015.


Alan Cameron
DCG-SMS Managing Director


Plus ça change, plus c’est la même chose


This is the fifth and final post in a series that offers our management team's reflections on DCG's 20th anniversary and the state of the software industry. Previous posts: here, here, here, and here.

I make no excuse for using a French expression, as it sums up things very succinctly. “The more things change, the more they stay the same” is something of a cliché, and of course it’s not completely true, but then again, it’s not completely false.

This year is DCG/DCG-SMS’ 20th anniversary. In thinking about how the industry has changed in the past 20 years, there’s no better way to describe it than the above expression.

The Need For Change

Who remembers SSADM, the Structured Systems Analysis and Design Method? Developed by the CCTA, which also developed ITIL, SSADM was (and is) a very structured seven-stage waterfall process, from Stage 0 (Feasibility) to Stage 6 (Physical Design); oh, and then you had to build the application. SSADM is the culmination of the Big Process approach to application development: if all went according to the huge WBS and Gantt chart plan, a perfect system emerged from the sausage machine.

It didn’t work, of course. The problem with any process-heavy approach is the base assumption that the world stands still while you build the application. We all know that, even with small application builds, a lot of things change our view of the business needs while we’re working. SSADM was aimed at those government projects that would end up in the top one percent of the size range (5,000+ Function Points, or FP), where change during the project is so great that you can never reach the end point and “analysis paralysis” sets in.

Clearly there was a problem. One project I was on as a requirements manager delivered its first release at 0.5 FP/100 hrs, against industry expectations of about 6 FP/100 hrs. By the end, we were still only producing 1.5 FP/100 hrs and the application was nearly 8,000 FP. You do the math (at £400 per day base cost). The project team was huge and we were all working our socks off. We did deliver, of course, but with a lot of help from EDS, who became outsourcing partners during the project.
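For anyone who doesn’t want to do the math themselves, here is a back-of-envelope sketch. The 8-hour day and the assumption that our final rate of 1.5 FP/100 hrs applied across the whole build are simplifications of mine, not exact project figures.

```python
# Back-of-envelope cost of an 8,000 FP build at £400 per day.
# Assumptions: 8-hour days; the final 1.5 FP/100 hrs rate held
# throughout (in reality the early releases were far slower).
SIZE_FP = 8_000
DAY_RATE_GBP = 400
HOURS_PER_DAY = 8

def cost_gbp(fp_per_100h: float) -> float:
    hours = SIZE_FP / fp_per_100h * 100
    return hours / HOURS_PER_DAY * DAY_RATE_GBP

print(f"At 1.5 FP/100 hrs: £{cost_gbp(1.5):,.0f}")             # ~£26.7m
print(f"At the industry 6 FP/100 hrs: £{cost_gbp(6.0):,.0f}")  # ~£6.7m
```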

So now we have Agile. Large programmes are delivered through smaller applications, built incrementally by small teams using continuous integration to deliver change quickly and much more effectively. There is process, of course, exemplified by SAFe and DSDM, which provide programme-level support for Scrum and the development methods at its heart.

In those organisations that become Agile throughout, a software development methodology has become a method for delivering business change. Hooray, the cavalry has come over the horizon and saved the day. Or has it?

Things Stay the Same

So now we’ve moved on. We do things so much better, but projects still fail. The 2014 PMI Pulse report shows that $109m of every $1bn spent on projects is wasted. Highly Agile organisations are better at delivery; crucially, they tend to be more mature at change management and project management, to have active PMOs, and to have more visible and active project sponsors.

Sadly, in those organisations where effective agility has not been adopted, chaos reigns. Too often we hear the comment, “You don’t understand; we’re Agile, so we don’t need all that process.” Haven’t they heard of minimum marketable features? Where sponsors and teams have no idea where their business is going, Agile becomes a mask for failure of strategy, and dreams and money get poured down the drain.

So what remains the same is that discipline, foresight, and adherence to Agile processes are key to successful delivery. People and organisations that were poor at applying waterfall principles tend to be poor at Agile too.

Agile is a disruptive game changer; it can and does increase the rate of delivery while reducing costs, but treating it as a carte blanche for anarchy does the concepts behind it, and the businesses hoping to reap the benefits, no favours.


Alan Cameron
DCG-SMS, Managing Director


Balancing In- and Out-Sourcing

Recently, Hugo Miseur, a former colleague of mine, started posting a series on outsourcing on LinkedIn. I commented on one of his posts, and this blog post expands on what I said there.

Every company that wants to hive off its IT via outsourcing should have a discussion around the question, "What remains in the company if we outsource our IT?" There is no doubt in my mind that the most successful organisations retain enough expertise in-house to manage strategic requirements, to thoroughly test and accept new software and services, and to successfully manage their introduction into the client organisation.

Hugo and I met for the first time on a major government contract where, arguably, the client kept too much of its development in-house, mirroring the supplier’s roles at the architectural level. That made for unclear decision making, and it denied the outsourcing supplier the freedom to innovate, because risk-averse client staff kept applying the brakes. Some conflict followed, and a number of decisions were forced on the delivery team by the client, leading to poor package selection, amongst other things.

Contrast that with a major client with whom I'm currently working, where the realisation came, some five to six years after outsourcing, that they had sold off their strategic expertise, along with the technical skills to deliver huge change on a regular basis. The result was that suppliers, some of whom were delivering the software, were making business design and architecture decisions for the client. They are correcting that now, but it will take time to rebuild coherent expertise.

I think the balance between what to keep in-house and what to outsource is fraught with difficulties, but that should not stop outsourcing. Some outsourcing arrangements fail not because the supplier failed, but because the client did not know how to manage the interface. Insourcing is sometimes the panic response, as is changing suppliers without analysing why the contract failed.

To me, the key is to keep enough expertise in-house for the retained IT organisation to play a key decision-making role as part of executive management. Once the upheaval of outsourcing is done, the organisation must shift its mindset and stop treating IT as a necessary evil or merely as a support service.

The challenge for the retained IT function is to become the visionary enabler of business development. Outsourcing enables that change of purpose, but how many companies actually make that leap of faith?

 

Alan Cameron
DCG-SMS, Managing Director


The Mythical Accurate Estimate

I’ve been contributing to a thread on the ISBSG discussion forum on LinkedIn recently, and it amazes me, though it shouldn’t, that estimating continues to generate so much discussion. Part of it boils down to confusion between price and estimate. The myth persists that an estimate is a quote, but it cannot be, as an estimate is only 100 percent accurate at delivery.

In reality, estimates are approximations based on models. That is as true of expert estimating as it is of SLIM, SEER, and COCOMO. The quality of the model depends on the quality and completeness of the input data.
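To show the shape of such a model, here is a sketch using the published Basic COCOMO "organic mode" constants. Real parametric tools like SLIM and SEER are far richer than this; the point is simply that effort is a nonlinear function of an estimated size, so the output is only as good as that input.

```python
# Basic COCOMO, organic mode (Boehm, 1981): a deliberately simple
# parametric model. Effort and duration scale nonlinearly with size.
def basic_cocomo_organic(kloc: float) -> tuple[float, float]:
    effort_pm = 2.4 * kloc ** 1.05        # person-months
    duration_m = 2.5 * effort_pm ** 0.38  # calendar months
    return effort_pm, duration_m

# A hypothetical 50 KLOC system:
effort, duration = basic_cocomo_organic(50)
print(f"~{effort:.0f} person-months over ~{duration:.0f} months")
```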

An estimate is the most accurate view of the likely effort needed to deliver a specified set of tasks in a defined period and to a set quality, given the quality of the information available at the time the estimate is created. With large projects, estimates should be refined over time.

So, why go with parametric models rather than expert estimates? The simple answer is that over time, parametric models have been proven to be much more accurate than expert estimates. Or, maybe I should say, they are much less inaccurate than expert estimates.

Expert estimates are done by highly skilled people. Studies show that they are as likely to be wrong as right – see our Trusted Advisor Report – though when experts create a model based on historical data, they can be better.

At EDS, we used to think of the estimating process as a mini-project in its own right, and I think that is still a valid proposition. You gather the data, you process it, and in doing so you identify the risks associated with the information you have been given. You have to surface and manage those risks, and your estimate must indicate where they lie. FPA allows you to size a requirement, but a poor requirement leads to an inaccurate size measure, so you must record the potential for growth as part of the sizing process and identify the risks to the project.
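In that spirit, here is a sketch of what an estimate-as-mini-project might produce: a size measure that carries its growth allowance and risk register with it, yielding a range rather than a single number. All names and figures are illustrative, not an EDS or DCG artefact.

```python
# An estimate that travels with its risks: size, an allowance for
# requirement growth, and the risks that justify that allowance.
# Every figure below is illustrative.
from dataclasses import dataclass, field

@dataclass
class Estimate:
    counted_fp: float        # FPA size of the requirement as written
    growth_factor: float     # e.g. 1.3 = allow 30% requirement growth
    fp_per_100h: float       # assumed delivery rate
    risks: list = field(default_factory=list)

    def effort_hours(self):
        """Return (low, high) effort in hours, reflecting growth risk."""
        low = self.counted_fp / self.fp_per_100h * 100
        return low, low * self.growth_factor

est = Estimate(counted_fp=1_200, growth_factor=1.3, fp_per_100h=5.0,
               risks=["reporting requirement lacks detail"])
print(est.effort_hours())   # a range, not a single 'accurate' number
```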

The worst case I ever saw was a company that had poor estimating skills and didn't recognise the risks associated with a wildly sketchy requirement. We were brought in as the programme was being canned. My team counted the original requirement at 7,000 FP, but we marked it as a red risk for lack of detail and potential for extreme growth. The final size was 30,000 FP and still growing, more than four times the counted size. They fixed the price on the original requirement, never revisited the estimate, and then wondered why it all went wrong.

Many projects use multiple methods for delivery, so when you're creating your estimate you have to simplify the model so that the variables are manageable. Try to take every variable into account and you run the risk that, by the time you deliver your estimate, the work will already have been done by someone else. In any case, if you have no size measure and no risk register associated with the measurement, you won't have a viable estimate.

An estimate is an approximation that enables you to manage your financial exposure, set a budget, construct a plan, and build a team to deliver it. If the risks are too great and unmanageable, don't even start the project. Iterate the process until you can deliver a meaningful estimate; that way, you are more likely to deliver the project successfully.

 

Alan Cameron
Managing Director, DCG-SMS


Beyond Benchmarks 101

I usually find that benchmarking is treated much too simplistically. Suppliers are often forced to sign up to black-and-white productivity targets by naïve, and sometimes overly aggressive, clients. A project is delivered, the productivity is measured in terms of cost or effort per unit of output, and then the fighting starts.

The thing is, life is complicated and single data points don’t tell a complete story.

Never Let the Facts Get in the Way of a Good Story

In today’s news-driven, overly simplistic world, single data points are taken to represent trends or, worse, are given without context. In late 2013, there was a spate of fatalities amongst cyclists in London, about seven in a month, when in 2012 there were 14 in the whole year. Cue sensation in the press and questions from politicians about the safety of cycling.

Cooler heads started to dissect the results and, sad though this spate was, the annual rate in 2013 was exactly the same as in 2012, while cycle journeys in London are increasing by more than 20 percent annually. Sure, the causes of these fatalities need to be analysed and lessons learned, but there is no need to panic.

In other words, hot spots do happen in random distributions, and hot spots don’t indicate long-term trends.
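A toy simulation makes the point. Scatter a year's 14 fatalities at random across 12 months, and the busiest month routinely sees three or four, several times the monthly average of about 1.2, by chance alone. The 14-per-year figure is from the story above; everything else is illustrative.

```python
# Toy Monte Carlo: how big a monthly 'hot spot' does pure chance
# produce when 14 events per year land at random across 12 months?
import random
from collections import Counter

TRIALS = 100_000
EVENTS_PER_YEAR = 14   # the 2012 London figure quoted above
MONTHS = 12

busiest = Counter()
for _ in range(TRIALS):
    year = Counter(random.randrange(MONTHS) for _ in range(EVENTS_PER_YEAR))
    busiest[max(year.values())] += 1

for k in sorted(busiest):
    print(f"busiest month sees {k} events in {busiest[k] / TRIALS:.1%} of years")
```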

I really like the approach taken by the BBC programme, “More or Less.” There, professional statisticians dissect news stories and put the results in proper context – do go and download the podcasts. It is so refreshing to listen to, and it is the approach I have had to fight for throughout my career in software processes.

Using the Facts to Shape Thinking

But, back to software, let me give you an example of intelligent use of data.

We have a client who is keen to reduce software production costs – which client isn’t, right? Some time ago, this client had asked us, “Where will the next big savings come from?” Our response was, “Collect the data and we can discuss it with you.”

Our preliminary work focused on assessing the accuracy of estimates and measuring the results against those estimates.

As one might expect, the experienced teams were pretty good at estimating, and when benchmarked, their balance between speed of delivery and cost was close to the industry average.

Then we started to assess the data. Speed of delivery was faster than optimal for the size of delivery, a good thing in some eyes, but the cost per function point was higher because the team had to be larger to create the software in the time.

Comparing the sizes of projects completed over a year against industry trends indicated that two options were open: reduce scope or increase duration. Reducing scope by 25 percent, from 200 FP to 150 FP, suggested that savings of up to 30 percent in cost per function point were available. Increasing the development part of the lifecycle by three weeks indicated savings of up to a startling 60 percent.
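Those figures come from the client's data set against industry benchmark curves, but a Putnam-style model shows why the schedule effect is so dramatic: Putnam's software equation implies that, at fixed size, effort varies roughly with the inverse fourth power of duration, so a little schedule relief cuts effort sharply. The 12- and 15-week figures below are illustrative, not the client's.

```python
# Putnam's software equation implies Effort ∝ Duration**-4 at fixed
# size, so stretching the schedule slightly cuts effort steeply.
# The week counts here are illustrative only.
def effort_ratio(weeks_before: float, weeks_after: float) -> float:
    """Relative effort after a schedule change, size held constant."""
    return (weeks_before / weeks_after) ** 4

saving = 1 - effort_ratio(12, 15)    # three extra weeks on a 12-week build
print(f"~{saving:.0%} less effort")  # ~59%, the same order as the 60% above
```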

The facts tell a good story and the reaction when we presented the results was truly wonderful to see. 

This is preliminary work, and life will get in the way, but it shows that intelligent use of benchmarks can be a positive experience for all concerned. Try it; it works.

 

Alan Cameron
Managing Director, DCG-SMS


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
