I've come across this story quite a few times here in the UK:

NHS Computer System

Summary: We're spunking £12 billion on some health software with barely anything working.

I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need.

But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. *

If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system.

So how on earth did this project bloat to £12 billion without producing much useful software?

As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Or was it a failure to get the software designers to understand the medical business?

What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?


*This bit seemed to get a lot of attention. What I mean is I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not accounting for what I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget hundreds of thousands of times larger than mine could provide this?


"But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale."

It probably went over budget because the original project manager had this exact same attitude.


I've worked for the UK Government before on a number of projects. One of the biggest issues that I encountered across all of them was requirement ballooning. Your assessment of what is needed sounds fair, but put that through some committees, add things like eGIF compliance, and what starts off as a relatively simple proposition ends up massively complex.

The high costs come about through scope creep, cost of external suppliers and the expectation that launch day will see everything launched.

If I had one piece of advice for UK government projects, it'd be to concentrate on phased delivery and accept that not everything is going to be available on day one. Not only would this allow costs to be controlled more effectively, it also reduces the amount that you have to test, and allows suppliers to be changed between phases if they are not producing the goods.


Yes, I have been on a similar project - in the UK, too.


  1. Requirements are not frozen - they keep growing all the time, until you have subsets of requirements for Phases 1/2/3, all of which are inter-linked and none of which is deliverable.

  2. Basic SDLC issue of people working in Waterfall mode, so nothing begins until every single requirement, no matter how small, is finalized - which, per #1, never happens.

  3. Endless deliberations - all parties are happy to talk more than actually doing something. Of course they do love to talk Agile as well, but that's only for talking.

  4. IT Architects. Not ready to prototype or build anything since they're worried about wasting time building something which might be thrown away. Rather just keep defining perfect architectures on blackboards/documents without an ounce of practical thought.

  5. Somehow everyone involved loves to travel all over the country for face-to-face meetings (raising costs) instead of anything virtual, quicker & cheaper. Usually meetings run over 3 days, and thus a whole round of jollies at public expense - hotels/booze etc.

  6. A tendency for PMs not to control costs but always to justify more expenses and put a bleak outlook on things. Risks get exaggerated unnecessarily, so the focus is more on "why not" than on "how to make this work".


Well, I usually see two categories:

  • Overarchitecture: some architects tend to create a far too complicated architecture just because the UML drawing looks cool. It should never be more complicated than needed, and each component should have a reason for being there.

  • Politics. A lot of this money is spent on conflicts of interest, managers, management meetings, escalations, feature discussions between people with different interests... you get the idea. This is usually a huge factor for a failed project.

It usually isn't technology, or developers that spend 12B ;).


It may well be that this project is over any reasonable budget. But I always break out in hives when someone says, "Hey, all this needs is a database with 3 tables and a few input screens. I could throw that together in a week."

If this was a school project or a demo, sure, maybe you could throw together a database with patient name, address, a big text box for "diagnosis", and a few other fields and get the whole thing up and running in a week or two. But the difference between school projects and real-world projects is that in school projects, we can make all sorts of simplifying assumptions. In real-world projects, we have to deal with the data as it actually is.

For example, not long ago I worked on a project to reconcile physical inventory against recorded inventory. We wanted to have the warehouse folks walk through the warehouse with handheld scanners, scan a barcode on each item, and then match this against the database. Simple problem, you could code it in a week, right?

Except ... For starters, what's the barcode on the box? In a school problem, there would no doubt be a "stock id" of some sort, in a consistent format. But in the real world, we had to deal with some boxes having a UPC code, some a purchase order number, some a shipping acknowledgement number, and others our internal stock number. We had to scan the barcode and then study the format of the identifier to figure out which it was - counting how many hyphens, checking whether the remaining characters are all digits or a mix of alpha and digits, how many characters between hyphens, etc. Oh, and purchase order numbers are not necessarily unique within the system, so we had to make some rules for picking which of the duplicates was probably the right one, etc.
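That identifier-sniffing can be sketched in a few lines. The four formats below are invented for illustration (the real ones had to be reverse-engineered from actual labels), but the shape of the logic - classify each scanned code purely by its pattern - is the same:

```python
import re

def classify_identifier(code):
    """Guess which kind of identifier a scanned barcode holds, from
    its shape alone. All four formats here are hypothetical, standing
    in for the UPC / PO / shipping / stock-number mix described above."""
    if re.fullmatch(r"\d{12}", code):
        return "UPC"                       # 12 digits, no hyphens
    if re.fullmatch(r"PO-\d{6}", code):
        return "purchase order"            # one hyphen, prefix + digits
    if re.fullmatch(r"[A-Z]{2}-\d{4}-\d{4}", code):
        return "shipping acknowledgement"  # two hyphens, alpha prefix
    if re.fullmatch(r"\d{3}-[A-Z0-9]{5}", code):
        return "internal stock number"     # one hyphen, mixed alphanumerics
    return "unknown"
```

Note that the harder part of the anecdote - breaking ties between duplicate purchase order numbers - can't be done by any regex; that took business rules on top.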

I'm sure a nationwide medical system has to interface with many other existing systems - systems at hospitals, pharmacies, doctors' offices, etc. Each likely has its own codes and formats. Simple things like "when we get the procedure code from system A it is ten digits right-justified padded with spaces; when we get it from system B it is padded with zeros; when we get it from system C it's variable length." And system X uses the 1993 version of the codes, but these were updated in 2004, and many of the old codes look like the new codes but mean completely different things. Etc.
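As a toy sketch of just the padding part (the system names and the ten-digit canonical form are my invention, not anything from the NHS project), a normaliser that accepts all three shapes and emits one canonical form:

```python
def normalise_procedure_code(raw):
    """Canonicalise a procedure code to ten digits, left-padded with
    zeros. Accepts the space-padded (system A), zero-padded (system B)
    and variable-length (system C) shapes described above - all
    hypothetical systems used for illustration."""
    digits = raw.strip().lstrip("0")   # drop padding spaces and zeros
    return (digits or "0").zfill(10)   # re-pad to one canonical width
```

This only fixes padding, of course; the nastier problem in the text - system X using 1993-era codes whose meanings changed in 2004 - needs a mapping table, not string munging.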

£12 billion sounds like an awful lot for any system, so if you want to argue that there must be huge amounts of waste here, I'd think you're likely correct. But don't go to the other extreme and say the whole system could have been built by one person over a weekend. Years ago I worked on a system for doctors' offices, which is surely a tiny subset of what a system like this needs to do, and I can tell you that if I had to rewrite that system today, even using the most modern tools, it would take several man-years of effort.


One of the reasons might be scope creep.

Scope creep (also called focus creep, requirement creep, feature creep, function creep) in project management refers to uncontrolled changes in a project's scope. This phenomenon can occur when the scope of a project is not properly defined, documented, or controlled. It is generally considered a negative occurrence that is to be avoided. Typically, the scope increase consists of either new products or new features of already approved product designs, without corresponding increases in resources, schedule, or budget. As a result, the project team risks drifting away from its original purpose and scope into unplanned additions. As the scope of a project grows, more tasks must be completed within the budget and schedule originally designed for a smaller set of tasks. Thus, scope creep can result in a project team overrunning its original budget and schedule.



In the case of NHS IT, I seem to recall an interview with someone involved in it complaining that there was no actual asking of, or working with, any end users to understand what they want/need. So failure to capture the correct requirements seems like a good bet.


In the case of the NHS project: unlimited resources, completely overpaid consultants (from Accenture) -

According to the Daily Telegraph, the head of NPfIT, Richard Granger, 'shifted a vast amount of the risk associated with the project to service providers, which have to demonstrate that their systems work before being paid.' The contracts meant that withdrawing from the project would leave the providers liable for 50% of the value of the contract; however, as previously mentioned, when Accenture withdrew in September 2006, Granger chose not to use these clauses, saving Accenture more than £930m.

(Granger was also working for Accenture)

Everyone involved in the project was on a gravy train without accountability. When a project has commercial pressure, people are far more accountable. In the case of the NHS there was no ROI or outgoings to claw back, and the system was "ready when it's ready".

Most of the delays with the NHS system were huge technical blunders and design flaws from my understanding, and as mentioned lots and lots of feature creep.

It's very common (but probably getting less so now) with public service IT projects in the UK. Another example of a huge hole that taxpayer money has been thrown into is Becta, which is now being axed by the ToryDems.


To answer the actual question of "What are best practices for large projects? Have you ever worked on such a project?"

Yes, I've worked on several large projects. One that comes to mind was modifying an existing client front end to an accounting system to support payments by credit card. Upon review (at the end), it should have taken 2 (maybe 3) people about a week to knock out. The project actually had 50 full-time people who worked on it for 6 months. At the end it was called a success because it was delivered on time and under budget. God knows who set the budget on that one, as the money to cover the pizza party afterwards should have been more than enough to cover real development costs.

Why did it take so long? My first week on the project, ten others and I were pulled into an "emergency" requirements meeting. The Dev Lead said that one of the business people decided we (gasp) had to collect CVV2 codes as well. He turned to the database guy and asked how long it would take to add support for that. He said "5 minutes. It is one additional field and a change to two stored procedures".

The Dev Lead then spent literally 15 minutes telling him how all of the projects they have delivered in the past were late. The dba said, "okay, how about 30 minutes?" Dev Lead said, "I don't think you quite understand how this works" and promptly wrote 3 days of time on the board for data analysis and a week for implementation.

The dev lead then turned to the programmer responsible for that area of the UI and asked him how long it would take. The programmer said, "I haven't put that screen together yet. It's just an additional field, so really, it won't take any longer." The Dev Lead shook his head and said, "Let's add 1 week for screen analysis and 1 week for implementation."

I'll skip the rest of the meeting except to say at the end, two man months were added to the project time line to add a field which, quite frankly, should have been included to begin with.

This company had approximately 2,000 developers on staff.

Getting back to the question, "what are best practices for large projects?" The answer is to simply not have so much money available. The more you have, the more that's going to be wasted.


Some projects go over budget because they think that performance is something you fix by tweaking a few knobs at the end of the build cycle.

One project I was peripherally associated with developed using a "methodology" that heavily emphasised clean architecture via strong separation of layers. A database layer would hold millions of account records; a data access layer would provide clean, strongly-typed access to the database layer. An ORM layer defined an object model that cleanly expressed the structure of an account in a highly flexible, dynamically definable way. A business layer defined all kinds of rules and transformation components across the object model, and a coordination layer defined how these rules were brought together to implement specific logic.

This was all built with a highly flexible framework, using XML as the glue to bind it all together. The system was intended to filter the several million accounts down to the roughly one million accounts that were of interest to the downstream processing engine, which was written in similar fashion.

After a massive coding frenzy, they fired up the system to see how it would perform. After timing a few processing loops, the projected estimate for completion was on the order of several hundred years.

They moved all the nice clean business logic into stored procs, but retained the per-account looping approach. This brought massive time reductions, with the new estimate being in the order of hundreds of days.

Finally, they wrote a bunch of UPDATE, INSERT, CREATE TEMP TABLE, DELETE, etc. code that took a few hours to complete.
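That progression - per-account looping through layers of objects versus one set-based statement - can be shown with an in-memory SQLite table (the schema and threshold are invented for this sketch; the real system filtered millions of accounts, not ten thousand):

```python
import sqlite3

# Hypothetical schema standing in for the account-filtering job above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
             " balance REAL, flagged INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO accounts (id, balance) VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(1, 10001)])

# Per-account looping (the 'hundreds of years' shape): pull every row
# into the application, then issue one UPDATE round-trip per match.
rows = conn.execute("SELECT id, balance FROM accounts").fetchall()
for acct_id, balance in rows:
    if balance > 50000:
        conn.execute("UPDATE accounts SET flagged = 1 WHERE id = ?",
                     (acct_id,))

# Set-based (the 'few hours' shape): the same work in one statement,
# leaving the looping to the database engine.
conn.execute("UPDATE accounts SET flagged = 1 WHERE balance > 50000")
```

Both produce identical results; the difference is thousands of round-trips versus one. At millions of rows, that difference is the gap between "hundreds of days" and "a few hours".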


I'd highly recommend you read Fred Brooks' 'No Silver Bullet'.

This is a classical case study in software engineering. There is a long answer, I'll give you the short one --

'Any idiot can make something complicated; it takes a genius to make the complicated stuff simple'

so goes the saying.


"If you're not a part of the solution, there's good money to be made in prolonging the problem."



As the popular folk tale goes: If you put a frog into boiling water, it'll jump out. If you put a frog in a pot of cold water and slowly up the temperature, it will boil to death blissfully unaware of what's happening.

In projects of this size, there is often no final accountability or responsibility. Shared responsibility is no responsibility, so costs just keep ballooning until it's ridiculous enough to catch the attention of someone who cares and can make a difference.

Another reason that I've seen happening a lot is the "we're almost there, just a few more weeks to finish it" mentality. You can be "almost there" for years if no-one mans up and pulls the plug.


Just my opinion, but I have found that, for the most part, most programmers just aren't that sharp. Maybe I should rephrase that as most people are not that sharp. Companies are FULL of average to below average programmers (and employees). And some people are downright stupid, and they're making the decisions.

How many times have you been the only guy in the room who could see clearly what the problem AND the resolution was, and EVERYONE else had absolutely no clue?

That's how projects bloat that big and cost that much.


Vendors have earned £12,000,000,000, so these projects should be treated as very successful projects for our industry.


The main problem that I come across is poor planning and forced deadlines that are unrealistic to begin with.

I would like to suggest a couple of books that I have read about managing software projects that I think really helped me better understand this exact question.

The Mythical Man-Month by Frederick P. Brooks


Waltzing With Bears: Managing Risk on Software Projects by Tom DeMarco and Timothy Lister


The skill required to design a system like this is too much for 99.9% of all software designers to handle. To have a chance at a successful project you would really need the top 0.1% of software designers, supported by a team of industry experts with years of experience using various previous systems.


Many of these massive enterprise and government projects employ a waterfall methodology which is a legacy of the mainframe world. They try to gather all the requirements, design the entire application, and leave only one iteration of coding (finally, after the project has burned through 75% of budget and time) before the app gets into testing. I've seen this approach rampant in the medical vertical.

Until more companies adopt agile methodologies, be they spiral, SCRUM, XP, or whatever, we are doomed to maintain the puny 25% success rate that I.T. projects have.


By underestimating cost and overestimating functionality.

Both of which are often a prerequisite to getting a project in the first place, because an accurately estimated cost and an accurately described functionality is often NOT what the customer wants to hear.

Plebs vult decipi. ("The people want to be deceived." :-) )


How does a project get to be a year late? One day at a time.
- Fred Brooks, The Mythical Man-Month

Since everyone else is referring to that classic, I suggest you go read it yourself. It really will help understand some of the why...


Having a few impossible requirements does help a lot. Wikipedia starts with: "single, centrally-mandated electronic care record for patients... providing secure and audited access to these records by authorised health professionals". We don't know how to do this. There are no known systems with this property. The current reality in hospitals is shared passwords and unsecured backup tapes.