A colleague of mine today committed a class called ThreadLocalFormat, which basically moved instances of Java Format classes into a thread local, since they are not thread safe and "relatively expensive" to create. I wrote a quick test and calculated that I could create 200,000 instances a second, then asked him whether he was creating that many, to which he answered "nowhere near that many". He's a great programmer, and everyone on the team is highly skilled, so we had no problem understanding the resulting code, but it was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a case of "premature optimization", and how bad is it really?
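For context, here is a minimal sketch of what a class like the one described probably looked like (the name `ThreadLocalFormat` is from the question; the field, pattern, and method names are my assumptions). The pattern gives each thread its own `SimpleDateFormat`, since that class is documented as not thread safe:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical reconstruction of the committed class: one format instance
// per thread, so concurrent threads never share a SimpleDateFormat.
public class ThreadLocalFormat {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    // Each thread sees its own cached instance on every call.
    public static SimpleDateFormat get() {
        return FORMAT.get();
    }

    public static void main(String[] args) {
        System.out.println(ThreadLocalFormat.get().format(new Date()));
    }
}
```

The point of the question is not that this pattern is wrong (it is a standard idiom), but that caching was added without first measuring whether creation cost mattered.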


It's important to keep in mind the full quote:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

What this means is that, in the absence of measured performance issues, you shouldn't optimize just because you think you will get a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop), but anything that isn't a trivially clear optimization should be avoided until it can be measured.
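The string-concatenation case mentioned above is the classic "trivially clear" optimization. A sketch (my own illustration, not from the answer): concatenating with `+` in a loop copies the whole string on every iteration, roughly O(n²) work, while `StringBuilder` appends in amortized O(n):

```java
// Both methods build "012...n-1"; only the cost differs.
public class ConcatDemo {
    static String slow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i;          // allocates and copies a new String each pass
        }
        return s;
    }

    static String fast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);       // amortized constant-time append
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Same result either way; the difference only matters at scale.
        System.out.println(slow(5).equals(fast(5)));
    }
}
```

Using `StringBuilder` here needs no measurement to justify; that is what makes it different from the speculative kind of optimization the answer warns about.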

The biggest problems with "premature optimization" are that it can introduce unexpected bugs and can be a huge time waster.


Optimization is "evil" if it causes:

  • less clear code
  • significantly more code
  • less secure code
  • wasted programmer time

In your case, it seems like a little programmer time was already spent, the code was not too complex (a guess from your comment that everyone on the team would be able to understand), and the code is a bit more future proof (being thread safe now, if I understood your description). Sounds like only a little evil. :)


I'd say premature micro-optimizations are the root of all evil, because micro-optimizations done out of context almost never behave the way they are expected to.

Some good early optimizations, in order of importance:

  • Architectural optimizations (application structure, the way it's componentized and layered)
  • Data flow optimizations (inside and outside of application)

Some mid development cycle optimizations:

  • Data structures, introduce new data structures that have better performance or lower overhead if necessary
  • Algorithms (now it's a good time to start deciding between quicksort3 and heapsort ;-) )
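A small sketch of the data-structure kind of mid-cycle optimization listed above (my own example, not from the answer): swapping a `List` for a `HashSet` turns O(n) membership scans into expected O(1) lookups, without changing what the surrounding code observes:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupDemo {
    // Build a list of the integers 0..n-1.
    static List<Integer> buildList(int n) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        return list;
    }

    public static void main(String[] args) {
        List<Integer> list = buildList(100_000);
        Set<Integer> set = new HashSet<>(list); // same contents, hashed buckets

        // Both answer the same membership question; only the cost differs.
        System.out.println(list.contains(99_999)); // O(n) linear scan
        System.out.println(set.contains(99_999));  // expected O(1) hash lookup
    }
}
```

This is the kind of change that is cheap to make mid-cycle, once real data volumes are known, and hard to retrofit if the wrong structure has leaked into many interfaces.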

Some end development cycle optimizations:

  • Finding code hotspots (tight loops that should be optimized)
  • Profiling-based optimizations of the computational parts of the code
  • Micro-optimizations can be done now, as they are done in the context of the application and their impact can be measured correctly.
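The "measure first" step above can be as simple as timing a candidate hotspot before touching it. A minimal sketch (my own harness; for serious work you would use a real benchmarking tool such as JMH, since naive timings are distorted by JIT warm-up):

```java
// Minimal measure-before-optimizing harness: time a candidate hotspot
// so the decision to optimize is based on a number, not a hunch.
public class MicroTimer {
    static long timeNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long t = timeNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i; // stand-in "hotspot"
        });
        System.out.println("took " + t + " ns");
    }
}
```

Even a crude number like this is enough to answer the question from the original post: is the code actually being hit often enough for the optimization to matter?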

So to answer your question :-) :

Not all early optimizations are evil. Micro-optimizations are evil if done at the wrong time in the development life cycle: they can negatively affect the architecture, can negatively affect initial productivity, can be irrelevant performance-wise, or can even have a detrimental effect at the end of development due to different environment conditions.

If performance is a concern (and it always should be), think big. Performance is about the bigger picture, not about questions like "should I use int or long?". Go top-down when working on performance, not bottom-up.


Optimization without first measuring is almost always premature.

I believe that's true in this case, and true in the general case as well.


There are two problems with PO: first, development time is spent on non-essential work that could be used to write more features or fix more bugs; and second, it gives a false sense of security that the code is running efficiently. PO often involves optimising code that isn't going to be the bottleneck, while not noticing the code that will be. The "premature" bit means that the optimisation is done before a problem is identified with proper measurements.

So basically, yes, this sounds like premature optimisation, but I wouldn't necessarily back it out unless it introduces bugs - after all, it's been optimised now(!)


Personally, as covered in a previous thread, I don't believe early optimization is bad in situations where you know you will hit performance issues. For example, I write surface modelling and analysis software, where I regularly deal with tens of millions of entities. Planning for optimal performance at the design stage is far superior to late optimization of a weak design.

Another thing to consider is how your application will scale in the future. If you consider that your code will have a long life, optimizing performance at design stage is also a good idea.

In my experience, late optimization provides meagre rewards at a high price. Optimizing at the design stage, through algorithm selection and tweaking, is far better. Depending on a profiler to understand how your code works is not a great way to get high-performance code; you should know this beforehand.


Premature optimization is not the root of ALL evil, that's for sure. There are however drawbacks to it:

  • you invest more time during development
  • you invest more time testing it
  • you invest more time fixing bugs that otherwise wouldn't be there

Instead of premature optimization, one could do early visibility tests, to see if there's an actual need for better optimization.


Since there is no problem understanding the code, this case could be considered an exception.

But in general, optimization leads to less readable and less understandable code and should be applied only when necessary. A simple example: if you know that you have to sort only a couple of elements, then use BubbleSort. But if you suspect that the number of elements could grow and you don't know by how much, then optimizing with QuickSort (for example) is not evil, but a must. And this should be considered during the design of the program.
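The trade-off above can be sketched concretely (my own illustration; `Arrays.sort` stands in for the answer's QuickSort, since the JDK's primitive sort is a tuned quicksort variant): bubble sort is a few obvious lines and fine for a handful of elements, while the library sort scales to inputs of unknown size.

```java
import java.util.Arrays;

public class SortChoice {
    // O(n^2) bubble sort: simple and perfectly adequate for tiny inputs.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                }
    }

    public static void main(String[] args) {
        int[] small = {3, 1, 2};
        bubbleSort(small);          // fine for a couple of elements

        int[] large = {5, 4, 3, 2, 1};
        Arrays.sort(large);         // O(n log n), scales if the input grows

        System.out.println(Arrays.toString(small) + " " + Arrays.toString(large));
    }
}
```

Choosing between the two is exactly the design-stage decision the answer describes: it depends on what you know about the input size, not on profiler output.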


I believe it's what Mike Cohn calls 'gold-plating' the code - i.e. spending time on things which could be nice but are not necessary.

He advised against it.

P.S. 'Gold-plating' could be bells-and-whistles kind of functionality, spec-wise. When you look at the code, it takes the form of unnecessary optimisation, 'future-proofed' classes, etc.


I suppose it depends on how you define "premature". Making low-level functionality quick when you're writing it is not inherently evil. I think that's a misunderstanding of the quote. Sometimes I think that quote could do with some more qualification. I'd echo m_pGladiator's comments about readability though.


The answer is: it depends. I'll argue that efficiency is a big deal for certain types of work, such as complex database queries. In many other cases the computer is spending most of its time waiting for user input so optimising most code is at best a waste of effort and at worst counterproductive.

In some cases you can design for efficiency or performance (perceived or real) - selecting an appropriate algorithm or designing a user interface so certain expensive operations happen in the background for example. In many cases, profiling or other operations to determine hotspots will get you a 10/90 benefit.

One example of this I can describe is the data model I once did for a court case management system, which had about 560 tables in it. It started out normalised ('beautifully normalised', as the consultant from a certain big-5 firm put it) and we only had to put four items of denormalised data in it:

  • One materialised view to support a search screen

  • One trigger-maintained table to support another search screen that could not be done with a materialised view.

  • One denormalised reporting table (this only existed because we had to take on some throughput reports when a data warehouse project got canned)

  • One trigger-maintained table for an interface that had to search for the most recent of quite a large number of disparate events within the system.

This was (at the time) the largest J2EE project in Australasia - well over 100 years of developer time - and it had 4 denormalised items in the database schema, one of which didn't really belong there at all.