It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog. There is a whole blog devoted to the idea of the "10x productive programmer".
In the years since the original study, the general finding that "there are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000).
Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article,
The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude.
The study that Brooks cites is:
H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11.
The way employers pay programmers these days makes it almost impossible for a great programmer to earn a large multiple of the entry-level salary. Suppose a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), starts at $40K. Even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus produces far more than 3 times what Asok does, so why wouldn't we expect him to be paid proportionally more?
Here is a quote from Stroustrup:
"The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better."
Paul Graham has an essay on wealth that really covers the same ground. It comes down to measurement and leverage. Even though it is hard to measure, it still exists. Paul even advocates a higher multiple than 10.
Graham writes: "Like all back-of-the-envelope calculations, this one has a lot of wiggle room. I wouldn't try to defend the actual numbers. But I stand by the structure of the calculation. I'm not claiming the multiplier is precisely 36, but it is certainly more than 10, and probably rarely as high as 100."
Here are some other areas in which talent/ability affects pay.
- Financial traders (commodities, stock, derivatives, etc.)
- Designers (fashion, interior decorators, architects, etc.)
- Professionals (doctors, lawyers, accountants, etc.)
Factors that restrict programmers from becoming 'rock stars':
- Rate of technology change (new languages/platforms)
- Lack of domain experience (embedded/real-time/highly-regulated)
- Corporate technology landscape is always customized (lack of broad standards)
This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree, that's fine, but please include your rationale. It may well be that the self-employed and contract programmers are where the top earners are found, but please provide an explanation/story/rationale along with any anecdotes.
Why aren't the top 1% of programmers paid like A-list movie stars?
What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes?
[Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post 'Of Rockstars and Bricklayers']
It's unfair to exclude contractors and the self-employed. One trait of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power, which means they cannot be interchangeable or otherwise treated as a plug-and-play resource. I liked the example in one answer of a major league baseball team trying to field two first basemen. That example also highlights the need for a standard playing field. When a baseball player is traded between teams, he doesn't need to learn a new set of rules (the DH rules don't count as a whole new set). Imagine if each team had its own set of rules that were used for its home games.
Joel also mentioned something on the Stackoverflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay gap between the highs and the lows. One is the peer pressure on organizations to pay within a given range; another is the likelihood that the high performer will realize he is undercompensated and seek greener pastures.
"Companies are not set up to reward people who want to do this. You can't go to your boss and say, I'd like to start working ten times as hard, so will you please pay me ten times as much? For one thing, the official fiction is that you are already working as hard as you can. But a more serious problem is that the company has no way of measuring the value of your work."
As an aside, this might help explain why companies are so willing to pay outside consultants exorbitant rates. Working as a contractor is evidence, though not quite proof, that a person produces more than an employee, by being more talented, more driven, and so on.
It might be time to start moving some of this material into an answer. Paul's essay really crystallized the issues for me.