When you were starting to program, what was the hardest concept for you to grasp? Was it recursion, pointers, linked lists, assignments, memory management?

I was wondering what gave you headaches and how you overcame this issue and learned to love the bomb, I mean understand it.

EDIT: As a followup, what helped you grok your hard-to-grasp concept?


The compiler works fine; it's the code that's wrong.


I guess that for a C programmer, the first hard concept would be pointers. Especially the address-of operator (&) and function pointers. These require some inner understanding of the computer, which many beginner programmers don't have. Also, pointer arithmetic isn't always simple. For other languages, this could be anything from variables to OOP. It really depends. From what I've seen, I'd guess procedural programming, because it requires a change in the way of thinking, and might even require the new programmer to design (!) his/her code.
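
To make those sticking points concrete, here's a minimal C++ sketch of the address-of operator, pointer arithmetic, and a function pointer. The names (`apply`, `address_demo`, `third_element`) are made up for illustration:

```cpp
#include <cassert>

// A plain function we can take the address of.
int add_one(int x) { return x + 1; }

// Takes a function pointer: the hard part is mostly the
// declaration syntax, int (*f)(int).
int apply(int (*f)(int), int v) { return f(v); }

// Address-of and dereference: p holds a's address, *p reads/writes a.
int address_demo() {
    int a = 10;
    int* p = &a;   // & takes a's address
    *p = 11;       // writing through p writes a
    return a;      // 11
}

// Pointer arithmetic: *(arr + 2) is the same as arr[2].
int third_element(const int* arr) { return *(arr + 2); }
```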


Regular Expressions! I still need a reference when I use them.
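
A small example of the sort of thing the reference keeps getting consulted for - a toy pattern (not a production-grade validator) using C++'s `std::regex`:

```cpp
#include <regex>
#include <string>

// Toy pattern: an optional sign followed by one or more digits.
// Illustrative only, not a general-purpose number validator.
bool looks_like_int(const std::string& s) {
    static const std::regex pattern(R"([+-]?\d+)");
    return std::regex_match(s, pattern);
}
```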


That someone else would someday be fixing my code.

Hard to grasp, but also the thing that had the most influence on making me a better programmer.



That's gotta be lambda calculus.


How best to divide up a program into modules/classes.


Lots of people cite OOP but basic OOP really isn't that hard to understand because you can give fairly visible real-life examples of how objects work.

I found the grittier sub-topics of OOP harder to understand. I'm talking inheritance and polymorphism. I read a lot of definitions of both at university and I understood what they were saying, but I didn't understand why I'd want to use either until after I'd done a couple of large coursework projects.

Some patterns made me wonder "why?" too. If you're trying to learn, you really need a full example to see where you'd want to implement them because one-line definitions don't cut it.

Thankfully pointers made sense to me when I learned C. They're fairly logical and it was only the syntax that caused the initial problem.

MVC (in web dev) was another "why?" topic for me. I was already used to separating my data logic from my display logic and display code, so MVC seemed like a formalised version of what I was already doing, which probably exacerbated my problems in getting used to a fixed way of doing it.

Version control is a very important topic that lots of people put off learning until they're forced to at gunpoint.

Functional programming is something I'm still putting off learning. Again, because I can't see the point/benefit.


The beauty of simplicity.

In my early years I always preferred a solution that was harder to grasp because it seemed "geekier".




Variables. Or more specifically, the fact that a variable is not the same as the value that it represents.

It actually took me a while before fully realising this, but it made a lot of things much clearer. Now, I often recognise the same fallacy in less experienced programmers.

There are a lot of things that are technically much more complicated, but these fundamental leaps of abstraction are usually the hardest to make.


I can't say I came across these as a beginner, but:

  • Continuations aren't immediately obvious, and I wouldn't want to be a compiler writer with the job of implementing them (which is probably why so few languages support call/cc)

  • I still don't grok how monads give rise to purely functional I/O in Haskell; mind you, I haven't used Haskell since a semester class at University years ago, and have never done any I/O in it.


References in C++. It took a while for me to accept the fact that

int &x = a;

means that x becomes an alias for a. Not a copy of a, not a weird pointer to a: x is a.
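
A tiny sketch of that alias behaviour - writing through the reference changes the original variable (the function name is made up for illustration):

```cpp
// A C++ reference is an alias, not a copy: assigning to x
// assigns to a itself.
int alias_demo() {
    int a = 1;
    int& x = a;   // x is another name for a
    x = 5;        // this writes to a
    return a;     // 5, not 1
}
```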


How to avoid duplication.


Performance != Optimisation

rephrased "Performance != The Highest Objective" -BCS

Performant code is fast.
Optimised code is elegant and easily extensible.


For me the hardest concept was generalised recursion. Not the divide-and-conquer style as in quicksort, but the Lisp-style loop via recursion. Mostly I took forever to get around to it. I saw it now and again but never really tried to figure it out. Once I actually worked with it (in a CS languages class) it became really clear and VERY handy (I do a fair share of template metaprogramming).
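
A rough sketch of that Lisp-style "loop via recursion" in C++ terms: process the head, recurse on the rest, with the empty list as the base case (the helper names are just for illustration):

```cpp
#include <iterator>
#include <list>

// Lisp-style recursion over a list: head + recurse on the tail,
// with the empty list as the base case.
int sum_from(std::list<int>::const_iterator it,
             std::list<int>::const_iterator end) {
    if (it == end) return 0;                   // base case: empty list
    return *it + sum_from(std::next(it), end); // head + sum of the rest
}

int sum(const std::list<int>& xs) {
    return sum_from(xs.begin(), xs.end());
}
```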


I didn't truly get OOP until about 12 or 14 months ago. It took exposure to Smalltalk's paradigm of messages as the primary language construct to shake me up.


Everything other than writing the code, without a doubt. I borrowed or bought many C books in my early days, but suffered for years trying to understand how to really build software. None of those texts talked about anything more than writing a single small, completely self-contained program. I wasn't exposed to any detailed understanding of compilers, modules, linkers, source control, and all the other not-writing-source-code activities that often make up the bulk of development work.


Designing loosely coupled, maintainable, extensible, reusable objects (interfaces) in OOP.


I think that there are several skills that a good programmer needs: the ability to abstract, the ability to think recursively, and the ability to imagine complex networks.

Since beginners have different aptitudes in each, their problems correspond: bad design/modularization/functional decomposition, recursive algorithms and structures, pointers.

It's also interesting that a lot of people (more math oriented) are good with pointers and algorithms but horrible in abstractions and decomposition. The converse is also true. I consider this to be the gap between good classic CS folks and good engineers. Very few people can fit in both categories, unfortunately.


The thing I found hardest to grasp when I started to program was not a programming technique; it was the weird and wonderful world of impedance mismatch. I would sometimes work for days on a feature that no one really wanted, because I listened to my boss or to a marketing person and simply did what they told me.

The lesson learned was that I should always try to "get inside the customer's head" and really grasp what it is that they want. When you start out programming you are constantly presented with solutions; the key is to learn how to break down these solutions and turn them into real business problems, before you spend way too much time on the presented solution.



When I started, OO was this weird, out-there thing that only awesome people must be using. Then one day, I took the time to sit down and force myself to understand OO. I don't know why I waited that long; it makes a lot of sense and clicked pretty quickly.


Not sure, but function pointers were a bit strange to me in the very beginning.


Polymorphism was one of the weirdest concepts for me to wrap my head around. Not because it was complex, but because I almost immediately understood it, yet not how to use it. I was trying to make functions that worked with a specific class and pass them subclass instances where parent class instances were expected; I was casting where I shouldn't have been, and I expected everything to work out fine. Later I learned how to structure the problem to fit the tools I had.

Learning the reasoning behind the tools is far more important to me than simply learning how to use the tools.

The exact same thing happened to me with pointers. I immediately understood the concept, but I had no idea why such a convoluted tool existed. Then I made my first linked list. Wow, what an epiphany. Not only was there a way to use this, but it did something I had been so oblivious to that I had to change the way I looked at coding. These were two of my major windfalls when it comes to coding, and I am sure I will have more; I just need to keep trying to understand as much as possible.
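
That linked-list epiphany fits in a few lines of C++ - each node holds a value and a pointer to the next node (a minimal sketch; deallocation is ignored for brevity):

```cpp
// A minimal singly linked list: the classic pointer epiphany.
struct Node {
    int value;
    Node* next;
};

// Prepend a value; the new node points at the old head.
Node* push_front(Node* head, int value) {
    return new Node{value, head};
}

// Walk the next pointers until nullptr, counting nodes.
int length(const Node* head) {
    int n = 0;
    for (const Node* p = head; p != nullptr; p = p->next) ++n;
    return n;
}
```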

When you learn something new, make sure that shortly after you understand the syntax, that you understand what problems that tool was intended to solve and can solve. Learning what problems it should solve can help you prevent from deploying them incorrectly, and eventually let you deploy them in creative and novel ways that still make sense.


For web development, it seems to be the difference between client-side code (JavaScript) and server-side code (PHP, ASP.NET, Java). I don't understand why; I've never had problems with it myself, but it seems to be a recurring problem among many developers posting on forums. People continually post questions about how to use C# to run some code after the page has finished loading, or how to use JavaScript to store form information in a database.


For me, this was definitely Domain-Driven Design.

I found that most of the concepts of OOP were fairly simple to get. Polymorphism, Inheritance, Encapsulation (in theory at least), etc. are all simple concepts up front, but actually being able to look at a problem domain and understand how to use those tools to effectively design your system so that it is extensible and maintainable is literally something that I'm still working on (and I'm 4 years into this).

However, making the conceptual leap from just using those ideas in my code whenever I felt like it made some sort of weird sense, to actually asking "how does my domain require me to use OO principles in order to make this code as maintainable and clear as possible?", was huge and very difficult for me to wrap my head around.


Research shows that there are three problems that most new programmers/students have:

1) Understanding assignments.

a = 2;
b = a;

-> What are the values of a and b now? Lots of people don't even pass this step.

2) Recursion

3) Locking / Multithreaded programming.

The last one was the hardest for me to get.
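
The assignment example in 1) can be made concrete: assignment copies the value at that moment, so changing a afterwards does not change b.

```cpp
// Assignment copies the current value; b does not stay
// linked to a afterwards.
int assignment_demo() {
    int a = 2;
    int b = a;   // b is now 2 (a copy), not "whatever a is"
    a = 7;       // b is unaffected
    return b;    // still 2
}
```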


The "a-ha" moment of functional programming.

I'd seen many people say that learning Lisp or Haskell would make you a better programmer, and that there was a brilliant moment where everything suddenly clicks.

At first I thought to myself "Bah, it's just rewriting loops as recursion. These people are probably just excited about finally understanding recursion."

But after a while I decided that I wanted to be sure. So I wrote a fractal program in Scheme. I thought to myself, "Well, that was interesting. But mainly it was rewriting loops as recursion."

I thought that was the "a-ha" moment. Clearly, I didn't get it yet.

This year, I went to a talk by Conrad Parker, who spent some of his talk on Haskell, and encouraged everyone to learn it. "Yeah," I thought, "OK." And I put some real effort into learning Haskell properly.

I think I had the real "a-ha" moment already, though maybe there's still a bigger "a-ha" moment on its way. Certainly I love Haskell and now I think the hype is justified.


When I started the most confusing things were pointers and OO-Concepts.


The first concept I had trouble understanding was variables, when I tried to learn Visual Basic (my first language) many years ago. The book I was using never bothered to explain them properly, and the whole notion of "Dim X as Variable" was alien to me: Why would you need to declare variables before using them? What does the keyword 'Dim' even mean? Why do you need variables if you could use the values directly? etc.

Then when I learned C some years later, I had trouble with pointers. I understood how to use them, but I couldn't understand why you'd need them. I guess when trying to explain difficult concepts to beginners, you should always try to give them examples of real practical use. The C tutorial I was following said you could use pointers to allocate heap memory, but didn't tell me why I'd need to allocate memory.

I never had trouble with OOP. It seemed pretty logical and intuitive to me. It's closer to the way people think.


Monads have always been somewhat opaque to me. I understand the basic laws and such, but anything beyond Haskell's Maybe monad is a little beyond me right now.
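
For what it's worth, the Maybe idea has a rough C++ analogue in `std::optional`: each step may fail, and a hand-written "bind" short-circuits the rest of the chain. This is only an analogy (the function names are made up), not Haskell's actual monad machinery:

```cpp
#include <optional>

// "Just x" or "Nothing": each step may fail.
std::optional<int> positive_only(int x) {
    if (x <= 0) return std::nullopt;  // Nothing
    return x;                         // Just x
}

std::optional<int> half_if_even(int x) {
    if (x % 2 != 0) return std::nullopt;
    return x / 2;
}

// A hand-written "bind": run the next step only if the
// previous one produced a value.
std::optional<int> pipeline(int x) {
    auto a = positive_only(x);
    if (!a) return std::nullopt;     // failure short-circuits
    return half_if_even(*a);
}
```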


Project management

Requirements, specs, interface documents, architecture docs, test plans, etc.

It took a while, but it went from "unnecessary overhead" to "absolutely necessary" to do anything maintainable.


Windows API in general. &!#$%$"!


When I was a 9yo kid learning BASIC from the book that came with my computer, it took me a while to realize that NEXT jumped back to the top of the FOR loop.


I think that in general the hardest part is the general shift in the way we think; programmers think about most things differently than most other people, especially when presented with a problem. When I speak with other computer people I can usually tell right off the bat whether or not they are a programmer, just by the way they think. When confronted with a problem, a typical person looks at the problem as a whole and tries to "eat the entire elephant all at once", but when a programmer gets a problem they instinctively break it down into smaller, easier-to-chew bits.

This way of thinking is not something that can be taught in a classroom; some people are born with it, others learn it. And I think this process of learning how to think is by far the hardest part of becoming a successful programmer.


The hardest concept for me has always been Windows geometry. From the origin being the top-left, to viewports and mappings and dialog units and dpi, from screen co-ordinates to client co-ordinates, it has always been a bit of a mind fornication trying to get drawing and hit-testing code right first time. And that's without mentioning rounding errors (which have caused me no end of headaches in the past).

I find it all much easier now because I've been burned in the past, but still, that was a hard thing to get my head around initially.

Besides that, what was part of the language and what was provided by a library was also something I initially struggled with. Such as: "for" is a language keyword whereas "printf" is not.


For me I would have to say it was many levels of indirection. Whether it was assembler or C having pointers pointing to pointers or arrays of pointers. It gets messy pretty quick. Not to mention the additional level of confusion that segments could add to the equation on Intel 16 bit processors.

I think universally most people don't grasp memory management. Whether it's allocating and de-allocating memory and resources in C or creating collections of objects in an OOP language. The reason that I say this is because so many people get it wrong.


Mostly C/C++ related things.

printf format specifications - I never quite understood how this worked till I worked on code that mimicked printf. What made it worse was that our lecturer didn't allow us to use cin/cout even though that's what the textbook prescribed. His view was that we shouldn't use code we don't understand - and we didn't understand streams.

How to read input - This was hard because I didn't fully understand the portability issues.

Placement new - The concept is easy, I just kept forgetting what it meant because I never used it

The hardest part - bar none - was understanding OOP. It took me a few years of programming to finally get it. Every time I thought I finally understood it, sooner or later it would dawn on me that I was wrong. It was a very humbling experience though. I learned what a profound statement it is to claim that you "understand" something.


Continuations. I still don't quite get them properly.

Yeah I know they're not really a beginner subject :-)


Writing an anonymous recursive function using a fixed point combinator, such as the Y combinator.
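
A sketch of the idea in C++ terms: the inner lambda never refers to its own name; it receives itself as an argument. This is the self-application trick underlying fixed-point combinators, not a faithful Y combinator:

```cpp
// Anonymous recursion: the lambda is passed to itself, so no
// named recursive function is needed (uses C++14 generic lambdas).
int factorial(int n) {
    auto self = [](auto self, int n) -> int {
        return n <= 1 ? 1 : n * self(self, n - 1);
    };
    return self(self, n);
}
```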


OOP is really simple - you just start to use classes. And you can make great inheritance hierarchies out of those classes to really facilitate code reuse. And of course the mighty design patterns - you can use singletons all over the place.

Sadly, for most programmers OOP means using classes for namespacing. Which is a great concept to grasp too, but as many have pointed out: true OOP is not that easy to understand.


I had no problems with pointers, pointers to pointers, method pointers, etc..., but I got started with assembly very early, before learning C/C++, so that may be the reason. What took me a long time to get right is good class design, with all the intricacies of abstract classes, interfaces, inheritance, design patterns. OOP is deceptively easy, but it can be tricky to get right when you start dealing with more than a half dozen related classes. I still look at code from 1990-1995 and cringe.


Starting out, following the 'C' text book was pretty easy - and so exciting I stayed up half the night writing the little example programs to subtract two numbers etc. etc.

The hard part was going from there to writing programs that actually do something useful, organised into functions, classes and modules. In my first holiday job I was writing some test software for a hardware engineer and I wrote the whole thing as one big function :-) The hardware guy didn't notice anything wrong, but on my last day another software engineer realised what I'd done, took me to one side and explained about using separate functions...


The difference between server side and client side in Web App programming


These are the things that I find new developers have the biggest problems with:

  • Variable scoping in ASP.NET. It won't be there when you next post back!
  • Just because you IM'ed or emailed me doesn't mean I'll be responding immediately
  • It's better to try and fail than to not try at all

When something doesn't behave as expected, I'm almost always the problem.
The things that are almost never the problem include

  • The Compiler
  • The Network
  • The Database
  • The Operating System
  • The Application Server
  • The IDE
  • The Third Party Library

This is not to say that they cannot be the problem, but I'd better assume that I am the problem, and prove that I am not, before I spend time looking at any part of the above list.


I had a very difficult time understanding Hash Tables in my undergrad courses. I remember being scared anyone would start talking about it. It just made me nervous to think about it.

It wasn't that I didn't understand the concept; rather, I didn't understand how to properly use a hash table, when to use it, and why anyone would want to use one.

The first real programming job I had required me to work with them. Since then, I have gained a better understanding of how, when & why to use a hash table. I wouldn't say I'm an expert on hash tables, but I no longer recoil in fear at the mention of one.
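
A typical "when and why" case that makes hash tables click for a lot of people: counting occurrences, where the hash table gives O(1) average lookup instead of scanning a list every time (a minimal sketch using C++'s `std::unordered_map`):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Count occurrences of each word; operator[] creates missing
// keys with a zero count, which we then increment.
std::unordered_map<std::string, int> word_counts(
        const std::vector<std::string>& words) {
    std::unordered_map<std::string, int> counts;
    for (const auto& w : words) ++counts[w];
    return counts;
}
```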


How to structure my code (which was basically if()s and printf()s) to avoid swapping floppy disks in the two disk-drives too often as the SAS/C compiler did its thing.


I've always found the hardest part of programming to be the person explaining it - and their arrogance.

Today I picked up PuTTY - never used it before - and had rude comments thrown at me. But funnily enough I didn't give him the same treatment when he wasn't sure what MVC was.

It's the people, dude: the specific types who wish coding was a hacker's black box and a Masonic order, never really wanting you to know.

That's why I vow to always do my best on this here forum (dons cape).


I struggled with pointers when I started doing C++.
I think I suffered from not learning enough C first.

What got me through it was a combination of re-reading the textbooks and sitting with a text editor and a compiler and trying things out until it all came together in my head.


It took me 2 months to finally understand OOP, then 2 more weeks to actually GET IT, and functional programming is still giving me a bit of trouble.


Linked lists and sorting.

I was just about 12 years old, starting with Pascal. Up till then I was only aware of simple arrays and strings, and then my uncle introduced me to the wonderful world of pointers that point to the same struct as the one they are in.
After I figured that out he tried to teach me quicksort, but that was a tad too much.


Pointers had to be one of my biggest problems I struggled with. Referencing and Dereferencing them, etc. I overcame the problem by following tutorials and reading as much as I could about them. It was a happy day for me when I figured them out.


Can't remember what I struggled with, it's been too long and I was too young.

That being said, what I see most OO programmers struggle with is NullReferenceException. So many people can't grasp that you can't call methods on null.


Pointers were damn confusing.

One thing that also bugged me was optimising my code, I never knew when to stop with the minor performance tweaks that make so little difference they weren't worth the time to implement.


The fact that every single decision you make in software engineering is a tradeoff. Being able to recognize these tradeoffs is a fundamental skill that isn't necessarily explicitly talked about. There are many classic tradeoffs (memory vs. speed, security vs. performance etc). Every design decision you make is in some way a tradeoff.


For a C++ programmer, the first hard concept would be pointers. Especially references (&) and function pointers. Also, pointer arithmetic was hard until I was actually looking at memory and watching the pointer move.


Recursion. Pointers are annoying, but the concept makes sense. Object oriented programming seems intimidating, but it's intuitive once you grasp the basic concept.

But recursion? I still have a horrible time with it.


I find that most junior programmers have a hard time knowing when and how to use singletons and statics in OOP. Especially if they come from a functional/procedural background. They most often use them to namespace their functions.


A couple of decades ago, but...

Moving from BASIC to Z80 assembler was difficult. Just coming to terms with how sparse a language Z80 really was. (And it was positively rich by comparison to 6502.)

Some time later, the move from Pascal to C I found more difficult than I should have. So many symbols, so few words.

C++ was never a problem, but templates caused a bit of confusion at the time as did moving from home spun loops to iterators.


Functions. When I started programming (in C, at 14) I had a hard time understanding what they were for and how to use them appropriately. Couldn't we just put all the code in main()?
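
A toy answer to "couldn't we just put it all in main()?": once a calculation is its own function, any caller can reuse and test it instead of repeating the formula (the names here are made up for illustration):

```cpp
#include <cstdio>

// The conversion lives in its own function, so it can be reused
// and tested independently of any printing code in main().
int celsius_to_fahrenheit(int c) {
    return c * 9 / 5 + 32;
}

// A separate caller that reuses the function above;
// main() could call this too.
void print_row(int c) {
    std::printf("%dC = %dF\n", c, celsius_to_fahrenheit(c));
}
```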


I've found that variables can be a hard thing for designers learning to write code with no math background. I've had many forehead slapping moments trying to explain this to them. ("what do you mean you don't know what a variable is? didn't you take basic algebra at some point in your life?...")


For C/C++, it was always pointers and references that blew my mind. I was young at the time, though.

In Java, threaded programming hasn't necessarily blown my mind but it always ends up being stickier than originally anticipated.


The balance between trying to solve the problem directly in front as quickly as possible and trying to develop a solution that would be reusable under all conceivable circumstances. Writing code that solves the problem expeditiously, can be extended without a huge rewrite/redesign, and suggests itself for re-use.


I'd say that it's the concept of the user. All programs are basically written for other people, and it's them one should think of first.


Why, why should I use something? Before you actually get some programming done and into a programming mindset, it's kind of hard to see where something would be useful. A lot of the examples that are often given are somewhat trivial and convoluted, intended to explain the how-to-use and not the why-to-use. Often after looking at such examples someone would end up asking: "Well, couldn't I have used something else?" or "Why would I ever want to do that?"

An example: my girlfriend was learning JavaScript for one of her web design classes, and I explained how a for loop worked, but had trouble explaining why she would want to use one. Despite using them all the time myself, I had trouble coming up with a simple real-life example.
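
Here's the kind of simple real-life example that was missing (sketched in C++ rather than JavaScript, with made-up names): the loop does the same thing once per item without you writing the code once per item.

```cpp
#include <string>
#include <vector>

// One greeting per name on a sign-up list: the loop body is
// written once, however long the list is.
std::vector<std::string> greet_all(const std::vector<std::string>& names) {
    std::vector<std::string> greetings;
    for (const auto& name : names)
        greetings.push_back("Hello, " + name + "!");
    return greetings;
}
```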


Template programming.


I am still struggling with the visitor pattern.


Pointers and memory management.


The hardest thing for me was, and still is, using polymorphism, inheritance, and interfaces correctly. The concept of polymorphism has never been hard for me to understand, but one of my biggest realizations in programming came when I started looking hard at using polymorphism and inheritance to make writing code easier and to eliminate duplicate code.


The hardest concept to grasp for me was Exceptions. And in a way I still struggle with them to this day.

Exceptions are clouded in mystery. On one hand you have an incredibly flexible built-in idiom for error handling. But then you have a myriad of best-practice rules that invariably, if followed to the letter, turn this whole framework into a rare occurrence in your code. If not even entirely absent.

From performance considerations to idiomatic dogmatism, Exceptions are one area of programming in a language like C++ that feel very much like a tempting forbidden fruit. It's right there in front of you, you stretch towards it, and promptly someone rushes in and slaps your hand. Frustrating.


The difference between thread-safe and reentrant.
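
A sketch of one half of that distinction (reentrancy; thread safety would additionally involve synchronisation): the first counter hides its state in a static variable, so interleaved callers interfere even without threads, while the second takes all its state from the caller. The function names are made up for illustration.

```cpp
// Not reentrant: hidden shared state means two interleaved
// callers (or a signal handler) step on each other.
int next_id_hidden() {
    static int id = 0;
    return ++id;
}

// Reentrant: all state is owned and passed in by the caller.
int next_id(int* id) {
    return ++*id;
}
```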


Delegates here. They just seemed to be a waste of time: why would you create something with a method signature that matches your own methods? I have a ton of void methods with no parameters, so I end up with a "ReturnsVoidDelegate" I use for everything. Why would C# make me do that?

Hurrah for .NET 3.5/4.0, where we get nice inline delegates and lambda expressions :)


Pointers in C.

If you crack them, you know how memory management works.