Employers need their star programmers to be leaders – to help junior developers, review code, perform interviews, attend more meetings, and in many cases to help maintain the complex legacy software we helped build. All of this is eminently reasonable, but it comes, subtly, at the expense of our knowledge accumulation rate.
From Ben Northrop’s blog
It should come as no surprise that I am in almost complete agreement with Ben's piece. If I may nuance it a bit, though, it is on two separate points: the "career" and some of the knowledge that is decaying.
On the topic of career, my take as a 100% self-employed developer is of course different from Ben's. The hiring process is subtly different, and the payment model highly divergent. I don't stay relevant through certifications, but through "success rate", and while clients may like buzzwords, ultimately they only care about their idea becoming real with the least amount of time and effort poured in (for v1, at least). Reading up and staying current on the latest flash-in-the-pan, as Ben puts it, allows someone like me to understand what the customer wants, but it is by no means a requirement for a successful mission. In that, I must say I sympathize, but I look at it the same way I sometimes look at my students who get excited about an arcane new language. It's a mixture of "this too shall pass" and "what makes it tick?".
Ultimately, and that's my second nuance to Ben's essay, I think there aren't many fundamentally different paradigms in programming either. You look at a new language and put it in the procedural, object-oriented, or functional category (or a mixture of those). You note its resemblances and differences with things you know, and you kind of assume from the get-go that 90% of the time, should you need to, you could learn it very quickly. That's the kind of knowledge you will probably never lose: the basic bricks of writing a program are fairly static, at least until we switch to quantum computing (i.e. not anytime soon). However fancy a language or a framework looks, the program you write ends up running roughly the same way your old programs used to. They add, shift, and xor bytes; they branch and jump; they read and store data.
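To make that concrete, here is a toy sketch (mine, not Ben's) of the same computation written in two of those paradigm styles. The surface syntax differs, but both boil down to the same loop of additions and multiplications once the machine gets hold of them:

```python
from functools import reduce

def sum_squares_procedural(xs):
    # Procedural style: explicit loop, mutable accumulator.
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_squares_functional(xs):
    # Functional style: reduce over the list, no visible mutation.
    return reduce(lambda acc, x: acc + x * x, xs, 0)

data = [1, 2, 3, 4]
print(sum_squares_procedural(data))  # 30
print(sum_squares_functional(data))  # 30
```

Once you see that mapping, picking up the "new" paradigm is mostly a matter of learning where the familiar bricks are hiding.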
To me, that's what makes an old programmer worth it (not that the general population will notice, mind you): experience, and sometimes a math or theoretical CS education, brought us the same kind of arcane knowledge that exists in every profession. You can kind of "feel" that an algorithm is wrong, or how to orient your development, before you start writing code. You have a good guesstimate, a good instinct, that keeps getting honed through the evolution of our field. And we stagnate only if we let it happen. Have I forgotten most of my ASM programming skills? Sure. Can I still read it and understand what it does and how, with much less of a sweat than people who've never even looked at low-level debugging? Mhm.
So, sure, it'll take me a while to get the syntax of the new fad, as opposed to some new unencumbered mind. I am willing to bet, though, that in the end what I write will be more efficient. How much value does that have? It depends on the customer or manager. But with Moore's law coming to a grinding halt, mobile development with its set of old (very old) constraints on the rise, and quantum computing pretty far on the horizon, I will keep on saying that what makes a good programmer good isn't really how current they are, but what lessons they learn.