Research vs Development

It has been true throughout the history of “practical” science: there is a very strong border between “pure” research (as in academia, among others) and “applied” or “empirical” research (what might arguably be called inventing). I’m not sure where “innovation” fits on that scale, because it depends mostly on the goals of the person using the word.

But first, a disclaimer: I have a somewhat weird view of the field. My dad, even though he routinely dabbles in practical things, loves discussing theory and ideas. My mom, on the other hand, gets bored rather quickly when we digress ad nauseam on such topics. Growing up, I started with a genuine love for maths and scratching my head over theoretical problems, sometimes forgetting to even eat before I had solved one of my “puzzles”. I then branched into a more practical approach when I started earning a living by writing code, before going back to pure research in biology and bio-computing, which ended badly for unrelated reasons. That led to a daily life of brute-force pragmatism for a while, which switched again when I started teaching both the theory and the practicalities of programming to my students, and now… well, I’m not exactly sure which I like most.

Writing code today is the closest thing I can think of to dabbling in physics back in the 17th century. You didn’t need a whole lot of formal education: you pretty much picked up whatever you could grab from experience and from the articles and books written by the people in your field, and submitted your theories and your inventions to some kind of public board. Some of it was government (or business) funded, to give your benefactors a competitive advantage in military, commercial or “cultural glow” terms. Some of it came from enthusiasts who did other things for a living and tinkered in their spare time.

Some people would say the world was less connected back in those days, so the competition was less fierce, but the world was a lot smaller too. Most of the Asian world had peaked scientifically for religious, bureaucratic or plain self-delusional reasons, and the American and African continents weren’t even on the scientific map, so the whole scientific world was pretty much Europe and the Arab countries. Contrary to what most people I’ve chatted with about that period seem to think, communication was rather reliable and completely free, if a little slow. Any shoemaker could basically go “hey, I’ve invented this in my spare time, here’s the theory behind it and what I think it does or proves” and submit it to the scientific community. True, it would sometimes take a long time to get past the snobbery, but the discussion was at least relatively free. Kind of like the internet today.

Back in those days, the two driving forces behind research were competition (my idea is better than yours, or I was the first to figure it out) and reputation (which attracted money and power). Our scientific giants were only human and sometimes did things that were morally and ethically wrong (like Galileo grabbing the telescope to make a quick buck, albeit in order to finance his famous research, or Newton ruthlessly culling papers and conferences to stay in power at the head of the Royal Society), but to my knowledge, they never intentionally prevented any kind of progress.

That’s where the comparison falls short with today’s research and development. First of all, the gap between pure research and practical research has widened considerably. No one with fewer than 10 years of studying a particular field is going to be granted a research post. That’s both because the amount of knowledge required to build on all that we know is simply humongous and because pure research is notably underfunded. Then there is the practical development side, which has the same kind of educational problem: the systems we deal with are complex enough with a degree, so without one… And the amount of money and effort companies pour into these projects means failure simply can’t be tolerated.

That’s obviously not to say that this kind of research doesn’t exist anymore, far from it. I’ve had the chance to spend some time with the people from the ILL, a research facility devoted to neutron physics, and wow. Just wow. And obviously, from time to time we developers get involved in some cool new project that no one has done before (hush hush). But the entry barrier is a lot higher. I wouldn’t qualify for research, even though I almost started a PhD and am not entirely stupid, and however good the reviews on my work may be, I guess I’d still have to do R&D on my own before anyone handed me a big wad of bills to pay for a project of mine.

Getting back to the point: while academia doesn’t seem to have changed much in the way it operates (though the hurdles to get in have changed a lot), the practical side of research has changed dramatically. Global markets mean fiercer competition. In order to attract good people, a company has to pay them better than its rivals, and in order to do that, it has to make more money per employee. But to make more money per employee, there has to be either very few rivals (a monopoly) or a clear-cut quality advantage. The second strategy requires attracting the best people and taking more risks, while the first requires a better defense.

And that’s where the slant is today: it’s actually a lot cheaper and less risky to work on something new in secret, slap a couple of patents on it to get a de facto monopoly, and live off the dividends it will assuredly bring. That’s the reasoning, anyway.
