Wall? What Wall?

The excellent Mike Lee (@bmf on Twitter) has a hilarious way of handling theoretical problems: he ignores them in order to solve them.

In a case of life imitating art imitating life, programmer Mike Lee explained how he came to write a solution to the halting problem: lacking a formal education in computer science, he simply didn’t realize it was considered unsolvable.

To solve the halting problem is to write a function that takes any function as input—including itself—and determines whether it will run forever or eventually stop.

The classical approach is to run the function and wait to see whether it halts, but since the solver must accept itself as input, it can end up waiting on itself forever.

This paradox is what makes the problem unsolvable, but Lee’s function avoids the paradox by using a different approach entirely.

“It simply returns true,” Lee explained. “Of course it halts. Everything halts. No computer is a perpetual motion machine.”
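
Just to make the joke concrete, here is roughly what such a “solution” looks like, next to the classic counterexample that makes the real problem unsolvable (an illustrative Python sketch, not Mike’s actual code):

    def halts(program):
        """Mike's 'solution': every real computation eventually stops,
        if only because the machine running it eventually will."""
        return True

    def contrarian():
        """The classic counterexample: if halts() were a genuine oracle,
        this function would contradict whatever answer it gives about itself."""
        if halts(contrarian):
            while True:
                pass  # loop forever, making halts(contrarian) wrong
        return "done"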

That being said, the scientists-versus-engineers divide is an old one. Computer science started out as a branch of mathematics, and was treated as such for the longest time. When I was in college, we didn’t have a single exam on an actual machine. It was all pen and paper!

Part of the problem with any major “it can’t be done” roadblock is the sacrosanct “it’s never been done before” or “so-and-so said it can’t be done”. The truth, though, is that technology and knowledge make giant leaps forward these days, mostly because of people like Mike who just want to get things done.

Just remember that a few decades ago, multi-threading was science fiction. Nowadays, any programmer worth their salt can build in a “hang detector” to monitor whether part of their program is stuck in an infinite loop or has exited abnormally. Hell, it’s hard to even buy a single-core machine!
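
What I mean by a “hang detector” is nothing magical, and certainly not a halting-problem oracle: just a watchdog that flags the program as stuck when a heartbeat goes stale. A minimal Python sketch of the idea (the names and timeouts are mine, purely illustrative):

    import threading
    import time

    last_heartbeat = time.monotonic()

    def worker():
        """The monitored task: it must refresh the heartbeat regularly."""
        global last_heartbeat
        while True:
            # ... do one unit of real work here ...
            last_heartbeat = time.monotonic()
            time.sleep(0.1)

    def watchdog(timeout=5.0):
        """Declares the worker hung if the heartbeat goes stale for too long."""
        while True:
            time.sleep(timeout / 2)
            if time.monotonic() - last_heartbeat > timeout:
                print("worker looks stuck: no heartbeat for %.1f seconds" % timeout)

    threading.Thread(target=worker, daemon=True).start()
    threading.Thread(target=watchdog, daemon=True).start()
    time.sleep(30)  # keep the demo alive long enough to observe both threads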

I distinctly remember sitting in a theoretical computer science class, listening to a lesson on Gödel numbering. To oversimplify what I was hearing, the theorem was about how any program can be represented by a single number, however long. About five minutes in, I was saying in my head, “duh, it’s called compiling the program”. Had I said that out loud, though, I’d probably have gotten in a lot of trouble.
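
To illustrate the (over)simplification: any program’s text really can be packed into one, admittedly enormous, integer and unpacked again. This little Python sketch is not Gödel’s actual prime-power encoding, just the same idea in its laziest form:

    source = 'print("hello, world")'

    # Pack the program's bytes into a single (huge) integer...
    number = int.from_bytes(source.encode("utf-8"), "big")

    # ...and recover the exact same program from that one number.
    size = (number.bit_length() + 7) // 8
    recovered = number.to_bytes(size, "big").decode("utf-8")
    assert recovered == source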

Don’t get me wrong: I think that mathematical analysis of computer programs is important and worthwhile. I’d like to be able to talk about optimization with a whole lot more people (like why you just don’t use an O(n³) sorting algorithm, please…). But whereas I see it as a valuable tool for proving something positive, I stop listening whenever something is deemed impossible.

Trust Mike (and to a lesser extent me) on this: if something is impossible, it’s probably because the right tools haven’t been used yet. Maybe they don’t exist. And I’m ready to acknowledge that there is a probability they won’t exist any time soon. But “never” is a long time for anything to (not) happen.

UPDATE: it seems that people link this with my earlier rant about skeuomorphism. True, it does ring familiar: we do things the way we’ve always done them, because we can’t do otherwise. Right?

  

Happy Birthday Alan

Alan Turing is considered to be the father of computing (at least by those who don’t believe in Mayan computers, secret alien infiltrations, or Atlantis). He would have turned 100 this year.

Computers are everywhere nowadays, and pretty much anyone can learn very quickly to use one. But you have to remember that up until the fifties, people were paid to do calculations. For all the complicated operations behind astronomical charts and the like, the post of calculator was held in high regard, and the fastest (and most accurate) ones could name their price.

Machines have been around for a long time, but there was no adaptability to them: the intelligence stayed in the hands of the user. Complicated clockwork machinery could perform very delicate tasks, but not by itself. And repurposing one of these machines to do something it wasn’t built for was close to impossible.

Basically that’s what Turing pioneered: a machine that could be repurposed (reprogrammed) all the time, and could modify its own behavior (program) based on its inputs.

Before Turing, what you had was an input -> tool -> output model for pretty much everything.
After him (and we can’t help but smile when seeing how pervasive these things are today — even my TV!), the model switched to input + memory -> tool -> output + modified memory (+ modified tool).

Meaning that two consecutive uses of the same tool might yield different results with the exact same inputs. Of course, it’s better if the modification is intentional and not the result of a bug, but he opened up a whole new field of possibilities.
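
To put that shift in modern terms (a toy Python sketch of the two models, nothing Turing ever wrote): the pre-Turing tool is a fixed function of its input, while the post-Turing one also reads and rewrites its own memory, so the same input can legitimately produce different outputs on consecutive runs.

    # Pre-Turing model: same input, same output, every single time.
    def fixed_tool(x):
        return x * 2

    # Post-Turing model: the output also depends on memory, and the tool
    # modifies that memory as it runs.
    def programmable_tool(x, memory):
        result = x * 2 + memory["offset"]
        memory["offset"] += 1  # the tool rewrites its own state
        return result

    memory = {"offset": 0}
    print(fixed_tool(3), fixed_tool(3))                                # 6 6
    print(programmable_tool(3, memory), programmable_tool(3, memory))  # 6 7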

So happy birthday Alan, and thanks for giving an outlet to my skills. If you hadn’t been around, there would have been precious few other ways for me to whore my faulty brain!

  

Research vs Development

It has been true throughout the history of “practical” science: there seems to be a very strong border between “pure” research (as in academia, among others) and “applied” or “empirical” research (what might arguably be called inventing). I’m not sure where “innovation” fits on that scale, because it depends mostly on the goals of the person using the word.

But first, a disclaimer: I have a somewhat weird view of the field. My dad, even though he routinely dabbles in practical things, loves discussing theory and ideas. My mom, on the other hand, expresses boredom rather quickly when we digress ad nauseam on such topics. Growing up, I started with a genuine love for maths, scratching my head over theoretical problems and sometimes forgetting to eat before I had solved one of my “puzzles”. I then branched into a more practical approach when I started earning a living by writing code, before going back to pure research in biology and bio-computing, which ended badly for unrelated reasons. That led to a brute-force, pragmatic daily life for a while, which switched again when I started teaching both the theory and the practicalities of programming to my students. And now… well, I’m not exactly sure which I like most.

Writing code today is the closest thing I can think of to dabbling in physics back in the 17th century. You didn’t need a whole lot of formal education; you pretty much picked up whatever you could grab from experience and the various articles and books from the people in your field, and submitted your theories and your inventions to some kind of public board. Some of it was government (or business) funded, to give your benefactors a competitive advantage in military, commercial, or “cultural glow” terms. Some of it came from enthusiasts who were doing other things in their spare time.

Some people would say the world was less connected back in those days, so the competition was less fierce, but the world was a lot smaller too. Most of the Asian world had peaked scientifically for religious, bureaucratic, or plain self-delusional reasons, and the American and African continents weren’t even on the scientific map, so the whole scientific world was pretty much Europe and the Arab countries. Contrary to what most people I’ve chatted with about that period seem to think, communication was rather reliable and completely free, if a little slow. Any shoemaker could basically go “hey, I’ve invented this in my spare time, here’s the theory behind it and what I think it does or proves” and submit it to the scientific community. True, it could take a long time to get past the snobbery, sometimes, but the discussion was at least relatively free. Kind of like the internet today.

Back in those days, the two driving forces behind research were competition (my idea is better than yours, or I was the first to figure it out) and reputation (which attracted money and power). Our scientific giants sometimes did things that were morally and ethically wrong (like Galileo grabbing the telescope to make a quick buck, albeit in order to finance his famous research, or Newton ruthlessly culling papers and conferences to stay in power at the head of the Royal Society), but to my knowledge, they never intentionally prevented any kind of progress.

That’s where the comparison kind of falls short with today’s research and development. First of all, the gap between pure research and practical research has widened considerably. No one with less than ten years of studying a particular field is going to be granted a research post, both because the amount of knowledge required to build on all that we know is simply humongous and because pure research is notably underfunded. Then there is the practical development side, which has the same kind of educational problem: the systems we deal with are complex enough with a degree, so without one… And the amount of money and effort poured by companies into these projects simply can’t tolerate failure.

That’s obviously not to say that such research doesn’t exist anymore, far from it. I’ve had the chance to spend some time with the people from the ILL, a research facility devoted to neutron physics, and wow. Just wow. And obviously, from time to time we developers are involved in some cool new project that no one has done before (hush hush). But the entry barrier is a lot higher. I wouldn’t qualify for research, even though I almost started a PhD and am not entirely stupid, and however good the reviews of my work, I guess I’d still have to do R&D on my own before anyone gave me a big wad of bills to pay for a project of mine.

Getting back to the point: while academia doesn’t seem to have changed much in the way it operates (though the hurdles to get in have changed a lot), the practical side of research has changed dramatically. Global markets mean fiercer competition. In order to attract the right people, a company has to pay them better than its rivals, and in order to do that, it has to make more money per employee. But to make more money per employee, there has to be either very few rivals (a monopoly) or a clear-cut quality advantage. The second strategy requires attracting the best people and taking more risks, while the first requires a better defense.

And that’s where the slant is today: it’s actually a lot cheaper and less risky to work secretly on something new, slap a couple of patents on it to get a de facto monopoly, and live off the dividends it will assuredly bring. That’s the reasoning, anyway.

  

Software Piracy & The Genuine Customer

Piracy will always exist. Get over it.

An idea, however smart and new, is going to spread. A better method of doing old things is going to be used by people who recognize its value without wanting to pay for that realization. The only thing that might not necessarily get plundered is a way of presenting things, because there’s no accounting for taste, and besides, copying that is a little too blatant.

I know, I know. You’ve just spent a couple of years developing that piece of software that’s new, cool, hip, awesome, and altogether the whole source of your pride. And just one week after you publish it, there’s that miscreant who takes it all and presents it as his own. But surely everyone in their right mind knows it was just taken from you, right? I mean, come on! Apart from those two inverted text fields, it’s the same thing… Even the logo looks the same! It might be acceptable, maybe even flattering, if the Other One didn’t make more money out of it than you do…
You are so pissed off that you swear that next time, you’ll make it really hard to copy, or understand, or use without your explicit consent. If there’s a next time, that is, because right now there seems to be some confusion in your mind as to whether you should be depressed or angry.

My advice is “just drop it”. It’s not worth the outrage. There are a lot of clever people out there. Ultimately, if your idea is profitable or just downright awesome, someone will figure out a way to put it to better (or more profitable) use.

After all, you came up with the idea, right? So why spend time and energy making it less usable because of the 0.01% of the human race who are going to screw you in any way you can(’t) think of? Wouldn’t it be better to actually improve on it, and make it so perfect that 99.99% of the population will think “What the hell, I know I could spend a month and come up with an alternative, but it’ll never be as good as this one, so I might as well just use it as is (and pay the somewhat small fee involved)”?

The reason I write about this today is that, for once, I’m in the position of the customer (or the customer’s aide), and I despair at all the silly anti-piracy measures some “fellow developers” have taken, measures that prevent me from making fair use of their technology.

A charitable organization is holding a gathering to promote their overall goodness (and they are good people, embarking on quite a noble voyage), and to attract attention to a very real and very important problem. I might talk about that sometime later.

Trouble is, the venue doesn’t have internet access, and their website, which they are proud of and which is the main way to contact them, is indispensable. So the obvious solution is to copy the website (which was paid for) onto a computer inside the venue, to give visitors access to it.

I was tasked with that small request, and after a few days of talking a lot on the phone and waiting even more, I ended up with the relevant files. It turns out most of them rely on a custom engine (which wasn’t included in the package), and some of the vital files are stored encoded, to be decoded on the fly by the engine when needed. Unusable.

So let me get this straight: a customer paid for a website, and they can’t show it at a private gathering for fundraising and general awareness.

This would be like owning a car for which you have to phone the manufacturer every time you want to start it up.

Now, smarter people than me have debated that field ad nauseam but the question still remains: is a piece of software a manufactured good or an idea?

In many ways, since we buy “a software”, have a copy on our hard drives, can put it where we want, and delete it on a whim, it’s akin to a piece of furniture. Instinctively, it’s “ours”.

What makes it less obvious is that it’s so easy to copy it and to give it to someone else. If you buy a table, and give it to somebody else, you don’t have a table anymore. With a piece of software (including movies, music, etc), you can give it away while keeping a copy at the same time.

The worst part is that most people don’t do it maliciously: it’s more out of goodness than greed. “Hey, I found this program that makes coffee just the way I like it, want to try it out?” The other party, being handed the goods, doesn’t see it as stealing, not really. They’re just trying it out, or they don’t think anyone is being robbed by the act, or they figure the software is being paid for by other people anyway, given the outrageous price tag.

As with most things, I think it’s a question of education and message. If the recipient is aware that it’s wrong to accept, they will make it right in their own way, and in their own time.

I have a friend who has 10,000+ CDs at home. If he likes a band, he buys the album. I have more than once gotten a copy of a piece of software from a friend to look at. If I ended up using it for real, or if I used it to make money, I paid for it.

How do we educate people to understand that this is someone else’s work and that it should be rewarded as such? The easiest (and to me worst) way is to be repressive about it. The current anti-piracy campaigns around music and movies make it obvious: if you participate in the plunder, you’ll end up in jail. I think it doesn’t work, and I think it even pushes people who were “moderates” toward more extreme reactions.

Come on, we’ve all been teenagers. Authority (especially faceless authority) doesn’t work half as well as Authority thinks. Besides, teenagers don’t think they are doing anything wrong when they share something they like with their friends. If anything, they are doing the author a favor by promoting the work. Authority, therefore, is brutally stupid, and should be ignored.

So, how do we get these “confused” people, who think they’re not doing anything wrong, to understand that they are actually depriving us good developers (and artists) of our living? My view of the field is a little biased, as I do freelance work and know quite a lot of artists who get a reasonably big chunk of the retail price. I guess things are a little different when the middleman (label, publisher, etc.) takes the biggest share of the sale, but here goes:

  • Be somewhat transparent about how the price is set. The price tag has to fit the instinctive value of the software. Who the hell pays 4,000 euros for a piece of software they will use once? That smells too much like preying on desperation.
  • Make sure the end customer knows who you are. Faceless implies meaningless. I think it’s a lot harder for them to rob you if they feel they know you.
  • Make sure you know who your customers are (in at least a general way). Reply to their emails, thank them for their feedback, make it clear you work for them. It’s your work, but you didn’t do it for yourself. No one likes a selfish and greedy bastard.
  • Don’t force your customers to do something they don’t want to do. If they don’t want to pay for your work, they shouldn’t profit from it, agreed. But if they put some effort into it, they’ll manage to anyway. Being held hostage doesn’t automatically evolve into Stockholm syndrome… Most of the time it just breeds resentment.
  • If it doesn’t cost you a lot, be flexible. The case above is an obvious example: there shouldn’t be any problem exporting a “degraded but working” version of the website that can be used offline. The customer (me, in this case) usually isn’t asking for much, in their own opinion. Bowing to their small request makes the relationship more cordial and personal. Next time you tell them something is difficult or not possible, they will understand, since you were understanding of their problem the previous time.

Granted, this way is slow. And by doing this, we are competing with the Big Boys out there, who are repressive and seemingly more efficient (at making money, if nothing else) than we are. It all depends on what kind of overall result we want to have…

I’d feel much better in a world where people understand my need to get paid for my work, and gladly pay it, than in a world where they do indeed pay, but knowingly try to screw me over because they think I’m not worth being paid. We’re far from there as of today. And as I’ve said numerous times here, I suffer chronically from it. But I think it’s a dream worth having, and worth working for.

What do you think?

[IMPORTANT UPDATE]

I’ve already gotten a couple of replies. No, it doesn’t mean I think all software should be open sourced. Flexible doesn’t mean giving away what you’ve worked so hard to accomplish. I’m just talking about the means to distribute it and get paid for it.

[IMPORTANT UPDATE 2]

Now that the feedback has abated slightly, there seem to be two major schools of thought: OpenSourcists (everything should be open source, that way it puts everyone back on an equal footing) and LOLYouAreSoNaive-ists (the world is unfair, accept the rules and make the best of it).

To the first ones I’ll say: I agree, it’s a good dream too. Unfortunately, a customer usually isn’t able to evaluate the quality of your work. Therefore, it’s not necessarily the best who will reap the benefits, but those best able to convince the customer to pay. Back to square one, I’m afraid.

To the second ones I’ll say: Yeah! Welcome to the Dark Ages v2.0.
Ethics should NOT be context-dependent. Otherwise, what’s the point?
Should we also abolish laws? They can be so tiresome, too… Or are they a way to keep score?
Just remember that evolution is not only “survival of the fittest”, but also about symbiotic relationships that bring a balance.
And that everyone else might take it as normal to screw you too. Including your own children.

I’m sorry, I still think there can, and should, be a better way to do things.

  

The process of building an application

I have been a freelancer for the best part of a decade now, writing code for whoever wants to hire me, usually picking projects that interest me rather than ones with public visibility.

Over the years, I have tried several methods to incrementally go from concept to end product, and I must say that process is usually a thorn in my side.

Basically, what’s good on paper is this:
– Decide on the specifications
– Design a UI, from a technical and an ergonomic standpoint
– Build a working prototype with all the major building blocks in place, and pretty much everything working under the hood
– Tune the prototype’s UI to match the desired one
– Fix the engine parts
– Make the minor UI adjustments that are necessary
– Repeat the last two steps a few times

The iPhone has kind of changed that. Since it’s UI-centric and not functionality-centric, every “minor” change in appearance (or more accurately, in transitions, view controllers, and UI updates driven by back-end changes) can trigger a massive rewrite of the engine.

Basically, I find myself reverting more and more to the things I tell my students NOT to do: keeping global variables, putting all the information in a single object shared across the application, replicating the same code (give or take a couple of variables) in many places, and changing the behavior of my application based on the class of the untyped parameter passed to a function.
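
For the curious, here is the flavor of code I’m confessing to, in a hypothetical Python sketch (the names are made up, and yes, I tell my students not to write any of it):

    # One global grab-bag of state, shared by the whole application.
    APP_STATE = {"user": None, "theme": "light", "pending": []}

    def handle(thing):
        """Behavior switches on the class of an untyped parameter:
        exactly the habit I warn my students against."""
        if isinstance(thing, str):
            APP_STATE["pending"].append(thing)   # treat strings as queued messages
        elif isinstance(thing, dict):
            APP_STATE.update(thing)              # treat dicts as settings to merge
        elif isinstance(thing, list):
            for item in thing:
                handle(item)                     # recurse on collections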

Everything’s not bleak, of course, but as soon as you hit the boundaries of what the guidelines and Apple’s developer tools suggest as good UI, you’re on your own.

At the moment, my usual development cycle is broken. Most of the “small” modifications take about the same time as big ones. It’s not about changing the color of a button anymore; it’s about propagating the information back and forth through the whole application. And then changing the color of the button.

Just a few days ago, I was discussing Agile with an ex-student of mine. There are quite a few semi-formal methods out there for organizing a development team, but most of them have the incremental “feature request / bug fix” cycle built in at the end. I’m just trying to minimize the impact of that cycle in the last stages of development. Agile, or its “competitors”, provides some sort of guidelines for the initial stages, but in the end we always have to face that “tuning” process.

I wonder whether the variety of frameworks, languages, and platforms out there, each with its own set of rules and tools, might not benefit from some kind of optimization of that last part, or whether there’s a way for developers to come up with a decent solution to that universal problem. Right now, every time I switch environments, I have to design my apps and my development cycle slightly differently. And it’s very time consuming.

What was that joke again? The last 10% of application development takes more time than all the rest of it put together…

  

There and back again

I’ve always thought that to prove yourself good, you have to be out of your depth. One step toward that is to be out of your familiar surroundings. Preferably in a situation where not everything is close to hand.

In Philadelphia, the problem was threefold: getting up to speed with a project I was part of 10 years ago, finding out how and where my help would be relevant, and prototyping what could be prototyped.

All of that had to be done without shutting the place down or disrupting their workflow, and while rebooting the servers as seldom as possible.

I can’t really say that I didn’t disrupt their work, but in the end, when you have little to work with, you end up doing better work. That’s something most of the people I encounter in this business don’t get at first: I know we have 2 GHz machines and whatnot, but you should always target lower than your dev computer. The users will always find a way to overload their systems with resource-consuming things anyway…

That’s really my kick, that’s what I like to do: debugging and optimization. It’s also where the money isn’t these days. And boy, am I glad that there are still a few customers who actually give a damn about it!

Anyway, since the thing went well (rather well, let’s say), I had the opportunity to spend part of the weekend in the City That Never Sleeps, the Big Apple, New York City. Well, I guess this town had a bad influence on me: I didn’t sleep :D

And now I’m back, and out of my depth on another project. While this is not NYC, there are still quite a number of interesting things to do!