So… That’s How They Do Things Over There?

Every now and then, I have to fix up a project, generally huge and spaghettified. Don’t get me wrong, my own projects must look that way to other developers as well… The question is not the relative quality of code styles and habits; it’s about getting into a boat steered by ghosts and guiding it to calmer waters.

So how do you get to understand someone else’s code? We all have our own techniques, obviously, but I’ll list a few here that should help the desperate.

Doxygen

First off, run a Doxygen auto-documentation pass, with call and caller graphs. Doxygen is a freely available tool that can deal with many, many languages. Make sure you have dot (part of Graphviz) installed as well.

Go to the root of your project folder and type

doxygen -g

It will produce a Doxyfile, a text file containing a bunch of generation options. Edit it in your text editor of choice (I like BBEdit the most), and make sure the following options are set:

HAVE_DOT               = YES
CALL_GRAPH             = YES
CALLER_GRAPH           = YES

Then run

doxygen

Go make yourself a cup of coffee or something, it can be a long process. In the end, you get an HTML tree of files that gives you the class hierarchy, the class documentation, and, for each function, the call and caller graphs (what the function calls in the code, and what it is called by). Ideal for identifying the “major” functions called by a lot of others, and for evaluating the damage your changes will do to the rest of the code.

If for some reason no code files are examined, you may have forgotten to tinker with the input options (the folder in which the code resides, and the recursive search), as in the snippet below.
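For reference, the relevant lines of the Doxyfile look something like this; the source path is just a placeholder for wherever your code actually lives, and EXTRACT_ALL is optional but handy when the code carries no documentation comments at all:

INPUT                  = ./Sources
RECURSIVE              = YES
EXTRACT_ALL            = YES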

Alright, so now you have a static, broad view of the code. It’ll help you place breakpoints at the bottlenecks and get a better idea of the overall architecture.

Instruments

Now, the problem is dynamic: how does the program behave over time? Depending on the language and the platform, there is a variety of tools that can provide this information, but since I’m mainly a Mac/iOS coder, I’ll mention Instruments. Instruments is bundled with the developer tools, and there is nothing to set up to use it. Just open your project and, in the “Product” menu, run “Profile”.

Now, there’s a bunch of profile types you can get, and which matters most depends on what you’re supposed to do… If it’s “make it speedy”, you want the Time Profiler, which will give you the percentage of time spent in the various functions. If it’s a memory optimization, you have the Allocations and Leaks tools. Etc, etc… I won’t write a manual up here; you can find relatively thorough documentation on Apple’s website.
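If you’d rather record a trace from the command line (handy for scripting a run), the instruments tool bundled with Xcode can do it. This is a hedged sketch: depending on your Xcode version you may have to pass the full path of the .tracetemplate instead of its name, and the trace and application paths are placeholders:

instruments -t "Time Profiler" -D MyApp.trace /path/to/MyApp.app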

The important thing is that with these tools you will see the functions the program spends the most time in, the ones that use the most memory, that leak the most, that handle the most complex Core Data queries, or whatever else strikes your fancy. In other words, you’ll get a better picture of how the program behaves over time.

It takes a little practice to separate the merely relevant from the highly relevant, but it’s a less daunting task than figuring it out through printf, NSLog, or NSLogger.

Tying it all together

To take an overused metaphor, you now know where the engine(s) is (are), and how the beast handles on the road. Make sure you take the time to study the bottlenecks, both in the structure and in the profiles you built. That way, tinkering with things will be that much easier.

If the structural and temporal (or memory-usage) bottlenecks overlap, you have your work cut out for you: they should be the focus of your attention. If not, it’s a little trickier, and depends largely on the scope of your mission.

Just try to remember something: most of the time, the developer who wrote the code had a different set of constraints than your own. While it feels good to rant at these dimwitted douchebags, it’s not fair. Most people who would have to take over your code would say the same.

Coding is a matter of education, style, and constraints/objectives. If you switch any or all of these parameters, you end up with very different code blocks. Understanding how it works and how to fix it should take precedence, and ultimately provides the best feeling ;)

  

Chris Marker • 1921-2012

Chris Marker was a genius. And I’m not only talking about his work, which has been acclaimed worldwide for decades. He was my friend; he was kind, witty, compassionate with the deserving, very harsh with people who acted like fools, and the most observant person I have ever known.

Through the lens of his camera, through the letters he published, the cartoons he drew, the 3d constructs he made in the virtual worlds he sometimes inhabited, through his conversations around a cup of tea or a bottle of vodka, he saw everything noteworthy.

Back when I was a young fool, he taught me gentle criticism, the way to look at things, laugh at them if they were irrevocably stupid, and see how to improve on them if at all possible.

And through his friendship, he gave me confidence in my own abilities. With all his incredible accomplishments, with his glory, his notoriety, he took time to look at my work, for him or not, and tell me very simply that it was good, when it was.

He was the man who made me smile when things were dark, and now he’s the man who’s made me cry by his absence. I loved him, and I’ll miss him.

I know he didn’t believe much in what happens next. Whatever he found out, he’s made the most of it, and he’ll start on improving whatever he can. I’ll carry on doing the same here, without him, in his memory, less foolish for having known him.

  

The Bane Of Reality

Fiction is not enough. Apparently the masses want reality. The superheroes and master spies have to be explained and made to “fit into” the real world (by the way, thanks to the people who did Iron Sky, it was a welcome breather of absurdity and laughter).

In software terms, that gave us (drum roll) skeuomorphism, the art of mimicking real objects to help us poor humans deal with apparently complex functions.

The latest to date: Apple’s podcast application, which looks like a tape deck. Seriously. Man, I mastered the art of obscure VCR controls a long time ago… And now you want to simplify my life with an analogue of a defunct technology?

Don’t get me wrong, I really do think interfaces should be thought through and self-explanatory, but really? Who uses a binder these days? So what’s the point of the spirals on the left of your writing interface? I’ve never actually used a paper planner that I recall, so why give me that faux-leather look?

Some ideas are not based in the real world, but they quickly become THE way to do it, like pull-to-refresh, for instance, or pinch to zoom in and out. What’s the real-world equivalent of those? Do we need any equivalent?

I guess I’m not a natural target for software anyway: when I take a look at a program, I want to know what it does for me. Let’s say I want an app that gives me remote control of my coffee maker. I’m heading back home after a tiring day, and I want a coffee that’s strong (more coffee in it) and that was finished 5 minutes before I get home (because coffee has to cool down a little bit). Do I want to drag and drop the number of spoons from one half of the screen to the other to simulate the amount to pour in? Do I want the same kind of clumsy timers-with-arrows that already exist on these machines? Nope.

But I do want to know whether the coffee maker can make me coffee at all (because it’s washed up and ready to go), how much coffee is left in the reservoir, as well as the water level. I want to set the amount in as few gestures as possible, totally reliably, and I want to be able to just say “ready 5 minutes before I’m in” and let the location manager deal with it (one man can dream, right?).
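To make the “let the location manager deal with it” part a little more concrete, here is a minimal sketch in Swift (a language that postdates this post, used purely for illustration). The coffee-maker call is entirely hypothetical; the Core Location pieces (CLLocationManager, CLCircularRegion, region monitoring) are the real API, and the 3 km radius is a crude stand-in for “about 5 minutes away”:

import CoreLocation

// Hypothetical helper: start the coffee when I get close to home.
final class HomecomingBrewer: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    init(home: CLLocationCoordinate2D) {
        super.init()
        manager.delegate = self
        manager.requestAlwaysAuthorization()
        // Monitor a circular region around home and get called on entry.
        let region = CLCircularRegion(center: home, radius: 3000, identifier: "home")
        region.notifyOnEntry = true
        manager.startMonitoring(for: region)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        guard region.identifier == "home" else { return }
        // CoffeeMaker.shared.brew(strength: .strong)   // hypothetical coffee-maker API
    }
}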

There is a history behind physical controls. Designers, ergonomists, and engineers took the time to fine-tune them for daily use (with mixed results), and the ones that stayed with us for 20 years or more stayed because people “got” them, not because they liked them or thought they were a good analogy to whatever they were using before. Thank goodness we’re not driving cars with rein analogues, or bicycle-horn analogues.

It’s time to do the same with software. Until we have 3D-manipulation interfaces, we’re stuck in Flatland. And that means any control that was built for grabbing with multiple fingers at several depths is out (you hear me, rotating-dial analogue?).

If you want your users to feel comfortable with your software, make sure its function is clear to the intended audience. Then prettify it with the help of a designer. Different world, different job.

  

Happy Birthday Alan

Alan Turing is considered to be the father of computing (at least by those who don’t believe in Mayan computers, secret alien infiltrations, or Atlantis). He would have turned 100 this year.

Computers are everywhere nowadays, and pretty much anyone can learn to use one very quickly. But you have to remember that up until the fifties, people were paid to do calculations by hand. For all the complicated operations behind astronomical charts and the like, the post of calculator was held in high regard, and the fastest (and most accurate) one could name his price.

Machines have been around for a long time, but there was no adaptability to them: the intelligence was in the hands of the user. Complicated clockwork machinery could perform very delicate tasks, but not by itself. And repurposing one of these machines to do something it wasn’t built for was close to impossible.

Basically that’s what Turing pioneered: a machine that could be repurposed (reprogrammed) all the time, and could modify its own behavior (program) based on its inputs.

Before Turing, what you had was an input -> tool -> output model for pretty much everything.
After him (and we can’t help but smile at how pervasive these things are today, even my TV!), the model switched to input + memory -> tool -> output + modified memory (+ modified tool).

Meaning that two consecutive uses of the same tool might yield different results with the exact same inputs. Of course, it’s better if the modification is intentional and not the result of a bug, but he opened up a whole new field of possibilities.
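A tiny Swift sketch (my own illustration, obviously not Turing’s notation) of the difference between the two models:

// Before: input -> tool -> output. Same input, same output, every time.
func fixedTool(_ input: Int) -> Int {
    return input * 2
}

// After: input + memory -> tool -> output + modified memory.
// The tool carries state, so the same input can produce different outputs.
struct ReprogrammableTool {
    private var memory = 0

    mutating func run(_ input: Int) -> Int {
        memory += input          // the input modifies the memory...
        return input + memory    // ...and the memory modifies the output
    }
}

var tool = ReprogrammableTool()
print(fixedTool(3), fixedTool(3))   // 6 6: identical results
let first = tool.run(3)             // 6
let second = tool.run(3)            // 9: same input, different result
print(first, second)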

So happy birthday, Alan, and thanks for giving an outlet to my skills. If you hadn’t been around, there would have been precious few other ways for me to whore out my faulty brain!

  

One For The Money, Two For The Show,…

WWDC is just around the corner, featuring some 10.8 (dubbed “Mountain Lion”) excitement, maybe some iOS 6 news, and possibly some hardware upgrades, although I have my doubts about cluttering the developer conference with hardware announcements.

But since it’s coming, I decided to have a glance at Mountain Lion, to at least be able to follow the discussions. Now, it’s true I haven’t changed my setup in a while: my traveling companion is a late-2008 black MacBook (boosted in RAM and hard drive as time passed) that’s still more than a match for its current counterparts in terms of development. And to my mind, once it’s equipped with an SSD (which I can swap in very easily, by the way), it’s going to be somewhere between 20 and 40% slower than its 4-years-younger rivals. Yep.

The only things that have dramatically improved these past few years are the graphics and the number of cores. Since I don’t game on my MacBook, and can wait the extra 20 seconds it takes to finish compiling my biggest project (I tested), I feel confident this puppy will follow me a little bit longer.

But! Not so fast! 10.8 won’t run on it. Wait, what? For a frickin 20% penalty, I get to buy a new laptop in which I can’t change the hard disk, upgrade its RAM, or get an extra battery? Apparently so.

The official reason is that it won’t run in 64 bits. Wait, what? It does too! It runs 64-bit programs like a charm.

“No no no, you don’t get it, it won’t boot in 64 bits. That’s why we won’t support it.” Wait, what? Windows 7 boots on it in 64 bits. So does Linux. What’s the game here?

So, for my laptop, not only do I have to shell out 2k euros, but it’s for features I don’t care about (I have a console for gaming purposes, thank you very much), and at the expense of features I actually need (given that I work a lot with video, my hard drive has a life expectancy of a couple of years, tops). OK, well… for the laptop, I might actually be convinced, given that it’s pretty banged up. But that’s vanity, not a technical reason.

And it gets worse with my trusty Mac Pro. Four cores, 8 GB of RAM, and a pretty good video card (for gaming… that’s how it was sold, anyway), but still no go. The same “it won’t boot in 64 bits” shenanigans.

Except that we are absolutely not in the same game, price-wise. I can’t replace my Mac Pro with an iMac: I have 4×2 TB of storage in there, plus a boatload of things connected to its myriad ports. So I would have to replace it with a new Mac Pro. If one ever gets announced, the machine I’ll need will cost something like 4k to 6k. That’s a hell of a lot for a teeny tiny booting issue that was fixed in both the b1 and b2 beta releases of the OS, but was closed in b3 for no obvious reason.

I get that Apple is a hardware company and needs to sell hardware. And in the past, every time my computer slowed to unbearable speeds, I gladly upgraded my hardware. But this is not that. If someone forces you to do something for no other reason than “because we say so”, there’s a good chance of a backlash.

Oh, and by the way? VMware Fusion lets me run 10.8 in a virtual machine… on these two computers. And the speed is decent, too. So I hope Apple keeps behaving like the good guy, and doesn’t start using heavy-handed tactics for commercial reasons. They have the money (which I gave freely and abundantly over the years); they can afford it.

  

Research vs Development

It has been true throughout the history of “practical” science: there seems to be a very strong border between “pure” research (as in academia, among others) and “applied” or “empirical” research (what might arguably be called inventing). I’m not sure where “innovation” fits on that scale, because it depends mostly on the goals of the person using the word.

But first, a disclaimer: I have a somewhat weird view of the field. My dad, even though he routinely dabbles in practical things, loves discussing theory and ideas. My mom, on the other hand, expresses boredom rather quickly when we digress ad nauseam on such topics. Growing up, I started with a genuine love for maths, scratching my head over theoretical problems and sometimes forgetting to eat before I’d solved one of my “puzzles”. I then branched into a more practical approach when I started earning a living by writing code, before going back to pure research in biology and bio-computing, which ended badly for unrelated reasons. That led to a brute-force pragmatism in daily life for a while, which switched again when I started teaching both the theory and the practicalities of programming to my students, and now… well, I’m not exactly sure which I like most.

Writing code today is the closest thing I can think of to dabbling in physics back in the 17th century. You didn’t need a whole lot of formal education; you pretty much picked up whatever you could from experience and from the articles and books of the people in your field, and submitted your theories and inventions to some kind of public board. Some of it was government (or business) funded, to give your benefactors a competitive advantage in military, commercial, or “cultural glow” terms. Some of it came from enthusiasts who were doing other things in their spare time.

Some people would say the world was less connected in those days, so the competition was less fierce, but the world was a lot smaller too. Most of the Asian world had peaked scientifically for religious, bureaucratic, or plain self-delusional reasons, and the American and African continents weren’t even on the scientific map, so the whole world was pretty much Europe and the Arab countries. Contrary to what most people I’ve chatted with about that period think, communication was rather reliable and completely free, if a little slow. Any shoemaker could basically go “hey, I’ve invented this in my spare time, here’s the theory behind it and what I think it does or proves” and submit it to the scientific community. True, it could take a long time to get past the snobbery, sometimes, but the discussion was at least relatively free. Kind of like the internet today.

Back in those days, the two driving forces behind research were competition (my idea is better than yours, or I was the first to figure it out) and reputation (which attracted money and power). Our scientific giants sometimes did wrong, morally and ethically (like Galileo seizing on the telescope to make a quick buck, albeit to finance his famous research, or Newton ruthlessly culling papers and conferences to stay in power at the head of the Royal Society), but to my knowledge, they never intentionally prevented any kind of progress.

That’s where the comparison falls a bit short with today’s research and development. First of all, the gap between pure research and practical research has widened considerably. No one with less than 10 years of study in a particular field is going to be granted a research post. That’s both because the amount of knowledge required to build on all that we know is simply humongous, and because pure research is notably underfunded. Then there is the practical development side, which has the same kind of educational problem: the systems we deal with are complex enough with a degree, so without one… And the amount of money and effort companies pour into these projects simply can’t tolerate failure.

That’s obviously not to say it doesn’t exist anymore, far from it. I’ve had the chance to spend some time with the people from the ILL, a research facility devoted to neutron physics, and wow. Just wow. And obviously, from time to time we developers get involved in some cool new project no one has done before (hush hush). But the entry barrier is a lot higher. I wouldn’t qualify for research, even though I almost started a PhD and am not entirely stupid, and however good the reviews of my work, I guess I’d still have to do R&D on my own before anyone gave me a big wad of bills to pay for a project of mine.

Getting back to the point: while academia doesn’t seem to have changed much in the way it operates (though the hurdles to get in have changed a lot), the practical side of research has changed dramatically. Global markets mean fiercer competition. To attract good people, a company has to pay them better than its rivals, and to do that it has to make more money per employee. But to make more money per employee, there has to be either very few rivals (a monopoly) or a clear-cut quality advantage. The second strategy requires attracting the best people and taking more risks, while the first requires a better defense.

And that’s where the slant is today: it’s actually a lot cheaper and less risky to work secretly on something new, slap a couple of patents on it to get a de facto monopoly, and live off the dividends it will assuredly bring. That’s the reasoning, anyway.