You Are Not Alone

Employers need their star programmers to be leaders – to help junior developers, review code, perform interviews, attend more meetings, and in many cases to help maintain the complex legacy software we helped build. All of this is eminently reasonable, but it comes, subtly, at the expense of our knowledge accumulation rate.

From Ben Northrop’s blog

It should come as no surprise that I am almost in complete agreement with Ben in his piece. If I may nuance it a bit though, it is on two separate points: the “career” and some of the knowledge that is decaying.

On the topic of career, my take as a 100% self-employed developer is of course different from Ben’s. The hiring process is subtly different, and the payment model highly divergent. I don’t stay relevant through certifications, but through my “success rate”. Clients may like buzzwords, but ultimately they only care about their idea becoming real with the least amount of time and effort poured in (for v1 at least). While reading up and staying current on the latest flash-in-the-pan, as Ben puts it, allows someone like me to understand what the customer wants, it is by no means a requirement for a successful mission. In that, I must say I sympathize, but I look at it the same way I sometimes look at my students who get excited about an arcane new language. It’s a mixture of “this too shall pass” and “what makes it tick?”.

Ultimately, and that’s my second nuance to Ben’s essay, I think there aren’t many fundamentally different paradigms in programming either. You look at a new language, and put it in the procedural, object-oriented, or functional category (or a mixture of those). You note its resemblances to and differences from things you know, and you kind of assume from the get-go that 90% of the time, should you need to, you could learn it very quickly. That’s the kind of knowledge you will probably never lose: the basic bricks of writing a program are fairly static, at least until we switch to quantum computing (i.e. not anytime soon). However fancy a language or a framework looks, the program you write ends up running roughly the same way your old programs used to: it adds, shifts, and xors bytes, it branches and jumps, it reads and stores data.

To me, that’s what makes an old programmer worth it (not that the general population will notice, mind you): experience, and sometimes a math/theoretical CS education, brought us the same kind of arcane knowledge that exists in every profession. You can kind of “feel” that an algorithm is wrong, or how to orient your development before you start writing code. You have a good guesstimate, a good instinct, that keeps getting honed as our field evolves. And we stagnate only if we let it happen. Have I forgotten most of my ASM programming skills? Sure. Can I still read it and understand what it does, and how, with much less sweat than people who’ve never even looked at low-level debugging? Mhm.

So, sure, it’ll take me a while to get the syntax of the new fad, as opposed to some new unencumbered mind. I am willing to bet, though, that in the end what I write will be more efficient. How much value does that have? It depends on the customer or manager. But with Moore’s law coming to a grinding halt, mobile development and its set of old (very old) constraints on the rise, and quantum computing pretty far on the horizon, I will keep on saying that what makes a good programmer good isn’t really how current they are, but what lessons they learn.


Blind Faith

“the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%”

(from PNAS)

“The death in May of Joshua Brown, 40, of Canton, Ohio, was the first known fatality in a vehicle being operated by computer systems.”

( from NYT)

It’s not that buggy software is a reality that strikes me; it’s that people think either that software is magic and therefore requires no attention (users), or that it will get patched soon™ enough (“power users” and devs). The problem is that beta-testing, which is the new 1.0, shouldn’t be optional or amateur.


I Know People Call Me Crazy, But…

Some days in the life of a developer are a rabbit hole, especially rainy Saturdays, for some reason. The following story will be utter gibberish to non-developers, or to people who haven’t followed the news about Apple’s new language: Swift.

You have been warned and are still reading? Gosh, OK, I guess I have to deliver now. No story would be complete without a bit of background about the main protagonist: yours truly.

The story, till that fateful morning.

Despite being a mostly Apple-oriented developer (it’s not exactly easy to change more than 15 years of habits), I freely admit I can be skeptical about some seemingly random changes the company makes. Swift can have lofty goals, like being suited to any task a programmer is given, or being super easy to learn; it still has to deal with the grim reality of what our development work is: understanding what an apparent human being has in their head, and translating it in a way a computer, which is not just apparently stupid, can understand, so that it acts as if it understood the original idea/concept/design, or whatever youngsters call it these days.

It doesn’t help that Swift isn’t source compatible: any program written up to a month ago simply does not compile anymore. Yea. If there’s anything a computer is good at and humans suck at, it’s redoing the same task over and over again. I know I feel resentful when that happens.

For these reasons and more, I’ve held off on switching to Swift as a primary language for my Apple-related work. It doesn’t mean I don’t like it or can’t do it, just that I earn a living by writing code and making sure my customers don’t come back a month later with a little request that forces me to rewrite a bunch of code, because the language changed in the meantime.

The premise

I was excited about some of the things shown during WWDC 2016, including the fact that I may not have to learn JavaScript, or upgrade my old and shabby knowledge of PHP, to write decent backends for some projects I tend to refuse. IBM is hard at work on an open-source and actually pretty decent Swift “web framework” (i.e. a server written in, and extensible in, Swift) named Kitura. I know there are alternatives, but Apple endorsed this one, and judging by the GitHub activity, I have some faith in the longevity of the project.

Armed with my Contact List of Dinosaurs (+3 resist to Baffling), my Chair of Comfortness (-1 Agility, +3 Stamina), and my Giant Pot of Coffee (grants 3 Random Insights per day), I embarked on a quest that was probably meant for Lvl 9 Swift Coders rather than Lvl 10 Generalists, but it’s an adventure, right?

The beginning of the adventure, or the “But”

The aforementioned source instability means that the brave folks at IBM and their external contributors have to deal with rewrites of the API roughly every other week. Sometimes they are big, sometimes small, but it’s non-trivial. At the time yours truly embarks on the journey, Kitura only works with the June 6th snapshot of Swift. Notably, that snapshot is actually different from the one that ships with the betas Apple provided to developers at WWDC. It requires installing a separate toolchain that, while it can work with Xcode, is, to put it bluntly, a pain to work with: special shell variables, and switching Xcode to the new toolchain (which, incidentally, isn’t a project setting, but applies to everything; you have to switch and restart the IDE every time).

After a solid hour of grinding through the process, the very simple Hello World sample finally loads in Safari, and our hero grins when the URL http://localhost:8090/hello/Zino spits out

Hello, Zino!

Incidentally, it can also do the same thing using http://localhost:8090/hello/?name=Zino and its POST variant, because why the hell not, while I’m at it?
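For the curious, the routes above boil down to very little code. Here is a minimal sketch, from memory of the Kitura API of that era (names may well have shifted since, source stability being what it is):

```swift
import Kitura

let router = Router()

// Path parameter variant: GET /hello/Zino
router.get("/hello/:name") { request, response, next in
    let name = request.parameters["name"] ?? "World"
    try response.send("Hello, \(name)!").end()
}

// Query string variant: GET /hello/?name=Zino
router.get("/hello") { request, response, next in
    let name = request.queryParameters["name"] ?? "World"
    try response.send("Hello, \(name)!").end()
}

Kitura.addHTTPServer(onPort: 8090, with: router)
Kitura.run()
```

Twenty-ish lines for a working web server; whatever my gripes with the toolchain, that part is genuinely pleasant.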

Whereupon, invigorated by the victory over the Guardian of the Door, we enter the rabbit hole proper and start making much longer headers.

Once that particular beast is slain, I decide that experimenting with passing arguments through every possible means at my disposal is childish, and set my sights on middleware, and more specifically authentication. For those of you unfamiliar with the topic: good for you. It’s a mess that no amount of reading will make clearer. Encryption, storage, and tokens feature prominently, and the more you read about it, the more you go “huh?”. Best practices are as varied as they are counterintuitive, and quite frankly, the reason most websites out there either fail spectacularly at it or resign themselves to trusting third parties like Facebook, Google, or Twitter is that, any way you look at it, it’s a tradeoff between security, sanity, and ease of use. And you can’t satisfy all three without going insane. It basically requires a complete rewrite of either how the web works or how humans work. And we know both are really hard to do.

Since the frameworks are in the middle of a transition to something that is already obsolete anyway, you can tell that some dependencies aren’t as heavily used as others. But let’s be clear on one thing: I do not blame anyone. It’s a stern chase, and a classic catch-22: why pour time and effort in beyond a certain point, since Swift will break everything again soon? Hopefully, now that source compatibility is on the agenda, we’ll be able to catch up.

But that’s not the point of the adventure, says the now-tired Zino; it’s to gauge both my capacity to learn new tricks and my adaptability, so let’s pretend it actually serves a purpose! HTTP Basic and Digest are quickly worked out, and I turn my sights to making my sample code actually do something realistic, like talking to a database. That’s what backends do, you know? They talk to things like PostgreSQL instances and construct responses to queries, isolating the database itself from prying eyes.
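For those wondering what HTTP Basic actually involves: the client sends an Authorization header containing “user:password” in base64. A hypothetical helper (not Kitura’s actual middleware, just an illustration of the mechanism, in today’s Swift syntax) could look like this:

```swift
import Foundation

// Decode an HTTP Basic `Authorization` header value into its
// (user, password) pair. Returns nil on any malformed input.
func parseBasicAuth(_ header: String) -> (user: String, password: String)? {
    let prefix = "Basic "
    guard header.hasPrefix(prefix),
          let data = Data(base64Encoded: String(header.dropFirst(prefix.count))),
          let decoded = String(data: data, encoding: .utf8),
          let colon = decoded.firstIndex(of: ":")
    else { return nil }
    return (String(decoded[..<colon]),
            String(decoded[decoded.index(after: colon)...]))
}
```

Note the total absence of encryption: Basic is only as safe as the TLS pipe it travels through, which is part of why the whole topic is such a mess.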

Medieval or primeval, call it what you wish

A not-so-quick trawl through the interwebs reveals almost nothing that deals with direct database access. There is simply no demand for it yet. Swift programs are currently user-facing apps on iOS, and therefore do not need to interact with servers beyond happily chatting away with the kind of API I am currently trying to build. So it starts looking like I will have to actually create a database driver to test my abilities. Quite honestly, I have been at it for a long time already, if you look at the result, so writing a PDO-like framework will have to wait. The only alternative that seems reasonably maintained works with CouchDB, and there is absolutely no frigging way I am going to “embrace the web” by ditching a performance-oriented piece of software for a JavaScript-derived “storage format”. Call me a snob, but no thank you.

Then I remember I have successfully used a SQLite wrapper in a Swift project. SQLite may not be PostgreSQL, but at least it tries. The library in question is SwiftyDB, and I highly recommend you take a peek at it if you want a Swift class-to-SQLite (and back) mapping that works as advertised. It doesn’t handle complex relationships yet, but for those who, like me, don’t want to deal with the idiosyncrasies of the SQL language and the various incompatibilities it entails, it’s quite a nifty piece of software.
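From memory, using it looks roughly like this (treat the exact names as approximate and check the project’s README; the Note class is my own example):

```swift
import SwiftyDB

// A storable model: an NSObject subclass conforming to Storable,
// with a parameterless initializer so the library can rebuild it.
class Note: NSObject, Storable {
    var title: String = ""
    var body: String = ""
    override required init() { super.init() }
}

let database = SwiftyDB(databaseName: "notes")

let note = Note()
note.title = "Adventure log"
note.body = "Slain: one Guardian of the Door"

// Objects go in and come back out without a single line of SQL.
database.addObject(note, update: true)
let stored = database.objectsForType(Note.self)
```

Exactly the kind of no-ceremony persistence a small web service wants.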

So, thinks little me, all there is to do is port SwiftyDB to Swift 3 (moar XP!). Too easy; why not also port it to the package manager Kitura and Apple use, instead of bloody CocoaPods (don’t get me started on pods: a good idea, a necessity even, but implemented in a completely baffling and fragile way)? Port it to the Future, kind of thing. Yea, it’s only 18:00, and it will be a good source of XP as well. From medieval Swift to the Modern Era!

Modern, my %€!&@#$~!

The Swift Package Manager is the Nth re-invention of something we’ve been doing since forever in every language, and has its own particular twists, because it’s new, and why not do things in a new way?

It relies on git. Heavily. The version of the library you’re using as a dependency is predicated on its git tags. It clones whatever URL you said the manager would find it at, and looks for the version you want, based on the ubiquitous major, minor, and build numbers. If you want letters in there, or a versioning system that uses some other scheme, you’re screwed. Yup. Well, that’s not a concession that warrants much outrage, even if more than half of my versioning schemes don’t work like that. Never mind, says I; I’ll bottle the cry for freedom and get to it.

Did I mention that I had git issues earlier in the process? Apparently, git 2.9.0 has a bug that prevents the package manager from working correctly, and you can’t use dependencies that aren’t git-based. That took a while to figure out. But figure it out I did, and everything’s dandy again.

So, let’s take a look at SwiftyDB. It depends on TinySQL (by the same guy), which depends on sqlite3. A handful of Swift files in each of the two frameworks, and a lib that is installed on every Mac out there. What could go wrong?

The sad story of the poor toolchain that was shoved aside

In order to use C functions (which sqlite is made of), you need to trick Swift into importing them. This is done through the use of a module map, which is basically a file that says “if the developer says they want to use this package, what they really mean is that you should include this C header and link against that library”. The file sqlite3.h lives in /usr/include, so I start with that. Except it is incompatible with the Swift development snapshot, for Reasons. Since the package build system requires git tags and whatnot for dependencies, it’s kind of a long process to make a small modification and recompile, but after half an hour I manage to find the right combination of commands to get a working module map. Short story: you need to use the .sdk header, not the system one. I went through gallons of coffee and potential carpal tunnel syndrome so that other adventurers don’t have to. You’re welcome.
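For the record, the module map that ended up working looked something like this (the module name is my choice, and the header path is the usual Xcode SDK location; adjust to your setup):

```
module CSQLite [system] {
    header "/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sqlite3.h"
    link "sqlite3"
    export *
}
```

The only difference from the naive version is the header line; pointing it at /usr/include/sqlite3.h is what the snapshot chokes on.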

Once that little (and awesomely frustrating) nugget has been dug up, all that’s left to do is port TinySQL, then SwiftyDB.

The package system is quirky. It requires a very specific structure and some git shenanigans, but once you’ve got where everything should go, it’s just a matter of putting all the Swift files in Sources and editing your Package.swift file carefully. It’s a shame that file isn’t more flexible (or expressive), but what are you going to do?
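For reference, a Swift 3-era Package.swift for such a port is mercifully short; the URL below is a placeholder, not the actual repository:

```swift
import PackageDescription

let package = Package(
    name: "SwiftyDB",
    dependencies: [
        // Placeholder URL. Versions are resolved from git tags,
        // hence the major/minor scheme and nothing else.
        .Package(url: "https://example.com/TinySQL.git", majorVersion: 1)
    ]
)
```

The manifest being actual Swift code is cute, but as noted, it doesn’t buy you much expressiveness in practice.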

Now, the Swift 2 to Swift 3 migration isn’t as simple and self-explanatory as Chris lets on, let me tell you. Sure, the now-mandatory labels that were optional before aren’t a big deal, but they throw the type inference system and the error checker into a fugue state that spouts gibberish such as “can’t find overloaded functor” or some such, when I, who haven’t spent that much time writing the SwiftyDB code, can see there’s only one possible match.
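To give non-Swift readers an idea of the label change, here is a toy example (mine, not from SwiftyDB) of the calling-convention shift:

```swift
// Swift 2: the first argument label was omitted at the call site:
//   greet("Zino", greeting: "Hello")
// Swift 3: every label is required unless suppressed with `_` in
// the declaration, so the same call becomes:
//   greet(name: "Zino", greeting: "Hello")
func greet(name: String, greeting: String) -> String {
    return "\(greeting), \(name)!"
}

let line = greet(name: "Zino", greeting: "Hello")
```

Trivial in isolation; across two frameworks’ worth of call sites, with the diagnostics in a fugue state, less so.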

Anyways, after much massaging and coaxing, and even cajoling, my SwiftyDB fork is finally swift-package-manager-compatible. With glazed eyes, I look up at the clock to see it’s now 21:00, and on a friggin’ Saturday, too. Very few lines of actual code were involved, and it still took way longer than expected. At this point, the hero of the story wonders if he should go on with his life, or if he should at least try to make everything explode by incorporating SwiftyDB into the sample project that sits there, stupidly saying hello to everyone who types in the right URL…

Aren’t you done yet? I’m hungry!

Sorry, sorry. The end was totally anti-climactic. The package integration worked on the first go, and SwiftyDB delivered without a hitch. I now have a database that holds stuff, and a simple web service that supports authentication and lets you store “notes” (aka blurbs of whatever text you want to throw at it) without so much as a hiccup.

But, as I’m sitting here, on the body of that slain foe, recounting my adventure for the folks who have no idea what the process of such an endeavor is, and intermittently looking at a boiling pot containing my well deserved dinner, I wonder if anyone will see what makes this an adventure: the doubts, the obstacles, the detours, and finally, hopefully, the victory. All that for something that isn’t even needed, or for all I know wanted, by anyone but a lone developer looking for an excuse to take on a challenge.

If, for reasons of your own, you want to use my work in your own package manager experiment, be it with or without Kitura, all you have to do is include this as a dependency:

.Package(url: "", majorVersion: 1, minor: 2)

Currently, it works with the same dependencies as Kitura (snapshot 06 06), and I may even update it along with it.


WWDC 2016, or close to that

My first WWDC was 15 years ago. I was one of a few youngsters selected for the student scholarship, and back in the day, there were a lot of empty seats during the sessions. It was in San Jose, and my friend Alex was kind enough to let me crash on his couch for my very first overseas “professional” business trip. Not that I made any money on that trip, but it was the beginning of my career and I was there in that capacity. A month later, I would be hired at Apple in Europe, and Alex would be hired by the Californian HQ a few years later, but back then, what mattered was to be a nerd in a nerd place, not only allowed to nerd out, but actively encouraged to do so.

I was 20, give or take, and every day I would have lunch with incredible people who not only shared my love of the platform and the excitement at what would become so huge (Mac OS X, Cocoa, and Objective-C), but would also share their experiences, and bits and pieces of their code, freely. For the first time in my short professional life, I was treated as a peer. I met the people who came out with the SETI@home client and were looking for a way to port it from Linux to 10.0 (if you’ve never seen 10.0 running, well… lucky you), I exchanged tricks with the guy who did the QT4Java integration, and I met my heroes from Bare Bones, to name a few.

Of course, the fact that I was totally skipping university didn’t make me forget that, like every science, programming flourishes best when ideas flow easily. No one thought twice about opening a laptop and delving into code to geek out about a specific bug or cool trick. I even saw, and maybe left a few lines of code in, a Lockheed Martin hush-hush project… Just imagine that!

Over the years I went regularly, then less so, and in recent years not at all. It’s not a “it was so much better before” thing so much as a slow misalignment with what I wanted to get out of it. Let’s get this particular thing out of the way, so that I can move on to more nerding out.

Randomness played a big part for me. I met people who were into the platform but not necessarily living off of it: academics, server people, befuddled people sent there by their company to see if it was worth the effort of porting their software to the Mac. It was that easy to get into the conference. These days, I dare you to find an attendee who has a paid ticket and isn’t making a living from developing iOS apps (indie, contractor, or in-house). The variety of personalities, histories, and uses of the platform is still there, but there’s zero chance I’ll see an astronomer who happens to develop as a hobby… As a side note, the chance that a presenter (or Phil Schiller, who totally did) will give me his card and have a free conversation about a nerdy thing, secure in the knowledge that we were part of a small community and would therefore not abuse each other’s time, is very close to zero as well. Then again, who else was interested in using the IrDA port of the Titanium to talk to obscure gadgets?

So, it may have felt a little bit like a rant, but it’s not. I recognize the world has moved on, and Apple went from “that niche platform a handful of enthusiasts keep alive” to the biggest company on Earth, and there is absolutely no reason why they should treat me differently for that past role, when there are so many talented people out there who would probably both benefit more from extra attention and prove a more valuable investment. Reminiscing brings nostalgia, but it doesn’t mean today is any worse than an imagined golden age, when the future of the platform was uncertain, and we were reminded every day by the rest of the profession that we were making a mistake. Today is definitely better, even if that means I don’t feel the need to go to WWDC anymore.

So, back to this year: the almost-live nature of the video posting meant that I coded by day and watched sessions by night, making it almost like those days when sleep was few and far between, on the other side of the world. I just wasn’t physically in San Francisco, and instead enjoyed the comfort of my couch, the possibility of pausing the video to try out a few things the presenter was talking about, and the oh-so-important bathroom break.

All in all, while iOS isn’t anything new anymore, this year in particular I was kind of reminded of the old days. It feels like we’re on a somewhat mature platform that doesn’t revolutionize itself every year anymore (sorry, users, but it’s actually better this way), the bozos doing fart apps are not that prominent anymore, and we can get to some seriously cool code.

2016 is all about openness. Gone are the weird restrictions of tvOS (most of the frameworks are now on par with the other platforms, and Multipeer Connectivity has finally landed). watchOS is out of beta. We can plug stuff into first-party apps that have been walled off for 8 years. Even the Mac is getting some love, despite the fact it lost a capital M. And for the first time in forever, we have a server session! OK, it is a Big Blue man on stage, but we may have a successor to WebObjects, folks! What a day to be both a dinosaur and alive.

Not strictly part of the WWDC announcements, the proposed changes to the App Stores prefigure some interesting possibilities for people like me, without an existing following or the capital to pay for a 6-month indie project. Yes, yes, I know. There are people who launch new apps every day. I’m just not one of those people. I enjoy the variety of topics my customers confront me with, and I have very little confidence in my ability to manage a “community” of paying customers. Experience, again, and maybe I’ll share those stories someday.

Anyways, Swift on Linux, using frameworks like Kitura or Perfect right now, or the future WebObjects 6.0, might allow people like me, who have a deep background in languages with more than one type, to write a decent backend fairly rapidly and consistently, and, who knows, maybe even a front end. Yes, I know Haskell has let you do similar things for a while, but for some reason my customers are kind of daunted by the deployment procedures, and I don’t do hosting.

The frills around iMessage stickers don’t do much for me, but being able to use iMessage to have a shared session in an app is just incredible. So. Many. Possibilities. “Completely underrated in the post-conference chatter” doesn’t even begin to describe it. Every single turn-based game out there, playable in an iMessage thread. I’ll leave that out here. See? I can be nice…

MacOS (yes, I will keep using the capital M, because it makes more sense to me) may not get a flurry of shinies, but it benefits largely from everything done for iOS, and Xcode may finally make me stop pining after CodeWarrior, or AppCode, or any other IDE that doesn’t (or didn’t) need to be prodded to do what I expect it to do. Every time I have to stop writing or debugging code to fix something that was working fine yesterday, I take a deep breath. Maybe this year will grind those disruptions to a halt, or at least limit them to the critical phases of the project cycle.

I like my watch. I may get to like it without having to express an almost-shame about it, come September. Actually, while I’m not tempted in the least to install iOS 10 on any of my devices just yet, I might have to do it just to get a beta of the non-beta version of watchOS.

In short, for not-quite-defined reasons, I feel a bit like I did 15 years ago during my first WWDC. It looks like Apple is shifting back to listening to those of us developers who aren’t hyper high profile, that the platform is transitioning to Swift at a good pace rather than bulldozing it over our dead bodies, and that whatever idea anyone has, it’s finally possible to wrap your head around all the components, if not code them all yourself, using a coherent approach.

Hope feels good, confidence feels better.

Mood : contemplative
Music : Muse - Time is Running Out

UIkit and AppKit unification

The latest fad in tech punditry is to claim that the barrier to having iOS apps on the Mac is that these two graphical frameworks are so different that they make the work of developers too complex. This is false.

Let’s start with the elephant in the room: an app generally has to bring more to the table than its UI, and there the coverage is total. Fewer than 5% of classes and methods are available on iOS but not on the Mac; on the other hand, the same frameworks have more methods on the Mac side. So for everything but the UI, “porting” only means recompiling. If your MVC is well implemented, that part at least won’t cause any issues.

The UI part is a mixed bag, but the paradigms are the same nowadays. That wasn’t the case until fairly recently, but table views are now cell- or view-based, the delegates work as expected, etc., etc. As an experiment, on uncomplicated examples, I did a search and replace only, no tweaking, and it worked.
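A sketch of what that search and replace amounts to in shared code, using conditional compilation (the framework type names are real; the aliases and the function are my own illustration):

```swift
#if os(iOS)
import UIKit
typealias PlatformColor = UIColor
typealias PlatformFont  = UIFont
#else
import AppKit
typealias PlatformColor = NSColor
typealias PlatformFont  = NSFont
#endif

// Anything below the view layer can now be written once against
// the aliases and compiled unchanged for both platforms.
func warningColor() -> PlatformColor {
    return PlatformColor.red
}
```

The pairs line up so well (UIColor/NSColor, UIFont/NSFont, UITableView/NSTableView…) that a mechanical substitution covers most of it.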

Now, of course, the issue isn’t technical: the iOS mono-window, mono-view model is wasted on the Mac. A lot of applications take the “landscape iPad” paradigm to make it less obvious, including Apple’s: you have the sidebar and the main view, and it works just like the master/detail project template that comes with Xcode.

Porting a successful app from iOS to the Mac is indeed a bit of work. The Mac is window-centric and iOS is view-centric. Some things you cannot do on iOS are possible on the Mac, like covering parts of your UI, or dragging and dropping elements. It is definitely a very different way to think about the user experience, and the design choices are certainly less constrained and less obvious. But there is no real technical hurdle, unless the vast majority of your app logic lives in the view controllers rather than in a separate codebase. And then again, the Mac now has NSViewController, which works exactly like (who would have guessed) UIViewController, and apps can run in full-screen mode, so who knows?

The tools (Xcode, IB, etc) are the same. The non UI frameworks are the same. The UI frameworks are similar where it makes sense (putting stuff on screen) and dissimilar where you have to (input methods and window management). That’s it.

Now, you can definitely agree that the Mac app landscape is very different from the iOS one. People are used to giant things that install other things everywhere, demos, shareware, unrestricted access to the filesystem, the possibility to copy and paste anything from anywhere, or drag and drop anything from anywhere and put it somewhere else, where it will do something. They have multiple apps that come and go, modal dialog boxes that show up, and pieces of stuff like palettes that they can arrange any damn way they please, thank you very much. For all these reasons, designing a successful Mac app is challenging. Big screens, small screens, people who like lots of little windows or a few big windows, people who use Spaces, people who use keyboard shortcuts more than the mouse, people who don’t know how menus work, people who have a gazillion menu items, fonts that can be changed system-wide, color schemes: those are all valid reasons to dread an attempt at making an app that will appeal to most people.

But you don’t get to play the technical-hurdle card. All these interactions have been studied, refined, and solved over 30 years of graphical interfaces. You have to choose what will work best for your needs, and yes, that is hard. But it’s not about code.