WWDC 2016, or close to that

My first WWDC was 15 years ago. I was one of a few youngsters selected for the student scholarship, and back in the day, there were a lot of empty seats during the sessions. It was in San Jose, and my friend Alex was kind enough to let me crash on his couch for my very first overseas “professional” business trip. Not that I made any money on that trip, but it was the beginning of my career and I was there in that capacity. A month later, I would be hired at Apple in Europe, and Alex would be hired by the Californian HQ a few years later, but back then, what mattered was to be a nerd in a nerd place, not only allowed to nerd out, but actively encouraged to do so.

I was 20, give or take, and every day I would have lunch with incredible people who not only shared my love of the platform, and the excitement at what would become so huge – Mac OS X, Cocoa, and Objective-C – but also shared their experiences (and bits and pieces of their code) freely, and for the first time in my short professional life, I was treated as a peer. I met the people who came out with the SETI@Home client and were looking for a way to port it from Linux to 10.0 (if you’ve never seen 10.0 running, well… lucky you), I exchanged tricks with the guy who did the QT4Java integration, and I met my heroes from Barebones, to name a few.

Of course, the fact that I was totally skipping university didn’t make me forget that, like every science, programming flourishes best when ideas flow easily. No one thought twice about opening a laptop and delving into code to geek out about a specific bug or cool trick. I even saw, and may have contributed a few lines of code to, a hush-hush Lockheed Martin project… Just imagine that!

Over the years I went regularly, then less so, and in recent years not at all. It’s not an “it was so much better before” thing as much as a slow misalignment between the conference and what I wanted to get out of it. Let’s get this particular thing out of the way, so that I can move on to more nerding out.

Randomness played a big part for me. I met people who were into the platform, but not necessarily living off of it. Academics, server people, befuddled people sent there by their company to see if it was worth the effort of porting their software to the Mac; it was that easy to get into the conference. These days, I dare you to find an attendee who has a paid ticket and isn’t making a living developing iOS apps (indie, contractor, or in-house). The variety in personalities, histories, and uses of the platform is still there, but there’s zero chance I’ll see an astronomer who happens to develop as a hobby… As a side note, the chance that a presenter (or Phil Schiller, who totally did) will give me his card and have a free conversation about some nerdy thing, secure in the knowledge that we were part of a small community and therefore wouldn’t abuse each other’s time, is very close to zero as well. Then again, who else was interested in using the IrDA port of the Titanium to talk to obscure gadgets?

So, that may have felt a little bit like a rant, but it’s not. I recognize the world has moved on: Apple went from “that niche platform a handful of enthusiasts keep alive” to the biggest company on Earth, and there is absolutely no reason why they should treat me differently for that past role, when there are so many talented people out there who would probably both benefit more from extra attention and prove a more valuable investment. Reminiscing brings nostalgia, but it doesn’t mean today is any worse than an imagined golden age, when the future of the platform was uncertain and the rest of the profession reminded us every day that we were making a mistake. Today is definitely better, even if that means I don’t feel the need to go to WWDC anymore.

So, back to this year: the almost-live nature of the video posting meant that I coded by day and watched sessions by night, making it almost like those days on the other side of the world, when sleep was few and far between. I just wasn’t physically in San Francisco; instead I enjoyed the comfort of my couch, the possibility of pausing the video to try out a few things the presenter was talking about, and the oh-so-important bathroom break.

All in all, while iOS isn’t anything new anymore, this year in particular I was reminded of the old days. It feels like we’re on a somewhat mature platform that doesn’t revolutionize itself every year anymore (sorry users, but it’s actually better this way), the bozos doing fart apps aren’t that prominent anymore, and we can get to some seriously cool code.

2016 is all about openness. Gone are the weird restrictions of tvOS (most of the frameworks are now on par with the other platforms, and Multipeer Connectivity has finally landed). watchOS is out of beta. We can plug stuff into first-party apps that have been walled off for 8 years. Even the Mac is getting some love, despite the fact it lost a capital M. And for the first time in forever, we have a server session! OK, it was a Big Blue man on stage, but we may have a successor to WebObjects, folks! What a day to be both a dinosaur and alive.

Not strictly part of the WWDC announcements, the proposed changes to the App Stores prefigure some interesting possibilities for people like me, without an existing following or the capital to pay for a six-month indie project. Yes, yes, I know. There are people who launch new apps every day. I’m just not one of those people. I enjoy the variety of topics my customers confront me with, and I have very little confidence in my ability to manage a “community” of paying customers. Experience talking, again, and maybe I’ll share those stories someday.

Anyway, Swift on Linux, using frameworks like Kitura or Perfect right now, or the future WebObjects 6.0, might allow people like me, with a deep background in languages that have more than one type, to write a decent backend fairly rapidly and consistently, and, who knows, maybe even a front end. Yes, I know Haskell has allowed you to do similar things for a while, but for some reason my customers are kind of daunted by the deployment procedures, and I don’t do hosting.
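To give a taste, here is more or less the canonical hello-world from Kitura’s own samples at the time of writing (a minimal sketch; the port and the greeting are arbitrary, and the API is young enough that it may still shift):

```swift
import Kitura

// Create a router and register a handler for GET /
let router = Router()
router.get("/") { request, response, next in
    // Send a plain-text greeting back to the client.
    response.send("Hello from Swift on Linux!")
    next()
}

// Start an HTTP server on port 8080 and run the event loop.
Kitura.addHTTPServer(onPort: 8080, with: router)
Kitura.run()
```

A dozen lines for a working HTTP endpoint, in a language with a real type system; that’s the part that makes a dinosaur like me hopeful.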

The frills around iMessage stickers don’t do much for me, but being able to use iMessage to host a shared session in an app is just incredible. So. Many. Possibilities. Judging by the post-conference chatter, “completely underrated” doesn’t even begin to describe it. Every single turn-based game out there, playable in an iMessage thread. I’ll leave that idea out here for anyone to grab. See? I can be nice…
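For the curious, here is roughly what the mechanics look like with the new Messages framework (a hedged sketch; the game, the move encoding, and the caption are all hypothetical). Each turn rides along an MSSession, so the same bubble in the thread gets updated instead of the conversation being spammed with new ones:

```swift
import Messages

// Sketch: sending a turn of a hypothetical turn-based game
// from inside an iMessage app extension.
class GameMessagesViewController: MSMessagesAppViewController {

    func sendTurn(moveDescription: String) {
        guard let conversation = activeConversation else { return }

        // Reuse the session of the selected message so the existing
        // bubble is updated rather than a new one inserted.
        let session = conversation.selectedMessage?.session ?? MSSession()
        let message = MSMessage(session: session)

        // Encode the (made-up) game state in the message URL.
        var components = URLComponents()
        components.queryItems = [URLQueryItem(name: "move", value: moveDescription)]
        message.url = components.url

        // Give the bubble something to display.
        let layout = MSMessageTemplateLayout()
        layout.caption = "Your move!"
        message.layout = layout

        // Stage the message in the conversation's input field.
        conversation.insert(message) { error in
            if let error = error { print(error) }
        }
    }
}
```

The other player taps the bubble, your extension decodes the URL, and the whole turn loop lives inside the thread. Chess, Scrabble clones, werewolf: all of it, without a dedicated server.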

MacOS (yes, I will keep using the capital M because it makes more sense to me) may not get a flurry of shinies, but it benefits largely from everything done for iOS, and Xcode may finally make me stop pining after CodeWarrior, or AppCode, or any other IDE that doesn’t (or didn’t) need to be prodded into doing what I expect it to do. Every time I have to stop writing or debugging code to fix something that was working fine yesterday, I take a deep breath. Maybe this year will grind those disruptions to a halt, or at least confine them to the critical phases of the project cycle.

I like my watch. Come September, I may get to like it without having to express an almost-shame about it. Actually, while I’m not tempted in the least to install iOS 10 on any of my devices just yet, I might have to do it just to have a beta of the non-beta version of watchOS.

In short, for reasons not quite defined, I feel a bit like I did 15 years ago during my first WWDC. It looks like Apple is shifting back to listening to those of us who aren’t hyper-high-profile developers, that the platform is transitioning to Swift at a good pace rather than bulldozing it over our dead bodies, and that whatever idea anyone has, it’s finally possible to wrap your head around all the components, if not code them all yourself, using a coherent approach.

Hope feels good, confidence feels better.

  
Mood: contemplative
Music: Muse - Time is Running Out

The New Space Age

If you know me a little bit, you know I’m a sucker for space stuff, and research in general. Doing something that has never been done before, or furthering an agenda that goes in that direction, has always given me goosebumps in an awesome way.

2014 has been a wonderful year for space buffs, but two very recent missions have hopefully recaptured the public’s interest in everything interplanetary: Rosetta/Philae and Orion.

“It’s like hitting a bullet with a smaller bullet, while wearing a blindfold, riding a horse”

In March 2004, some people thought it would be a cool thing to achieve. Rosetta was supposed to get close enough to a comet to take detailed pictures and perform analyses; and while at it, why not try to land on it too, with Philae?

Think about it: a route spanning 6.4 billion kilometres over 10 years, to hit a rock 4 kilometres in diameter (roughly 1/3200th the diameter of Earth). Mind-boggling. And yet it was done, in the name of science. There are a lot of reasons to do such a thing, and the ESA explains them nicely.

“To Infinity and Beyond!”

Earth isn’t doomed just yet (even though it’s getting there), but we all know in a corner of our minds that we will have to leave it for another planet at some point in the future. Almost 50 years after our first baby steps in interplanetary travel and the Apollo program, NASA tested a new craft designed to take us back to the Moon, and even to Mars. Even if it flew empty this time, it signals a commitment to a spacefaring culture once more. Sure, we are nowhere near a solution for interstellar travel, but when we start colonizing the Solar System in earnest, we’ll be closer to the stars.

THIS is why funding research is important

Does it make any difference today to know what that comet is made of and what it has seen during its travels? Does landing on Mars allow me to have a summer house there? Of course not. But our grandchildren will be thankful we didn’t spend too much time navel-gazing as if the universe were restricted to Earth.

  

Frauds in the US Patent System

Remote Workers Are A Pain To Manage (sic)

This is not exactly news anymore, but a fraud scandal was uncovered a few days ago at the US patent office.

This hits me on two different levels, completely unrelated to one another: work-at-home mechanics, and the actual concept of patenting stuff.

My distrust of any patent system (especially for software) in this day and age has popped up here once or twice already.

The work-at-home side of this story is distressing, to say the least. In the last 15 years, I have worked maybe a couple of months total in an actual office with actual people. It’s no secret I don’t enjoy it, and it’s not due to any of the fine folks I was sitting next to. It’s just that my habit of cursing loudly at my screen, and my need for a total lack of distraction when I’m focusing on a particularly thorny problem, make having people sitting right next to me a difficult fit.

But because of stories like this, and because it is so easy to cheat bosses/customers out of actual working time when they don’t have their eye resting directly on you, working from home is sometimes a very real deal-breaker in my interactions with customers. Trust issues aside, on an hourly basis I get more than regular employees, and I can Do It in my bathtub! Holy granola! From the outside, it looks like I have some totally unfair advantages over everyone else.

As Seen From The Other Side

Truth be told, working from home is hard.

Let’s start at the beginning of the day: it’s so easy to snooze the alarm and go back to bed. Really. Especially if you have been working late the day before. Then, whatever your morning routine might be, taking your time to read the news, catch up on social stuff, etc., is tempting. Then you realize it’s really late and you might have to cram everything in before lunch, which could last longer because you’re enjoying it in front of the TV, etc., etc…

Basically, if you have any procrastinating tendencies, they are all very easy to succumb to. Structure helps: having “office hours” to simulate the real thing, or planning your customer phone calls early in the morning, or at any other time you might be tempted to do anything but work. Life hacks such as these are easy to implement and adhere to, and everyone should know themselves well enough to know which ones matter and how their personal procrastinating tendencies surface. Because the key thing about working from home isn’t replicating a workplace at home.

To be able to work from home, you need to know exactly how your brain works.

To take the only example I know well enough: I tend to be very code-efficient right after I wake up. So I have two known times when I cram in my most urgent/important stuff: early morning, and after my nap. Yes, I take naps, partly because of this, and in a regular office that’s generally not the norm. After roughly two straight hours of coding, my mind tends to wander. I start checking news and chatting with people. So I use that time to do my support/client stuff. But even that is tiring, so I generally cap it at an hour. Then I do the code that’s less neuron-consuming, which might (or might not) get me in the zone again for more important stuff.

The important part of all this is that I spread my work hours wider than strictly necessary. I usually have an 8-to-8 work day, and I sometimes work a few hours on weekends as well. Because I can, and because it doesn’t encroach on other things I consider vital. And during the day, I have free time to run errands, have a cuppa with people, etc. The very fact it’s spread out a bit means I can contract it if necessary to stay on a deadline that is whistling dangerously close, or expand it a bit if I have the time and am feeling under the weather or uninspired.

The Root Of The Problem

Applying “office rules” at home seems completely stupid and backwards to me. Either you give people the option to work from home until they can no longer achieve what they said they would, whatever way they want to organize themselves, or you force them to be under scrutiny in an office. Giving them restrictions in their own homes will lead to resentment and “cheating”, and there should be no shame in saying, after a while, “look, it doesn’t seem to work when you do it remotely, come back to the office”, to potentially be tried again at a later date. The remote-workforce problem embodies, to me, a fundamental flaw in how people’s work is valued: results vs time.

It’s perfectly OK for people whose job is to be available (to interact with customers who may or may not call, for instance) to be paid/valued in good part relative to the time they spend on the job. But for developers, to take an example I know only too well, it’s all about what we deliver. Time comes second.

Let me take an example. Company A contacts me for a contract on an app that displays news for their product and allows for support contact and social sharing. The very first question they ask is how long it’s going to take. Which is fine and normal. But based on that, they derive the amount of money they will assign to the project. While my time is as valuable as anyone’s, we can all agree that there are some things I will do faster than others, given my level of experience (to take seniority out of the equation). If it takes a colleague of mine one month to do that app, and I take only two weeks, should I be paid less? No. But the second question they ask is “what is your daily rate?”. So in essence, if I charge a fixed rate close to the market’s, I will be paid half as much for the same job (same rate, half the days), and if I double my rate to compensate, I probably won’t get the contract. How is that fair?

I can hear sniggering in the back: “why don’t you just SAY you will take a month?”. The ethical value of that comment is left open for debate.

But once again, we circle back to the problem of assigning a value to someone’s work, and the perversity of contemplating cheating to “fix an intrinsic wrong”. I refuse to think every single human on the planet is prone to cheating in every circumstance. Most of the time, mostly honest people who try to game the system, doing less while earning the same financial compensation, feel cheated themselves. It is indeed an HR problem, but not in a “let’s put more restrictive measures in place to increase productivity” way; more in a “let’s see why these experts in their fields feel like they aren’t paid enough” way. And remove the actual bad apples, based on results.

  

[WWDC14] Thoughts

I won’t go into details, the WWDC keynote has been covered far and wide.

  • New Look : √
  • New APIs : √
  • New ways to do old things : √
  • New Language : errrrr √

Response among the community was unanimous: this is Christmas come early. And it’s true that for us developers, there is a lot to be excited about. The new “official” way to communicate with other apps through the extensions mechanism is awesome, the integration of TestFlight will make a lot of things easier, especially for us small teams, and the new language will hopefully make us more productive (yay, less code to write).

There are some blurry or grey areas about these changes that will probably cause some problems, but hey, we’re Da Dream Team, right? We’ll manage.

The only thing that struck me as a slight cognitive dissonance is that, outwardly, Apple publicly recognizes our (huge) role in the success of the platform, but changes essentially nothing in the way we are treated. I am definitely not asking for exclusive access to Apple’s thought process regarding what’s secretly being worked on; I think opening up betas to pretty much everyone defuses the rumor mill, and might help get better .0 releases.

Since we are the people who make the “normals” want to get an iPhone/iPad, why is it so hard to have any handle on how we do it?

Xcode keeps getting better, but there is still no way to expand its capabilities, or adapt it slightly to the way our brains handle code-writing. Third-party IDEs (like AppCode, for instance), which may not be perfect by any stretch of the imagination but still give us more flexibility, have a hard time adapting to the internals of the build process. We still have proprietary/opaque file formats for vital parts of development (I’m looking at you, XIBs and Core Data models). CocoaPods has become mainstream, but is still iffy to integrate (and might break).

On the social side of things, since WWDC is harder to get into than a Prince concert, it’s the same deal: it’s Apple’s campus, or community-based things (read: no help from Apple whatsoever). Kitchens? Local dev events? Access to labs? If you’re not in California, tough luck.

So, yes. We are the main booster for the success of the platform, but we have absolutely no handle on things, in any way, shape, or form.

Am I excited that we get shiny new things to play with? Sure. Is my head buzzing with ideas? Yup.

But I am also a bit bitter that, sometimes, it feels like we’re not working together.

  

Twitter, AppNet, and Other Social Thingummies

I became a backer of AppNet as soon as I became aware of it. I love the Twitter concept of micro-blogging, and I use it a lot. And I’d rather pay some money to have it work than rely on a shady ads/user-data-selling/unknown revenue stream for the parent company. That’s the short reason, and it’s more a matter of principle than anything else.

That being said, when a new service comes to life, especially one backed by developers, the first weeks/months are incredibly exciting. It feels like every discussion is interesting, every feature request gets implemented, and you’re part of something that’s getting off the ground.

But some discussions led to some critical thinking on my part. One in particular ended with “… so app.net is like twitter but not free, right? And there’s far less people on it, right? So why bother?”

Let’s pretend for a minute that it’s as simple as that, and reduce the question to “why bother being on a Twitter-like service anyway?”

When I started using the service in 2007, there were mostly techies and geeks on it. At the time, I heard it described as “the virtual office”: some place where all the people in your field were sharing thoughts, ideas, rants… And it’s true that it is something akin to conversations you overhear from your desk and can join in if you feel like it.

But there’s still a lot of noise on Twitter. I’m not saying that other people shouldn’t talk about their cats or rant about the neighbor playing his bagpipes too loudly; there are ways to filter these things out.

What got me thinking is: how is it social? Like 90% of Twitter users out there, my list of followers consists, in vast majority, of people I actually know. And my following list usually contains them, plus a few people I don’t know but whose work or tweets I respect. If someone I follow says something worth sharing, I will retweet it to my list of followers (I might as well say “friends”). But since I have most of my friends’ contact information, what’s the difference between that and mass-mailing them? Ease of use, and that’s about it. Technically, it’s the same thing.

But for the Twitter heavyweights, it’s not the case. Twitter isn’t about talking and sharing with your friends, but with actual followers. It’s a news service, with the added twist that anyone can comment (@reply…) on whatever is written. Most of the time, when you @reply a heavyweight, he/she won’t reply back. That’s normal: they potentially have 250k replies to their tweet, so they choose to reply only to the most pertinent ones, if they have time to read them all, or to the ones from their actual friends.

Let’s set that aspect aside, because it is human nature, and quite understandable to boot. Quite frankly, I wouldn’t want to have 10k followers; it would give me a responsibility about what I say that would distract me from my actual pleasures in life. But it’s “social”, which means that somehow it should allow me to participate in a conversation with people I don’t actually know about something that interests me, or to “meet” (always a complicated word in our world) new people I have a kinship of sorts with.

To understand the somewhat foggy point of this post, let me state that I’m a dinosaur. I met most of the people I actually considered (and in some cases still consider) my friends on the internet around the turn of the century. It was done through newsgroups and IRC. The entry point to these worlds is the topic: you joined a newsgroup or a channel based on what you assumed was talked about within. The people discussing things were sometimes famous, in their respective fields anyhow, but anyone could post things that would be read by everyone, and replied to or ignored based on content. True, it left the door wide open for flame wars and spamming, but hey, there are downsides to everything, right?

Then, at the beginning of my professional life, I met most of the people I now like and/or trust at conferences, big and small. Sometimes I worked with them. Sometimes we just happened to be at the same place at the same time, and struck up a conversation. And sometimes we kept in touch. But the beauty of it has always been how easy it is to make that first contact.

Back to Twitter/AppNet.

The biggest question to my mind is how you find someone you’d like to “follow”, and how easy that first contact actually is.

Today, there’s a handful of us on AppNet, so it’s rather easy. Everyone is ecstatic, pioneers and all that, so the barrier to “just saying hello”, or to interjecting in someone else’s conversation, is low. After all, we’re all part of something grand, so there can’t be that many people you’ll end up regretting having a conversation with.

When the service hits its stride (or with Twitter today), it’s going to be much, much harder.

So, how do I find someone interesting?

Today, with Twitter, there are basically three ways to find someone. You either know the person (through their website, or a talk, or personally), find their handle, and click follow; the same applies on AppNet. The second most popular is to see someone you follow talk with someone you don’t, and decide to follow that second person as well. And then there’s the somewhat shaky “I saw a clever thing said in the global feed, and I decided to follow this fella”. Given the number of messages per minute, let’s say the chance of finding a random interesting person like that is low.

So let’s pretend I want to find an iOS developer who speaks Russian and has interesting stuff to say. How would I go about it?

Searching for “russian developer” is out: it will give me all the people who have either “russian” or “developer” in their name or bio. And even if it were “russian” AND “developer”, it would still yield “incorrect” results. If I went the Google way, I’d have the same problem: I’d have to sift through thousands of hits. Kind of defeats the “let’s get to know some new person” vibe.

With a Swiss guy on AppNet, I tried to think about that particular aspect some more, and refined my inkling of an idea: tags. Tagging a person with intelligent keywords would be swell. It’d have to be multilingual, obviously, and have some kind of loose linking between keywords. Ideally, if I’m looking for the tag “russian”, the search should come up with all its translations, but also all its implied relations. For instance, a guy who lives in Moscow but wasn’t tagged “russian” should come up, as in the sketch below.
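To make the idea concrete, here’s a minimal sketch of what I have in mind; every tag, link, and person in it is a made-up example, and a real implementation would obviously need a much richer relation graph:

```swift
// Sketch of tag search with implied relations.
// All tags, links, and people below are illustrative placeholders.

// Each tag implies a set of related tags (translations, geography…).
let implies: [String: Set<String>] = [
    "russe":  ["russian"],   // French translation of the tag
    "moscow": ["russian"],   // living in Moscow implies "russian"
    "ios":    ["developer"]
]

// Expand a tag transitively through the implication graph.
func expand(_ tag: String) -> Set<String> {
    var seen: Set<String> = [tag]
    var queue = [tag]
    while let current = queue.popLast() {
        for next in implies[current] ?? [] where !seen.contains(next) {
            seen.insert(next)
            queue.append(next)
        }
    }
    return seen
}

// People and the tags they were explicitly given.
let people: [String: Set<String>] = [
    "alexei": ["moscow", "ios"],  // never explicitly tagged "russian"
    "john":   ["developer"]
]

// A person matches a query tag if any of their tags expands to it.
func find(tag: String) -> [String] {
    return people
        .filter { entry in entry.value.contains { expand($0).contains(tag) } }
        .map { $0.key }
}

print(find(tag: "russian"))  // ["alexei"]: Moscow implies Russian
```

The point isn’t the code, it’s the shape of the data: tags aren’t flat strings, they form a little graph, and the search walks it.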

So that’s one feature I’d like on AppNet. Next up!

Let’s pretend we have a way of finding a dozen matches on a somewhat decent set of criteria. Who’s “interesting”?

The Twitter metric is threefold: the number of followers, the number of posts, and the number of people who find those posts interesting (manually flagging them as “favorites”). Of these three, if I’m looking for new people to interact with, only the third one is relevant. A person with 10k followers is less likely to have a chat with me out of the blue, but that shouldn’t prevent me from contacting them; and someone with 10k followers isn’t more social than anybody else, I think. They just happen to be famous, usually for reasons outside of Twitter. So that doesn’t help me decide. Someone who posts every 10 minutes might be someone who likes to chat, or some kind of news junkie. It doesn’t factor into my decision either. The third metric is more interesting and accurate, provided your level of trust in the general population is high: if you think most people are real people who flag a post as a favorite because they genuinely liked it, you’re almost out of the woods.

So by default, having a first-glance measure of the number of favorited posts, or re-posted posts (which is basically the same thing), seems like a good idea. Obviously, if the level of trust in the general population goes down, a good second-glance measurement would be the same statistics, but restricted to people not too distant from you. Arbitrarily, I’d say two levels of follow would provide good enough coverage, but I might be wrong. We could probably have something that lets me parametrize this, but “who, among the people I follow and the people they follow, says the things that are most shared” would be a good filter for me.
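Here’s what that filter could boil down to, as a hedged sketch (the follow graph, post IDs, and favorites below are all made up): collect everyone within two follow hops of me, then rank posts by how many favorites come from that circle rather than from the whole world.

```swift
// Sketch: rank posts by favorites from my two-hop follow neighborhood.
// The follow graph and favorites are made-up examples.

let follows: [String: Set<String>] = [
    "me":    ["anna", "bob"],
    "anna":  ["carol"],
    "bob":   ["carol", "dave"],
    "carol": [],
    "dave":  []
]

// Everyone reachable from `user` in at most `hops` follow steps.
func neighborhood(of user: String, hops: Int) -> Set<String> {
    var current: Set<String> = [user]
    var seen = current
    for _ in 0..<hops {
        var next: Set<String> = []
        for u in current {
            next.formUnion(follows[u] ?? [])
        }
        next.subtract(seen)
        seen.formUnion(next)
        current = next
    }
    return seen
}

// postID -> set of users who favorited it.
let favorites: [String: Set<String>] = [
    "post1": ["carol", "dave"],
    "post2": ["stranger1", "stranger2", "stranger3"]
]

// Score each post by favorites coming from my two-hop circle only.
let circle = neighborhood(of: "me", hops: 2)
let scored = favorites
    .map { (post: $0.key, score: $0.value.intersection(circle).count) }
    .sorted { $0.score > $1.score }

print(scored)  // post1 outranks post2 despite fewer raw favorites
```

The nice property is that the raw favorite count stops mattering: a post loved by three strangers loses to a post loved by two people in my extended circle.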

So that feature makes a lot of sense too, especially if the average quality of the people on the platform is good. That’s the second feature I’d like to see in this new social network project.

But the real question is “what do we expect of a Twitter competitor?”

Right now, the consensus is leaning towards “it has to do everything Twitter does, but with a clearer business model, and less shady business practices”. Which I think we all agree is the minimum requirement.

But I personally have higher expectations. I’d like to have my “virtual office”, and I’d like to randomly or willingly bump into interesting people I wish to have that casual and loose relationship with, be they famous or not. That’s what I’m investing in.