The Time Constant Of Computing Science

Moore’s Law be damned, my upgrade/compile/download times remain more or less constant.

I was musing about that while upgrading my 2 main computers to 10.11 and my 2 main iOS devices to version 9 (9.0.1 soon followed): my Retina MacBook Pro may be faster than all my old computers rolled into one, but it still takes me roughly a day to upgrade to a new major release. Between the system itself, the apps to update, the various libraries to check, etc., it is a huge time sink.

And the same goes for compilation times. It used to take me 2-5 minutes to compile my biggest project on my old clamshell iBook (time enough to fix myself a cup of coffee), and it’s still the same in 2015.

We always tend to use our devices to capacity. Drives are full (who’s ever going to need more than a gigabyte?), networks are “too slow”, projects are complex enough to take forever to build…

I taught Android development for a week recently and the need for instantaneous results is omnipresent, even though mobile development is kind of a reset in that way: small capacities, shoddy connectivity, lack of space in general. We are so used to manipulating 40MB GIFs and 1GB video files that we forget these things were science fiction only 10 years ago. And don’t get me started on Swift compile times…


[MeatSpace] Why We Won’t Get Rid Of Hardware-Minded Programmers Anytime Soon

The real world can be tough. Your magnificent app passed all the automatic testing in your nice development environment with flying colors? And now that it’s shipped, bug reports regarding code you were sure couldn’t fail are coming in? Welcome to meatspace.

One of the great things mobile development has brought back from the dead is constraints.

Back when I started, in the glorious time of Mac OS Classic, processors could only go so far, you had a single core, and limited memory. That meant the UI could stutter and die under heavy load, and that you could run out of memory, forcing you to free some, or crash.

The predominant attitude when I taught was “who cares if some users don’t have enough RAM? a year or two from now, they will all have doubled their specs, we’re fine”. Which at the time was somewhat true, if a bit optimistic. Nowadays, on our computers at least, we do have almost infinite virtual memory, making RAM preservation a bonus, something “elegant” to do when releasing an app. As an example, I have 3 tabs open in Safari, totaling with its helpers a magnificent 400MB of used memory. That’s a quarter of the “standard” 2GB we had only a few years ago, and a quarter of what we have today on high-end phones. 3 tabs. 150MB per tab. Except that memory isn’t free: virtual memory has to be shuffled around, making every application, including your own, slow.

But, I hear someone shout, we do a lot less on our phones! Really? Don’t you think that apps like Facebook, always reacting to incoming messages, or Twitter, streaming in the background, need some RAM to function? It is so easy to forget your app isn’t running all alone on your phone, isn’t it? That’s why I was shocked to discover that “killing apps” isn’t a power feature at all, it’s a standard one. I saw a young woman doing it in the subway and straight up asked her why. She told me it made her phone faster. When I asked her how, she answered that she wasn’t technically oriented, but that she guessed it limited the number of things her phone was trying to do at the same time, maybe?

Again, it is common knowledge at the base user level that killing apps isn’t a thing you do once a year to fix a nasty glitch, but something you do on an hourly basis so that your phone doesn’t heat up, slow down, or eat its battery. Back in the day, I had trouble explaining to my dad that he had to quit his Mac applications if he wasn’t using them. Now, maybe he will tell me he kills phone apps every hour to make his device work better.

So, ok, sure, mobile hardware is following a curve similar to the one computers followed in the nineties and the noughties, but just like the computer curve did, the mobile curve will plateau. And your app could be running on a three-year-old phone that has half the RAM and a third of the CPU power of the fancy new one you are developing on.

A few things to remember when building your next awesome app:

  • Use system frameworks as much as possible for your animations. They are built to skip frames in case of intensive use, and to coalesce changes to avoid flooding the GPU with graphics commands
  • Queue your network commands. Run one or two at maximum speed rather than 20 at a crawl. Worst comes to worst, you can pause your queue and resume commands when the CPU grants you some time
  • In the same vein, don’t assume everyone is on ultra-fast LTE, with the latest phone, and a full battery. Check for that, pause your queues when you don’t have a good enough network, and resume later. Bonus points for persisting the queue to disk in case of app termination
  • Don’t immediately compute a ton of stuff every time you receive a new piece of data. Redesigning the whole screen, or database, because one letter has changed? What’s that? 1967?
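The queue-and-pause advice above can be sketched with Foundation’s OperationQueue. This is a minimal illustration, not a drop-in implementation: only OperationQueue itself is real API, and the names (networkQueue, enqueueRequest, the reachability callbacks) are invented for the example; real code would fire a URLSession task inside each operation.

```swift
import Foundation

// Run one or two network commands at maximum speed rather than 20 at a crawl.
let networkQueue = OperationQueue()
networkQueue.maxConcurrentOperationCount = 2

// Each network command becomes an operation on the queue.
// (enqueueRequest is a hypothetical helper, not a framework call.)
func enqueueRequest(_ name: String) {
    networkQueue.addOperation {
        // Real code would run a URLSession task here.
        print("running \(name)")
    }
}

// Hypothetical reachability callbacks: pause the whole queue when the
// network degrades, and pick up where we left off when it recovers.
func networkDidDegrade() { networkQueue.isSuspended = true }
func networkDidRecover() { networkQueue.isSuspended = false }
```

Suspending the queue keeps the pending operations around, so nothing is lost between a bad tunnel and the next good cell; persisting the queue’s contents to disk on termination is the extra step the last bullet point suggests.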

[AppleTV] Hyped again, after 8 years

There is an undeniable gamer part in me. I like challenge and I like the escapism it allows. If I’m stuck on a tricky line of code, or if there is some data I need to digest before formulating a plan, I find that sending spaceships in space helps me let go of my block. Writing sometimes does that to me as well, which is why I’m going to try and write some more.

Casual gaming wasn’t really a thing back when I wrote about the original Apple TV and the opportunities it might have provided. Sadly, hacking it to unlock the OS features underneath was (and still is, till the new one ships) the only way to get that sort of game onto the big screen.

“Casual gaming” is something of a strange beast. It usually refers to games that can be picked up and let go on a whim, or maybe games that appeal to people who don’t want too much difficulty in their gaming experience, or maybe games that require minimal input, or maybe games that cost as much as a cup of coffee to buy, or maybe cost less than 100k to make. No one has actually explained to me what “casual gaming” is, and why it is inferior to (or indeed any different from) regular gaming. I know people who have sunk way more time into Candy Crush (a popular “casual game”) than I ever did into a Zelda game, for instance.

To me, gaming, on a computer, a console, on a table, or in the recess yard, is just a way to do something that makes you feel good and doesn’t directly translate to your “obligations” (school, work, housekeeping, whatever). There is a part of us that wants to “slack off”, and games are a way to express that side. That being said, games also reward you with things useful for those “obligations”, such as a better understanding of teamwork, strategy, communication, coordination, and a lot of less obvious perks. Gaming is good, in general, since it provides you with a risk-free environment to test things. Whether or not we are aware of it when we’re playing, it changes the way we look at some of the non-game activities in our lives. I am fully aware that games also provide a risk-free environment for the most abhorrent behaviors, but that’s a topic for another day.

So from now on, I will just ignore the “casual” part of “casual gaming”, because ultimately it makes very little sense. Now why would anyone want to play games on the Apple TV?

For the same reasons we play them on our phones.

I know it’s a radical notion, but phones weren’t invented for us to throw birds across the screen. They were built originally so that we could talk to other human beings. Then we tacked a few other things onto them, mostly because why the hell not, but also because it made good use of a device that we always have in our pocket anyway. Since it’s used as a phone for very little time overall, why not make it more useful when its primary function isn’t active? So… other forms of communication? Text messages, emails, social networks, etc.? Yup, sure. But when it can do that, it can do a lot more. And people wanted to play games when they didn’t have anything else to do on their phone and couldn’t access any other device (yes, that was the reasoning I heard at the time). Turns out, sometimes, some people almost exclusively want to play games on their phones, with a phone call or message here and there for good measure. People just like to play games, it seems.

Back to the TV: the current TV model is a comfortable 70+ year old thing. I arbitrarily decided that the current TV model started when the first ads were aired, during WWII, but feel free to disagree. You have a big screen somewhere in your home, like 80% of Earth’s population, you turn it on, select a channel, and watch. You may switch channels too, but that’s TV 2.0. Someone out there decides what airs at what time, and you decide if you want and can watch it. In recent decades, we also added the ability to decide when you want to watch it, via recording. And even more recently, we added the optional bonus of not even having to record to decide when to watch it. But the model stays roughly the same: someone creates something you might want to watch, sets a price (that you can pay with “sitting through ads”, or in decent money), and you decide if, and sometimes when, you want to watch it.

Just like the phone, your TV set (and added boxes) has a primary function, and supposedly does it well enough for the vast majority of the human race to actually own one.

Is there enough down time to justify putting a game on the screen?

That was the argument for the phone, and it makes sense to use it on the TV as well, at least as a first step. And the answer is yes. Console games sell really well. You could argue it’s because people haven’t transitioned yet to on-demand content, and therefore play between things they want to watch, if you think people would rather enjoy something passively than play games. It’s a valid notion, when you look at the restricted and small offering we get on-demand in most countries. You can also argue that playing games appeals more to hyperactive people, who usually play games on their phone while watching TV anyway. Plus, switching channels or browsing through on-demand isn’t exactly “watching TV”, so some people might want to repurpose that browsing time into a more active endeavor.

At any rate, people like games and there is enough “not tv” time on a tv set to justify their existence, even to the most hardcore “one device per function” oriented minds out there.

All in all, this is why I have always wanted to be able to make/play games on the AppleTV, and why I think games will be a decent success on that platform. People are lazy. If they choose an Apple TV for their passive content, they will get games as well, because switching to a different device may cross the “too much work” threshold. Just like with the phone. And for the same reasons you guys made games for the phone, you should for the TV.


[VMWare] Using VMWare & Clonezilla to expand your horizons

Probably something everyone but me knew already, but at least next time I have to do something similar, I’ll have a written record.


My original Bootcamp partition was a smallish thing at the end of the first disk I had in that machine. I have bigger needs and a new disk and all that jazz, so I want to clone the partition to the new one. Forget about Disk Utility for that, NTFS is not its forte and I’ve had weird partition size issues with it, so I use Clonezilla, which works better, performs checks and repairs, and is overall smarter.

I could restart on the CD and have my computer do nothing but that for a couple of hours, but I would rather use VMWare Fusion.


I have prepared my new partition as ExFAT on the mac (the closest it knows to creating a new NTFS one). By default, Bootcamp virtual machines grab the whole disk for the purposes of booting (even though it only uses and unmounts the NTFS partition), so I decided to use the same for the new one.

VMWare won’t let you add an existing disk directly using the UI, but it does work from the command line. Using mount, I check which disk has the partition I want to use as the destination:

$ mount
/dev/disk2s1 on /Volumes/WIN7 (msdos, asynchronous, local, noowners)

So disk2 is my target.

With VMWare, you can create “proxy” disks for existing real hard drives with the vmware-rawdiskCreator command. So I create the vmdk file:

$ /Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator create /dev/disk2 fullDevice ~/Desktop/Win7.vmdk ide

For some weird reason, the VMWare Fusion app won’t let you add that disk as is, so you need to right-click on the Bootcamp virtual machine, reveal it in the Finder, open the package contents, and edit the .vmx file. At the end of the file, I added

ide1:1.present = "TRUE"
ide1:1.fileName = "/Users/zino/Desktop/Win7.vmdk"
ide1:1.redo = ""

I then added the Clonezilla iso to the virtual CDROM, set the machine to boot off of it, and clicked start.


Clonezilla has a plethora of modes, machine to machine over the network being an awesome one for instance, but what I want to do first is check that the partition is the right one and formatted correctly.

So I enter the command prompt and fdisk the drives. There should be two, sda and sdb, one being the source, the other the target. Use the “p” command within fdisk to check the partition scheme and the availability of the disk. For me, the destination partition was sdb2. So I formatted it as NTFS:

$ sudo -i
# mkfs.ntfs -Q /dev/sdb2

Double-check everything if you are unsure of which partition you are about to completely erase! Do not come back to me afterwards complaining your disk was erased… My partition was the 2nd one of the 2nd disk (sd b 2), but yours might be different.

Then go back to the Clonezilla menu, select local, then local partition to local partition, choose the right ones as source and destination, and enjoy the show.


Why do the copy in a virtual machine rather than rebooting? Well, because that way I could write this post while it was doing the transfer. Because I trust Clonezilla with copies, with all its smart things and its checks, even if it’s kind of daunting if you fear the command line. And because I generally hate the idea of having a computer stuck for hours on something that uses less than a percent of a percent of what it could do.

Oh, and because that way I can tell people who didn’t know about this way of mounting real disks into VMWare, and give a shoutout to Clonezilla.

Let me know if it’s useful to you!


The New Space Age

If you know me a little bit, you know I’m a sucker for space stuff. And research in general. Doing something that has never been done before, or furthering an agenda that goes into that direction has always been something that gives me goose bumps in an awesome way.

2014 has been a wonderful year for space buffs, but two very recent missions have hopefully recaptured the interest for everything interplanetary, Rosetta/Philae and Orion.

“It’s like hitting a bullet with a smaller bullet, while wearing a blindfold, riding a horse”

In March 2004, some people thought it would be a cool thing to achieve. Rosetta was supposed to come close enough to a comet to take detailed pictures and perform analysis, so why not try to land Philae on it too?

Think about it: a route spanning 6.4 billion kilometres over 10 years, to hit a rock 4 kilometres in diameter (1/1600th of Earth). Mind-boggling. And yet, it was done, in the name of science. There are a lot of reasons to do such a thing, and the ESA explains them nicely.

“To Infinity and Beyond!”

Earth isn’t doomed just yet (even though it’s getting there), but we all know in a corner of our minds that we will have to leave it for another planet at some point in the future. Almost 50 years after our first baby steps in interplanetary travel and the Apollo Program, NASA tested a new craft designed to take us back to the Moon, and even Mars. Even if it’s currently empty, it signals a commitment to a spacefaring culture once more. Sure, we are nowhere near having a solution for interstellar travel, but when we start colonizing the Solar System in earnest, we’ll be closer to the stars.

THIS is why funding research is important

Does it make any difference today to know what that comet is made of and what it has seen during its travels? Does landing on Mars allow me to have a summer house there? Of course not. But our grandchildren will be thankful we didn’t spend too much time navel-gazing as if the universe were restricted to Earth.


[Xcode] Broken IPA Generation

As of Xcode 6.0.1, you can only generate an IPA with a certificate/provisioning profile pair that matches a team you are part of (it offers only the choices present in your Accounts preference pane).

Before ranting about why this is stupid as hell, here’s a workaround:

xcrun -sdk iphoneos PackageApplication [path to the .app] -v -o [path to the ipa] --embed ~/Library/MobileDevice/Provisioning\ Profiles/[profile to use] --sign "[matching developer NAME*]"

NAME is the name that appears in the popup menu in your Xcode build settings, not its ID.

After a somewhat lively discussion on Twitter about that, two things:

  • I know Apple would prefer that developers be organized in teams managed on their servers. It’s just not practical for a lot of them, and there are even a few good reasons not to go that way
  • It’s stupid to make that drastic a change without warning people, when 3 weeks ago the official way of having a third-party developer generate IPAs for your company was to give them a p12 and a .mobileprovision and let them do their thing

For those of you who don’t yet know how development of a mobile application works, here’s a quick rundown.

A customer contacts me for a project. We agree on a timeframe and a price. I write the code and provide them from time to time with testable betas. When we agree it’s finished, I give them the final IPA to put on the store and we call it a day.

Providing betas and giving an IPA for the App Store work exactly the same way: a binary is produced, which is put in an IPA (a kind of installer package for iOS), then that IPA is signed and transmitted. On the other end of the wire (be it the customer or the App Store), the IPA is decompressed, the signature is checked for validity (by app ID, device, and status of the Apple account), and the app can be run or put on sale.

In that scenario, if I use my certificate, I have to enter the device IDs the customer will test the app on, and my developer account allows for 100 of those, in total. So if I have 10 customers with 10 devices a year, I can’t work anymore. So, most of the time, the customer has to provide the relevant information for me to give them access to the betas, and of course, since they’re releasing it under their own name, the relevant information to produce the final version, which is a different pair of keys.

So far, so good: all they had to do up until now was give me a couple of p12s (key archives) and the corresponding profiles, and manage the test devices, the release, etc. themselves.
It allows whoever’s in charge to retain access to, and knowledge of, what the company is doing. Does that person want me to see they are also working on a competing product to something I’m doing for somebody else? Of course not. And there’s no reason to give me that kind of access. Oh, and if the customer wants to prevent me from using that certificate again, all they have to do is revoke it.

The new way of doing things is to have the customer invite the developer in the team (in the Apple sense of the term), which gives the developer access to every piece of information under the sun (even when we can’t use it directly).

This is part of an ongoing cycle of making life difficult for contractors. We always have to find workarounds. The idea that almost every iOS/Mac developer out there is writing code for the structure they belong to, which will then release it under its own name for the general public, is ludicrous. It hinges on something that has been gnawing at me for years: the idea that code and binary are the same thing, and that this is what I’m selling.

That idea is false. When you get Unity3D for your game development, you DO NOT GET THE CODE. For Pete’s sake, we don’t get the code of the OS we are developing on! The idea that when a developer is hired, the customer automatically owns the code is one of the many fallacies I have to deal with on a monthly basis. You hire a developer for his/her expertise first and foremost. That expertise might then be applied to internal code, which the dev won’t own in the end anyway, or to a newly minted piece of code, which might or might not be handed over with the right to use it as part of something that has value. It is a separate item on the negotiation list.

I might delve into the pros and the cons of giving out your source code with the binary in a later post, but let’s get back on topic.

If, like me, you don’t always give the code with the binary to the customer, you’re screwed. Of course they won’t give you access to their company’s secrets by adding you to the team if they don’t want to. And, obviously, you can’t release the binary under your own name for a customer who wants an app.

Please give me back a way to sign IPAs for customers, directly from my IDE.

Thank you, and sorry for ranting.