UIkit and AppKit unification

The latest fad in tech punditry is to claim that the barrier to bringing iOS apps to the Mac is that the two graphical frameworks are so different that the work is too complex for developers. This is false.

Let’s start with the elephant in the room: an app generally has to bring more to the table than its UI. And outside the UI, the coverage is essentially total: fewer than 5% of the classes and methods available on iOS are missing on the Mac, while the same frameworks actually offer more methods on the Mac side. So for everything but the UI, “porting” mostly means recompiling. If your MVC is well implemented, that part at least won’t cause any issue.

The UI part is a mixed bag, but the paradigms are the same nowadays. That wasn’t the case until fairly recently, but table views are now cell- or view-based on both platforms, the delegates behave as expected, etc. As an experiment, on uncomplicated examples, I did a search and replace only, no tweaking, and it worked.

Now, of course, the issue isn’t technical: the iOS mono-window, mono-view model is wasted on the Mac. A lot of applications, including Apple’s, take the “landscape iPad” paradigm to make it less obvious: you have the sidebar and the main view, and it works just like the master/detail project template that comes with Xcode.

Porting a successful app from iOS to the Mac is indeed a bit of work. The Mac is window centric and iOS is view centric. Some things you cannot do on iOS are possible on the Mac, like covering parts of your UI, or dragging and dropping elements. It is definitely a very different way to think about the user experience, and the design choices are certainly less constrained and less obvious. But there is no real technical hurdle, unless the vast majority of your app logic lives in the view controllers rather than in a separate codebase. And then again, the Mac now has NSViewController that works exactly like, who would have guessed, UIViewController, and apps can run in full screen mode, so who knows?

The tools (Xcode, IB, etc) are the same. The non UI frameworks are the same. The UI frameworks are similar where it makes sense (putting stuff on screen) and dissimilar where you have to (input methods and window management). That’s it.

Now, you can definitely agree that the Mac app landscape is very different from the iOS one. People are used to giant things that install other things everywhere, demos, shareware, unrestricted access to the filesystem, the possibility to copy and paste anything from anywhere, or drag and drop anything from anywhere and put it somewhere else, where it will do something. They have multiple apps that come and go depending on the modal dialog boxes that show up, and pieces of stuff like palettes that they can arrange any damn way they please, thank you very much. For all these reasons, designing a successful Mac app is challenging. Big screens, small screens, people who like lots of little windows or a few big ones, people who use Spaces, people who use keyboard shortcuts more than the mouse, people who don’t know how menus work, people who have a gazillion menu items, fonts that can be changed system-wide, color schemes: those are all valid reasons to dread an attempt at making an app that will appeal to most people.

But you don’t get to play the technical hurdle card. All these interactions have been studied, refined, and solved over 30 years of graphical interfaces. You have to choose what will work best for your needs, and yes, this is hard. But it’s not about code.


The Long Journey (part 3)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. With my first full-fledged Android application now (mostly) behind me, I figured a few notes might be helpful for other iOS devs out there with an understandable apprehension about Android development.

Basically it boils down to 3 fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be approached like web projects, rather than as monolithic iOS apps
  • Yes, fragmentation of the device-scape sucks

Fragmentation is the main problem

It’s been said a bunch of times: the Android market is widely spread in terms of hardware and OS versions. My opinion on the matter is that device fragmentation is the obvious downside of the philosophy behind hardware-agnostic OS development, and not a bad thing in and of itself. Having an SDK that allows you to write applications for things ranging from phones to TVs has a lot of upsides as well.

The thing that really irks me is the OS fragmentation. Android is evolving fast. It started out pretty much horrible (I still shiver at the week I spent with Donut, version 1.6), but Jelly Bean feels like a mature system. However, according to the stats out there, half of the devices are still running 2.3, which lacks quite a lot of features… I’ll try to avoid ranting too much about back-porting or supporting older systems, even though I have to say most of my fits of rage originated in that area.

Designing with fragmentation in mind

As I said in part 2, my belief is that designing for a large scope of screen sizes is closely related to designing for the web: you can choose to go unisize, but you’d better have a very good reason to do so.

What I think I will recommend to designers I might work with on Android applications is to create interfaces arrayed around an extensible area. Stick the buttons and such on the sides, or have a scrollable area without any preconceived size in one of the two directions. Think text editors, or itemized lists like news feeds. It’s a luxury to have only a couple of form factors, when you think about it: desktop applications don’t have it, web applications don’t have it, and iOS might not have it for long. Where most of the designers I used to work with were print-oriented (working with a page size in mind), nowadays it tends to be easier to talk about extensibility and dynamic layouts. But if all you’ve done recently is iOS applications, it might be a little painful at first to go back to wondering where items land when the window resizes. Extra work, but important work.

Coding with fragmentation in mind

According to Wikipedia, less than one third of users are actually running 4.0 and above. The previous version accounts for something like a quarter, and the vast majority of the rest runs 2.x.

The trouble is, many things that are considered “normal” by iOS developers only appeared in 3.0 or 4.0. The ActionBar alone (which can act like a tab bar, a button bar, or both) is 3.0 and above (API level 11). When you think about it, it’s mind-boggling: half of the users out there are looking at custom-coded tab bars… That’s why Google has been providing backwards-compatibility libraries left and right, to support the minimal functionality of all these controls we take for granted.

But it also means that as a developer, you have to be constantly on the lookout for capabilities of your minimal target device.

There. I didn’t rant too much. Sighs with relief.

But wait, that’s not the only problem with fragmentation! The other problem is power and RAM. Graphical goodies are awesome for the user, but they also use an inordinate amount of memory and processing power. As a result, the OS constantly does things to the UI elements: they get destroyed or shelved, and switching rapidly between two tabs might mean recreating the views from the XML each and every time. You, as a developer, have to keep constant track of the state of the items onscreen. Again, this might be counterintuitive to iOS developers, who are pretty much guaranteed to have all the views in the hierarchy pre-loaded, even if some of their memory may have been dumped (after the app has been informed the system needed space and given a chance to clean things up… otherwise the app is killed, to keep things simple).

The funny part is that this is completely obvious. We are, after all, running on devices that, for all their power compared to my trusty (Graphite, clamshell) iBook from 2000, are still tight and limited. Last millennium, I ranted a lot at developers who didn’t understand that not every computer out there was as powerful as their shiny new G4 with AltiVec enabled. With mobile computing, we’re back to having to worry about supporting both the iPhone 3GS and the iPhone 5, and on Android, the area to cover is just wider.

Debugging with fragmentation in mind

Last, but not least, all of the previous points mean that making sure your app works well enough on all your target devices requires a lot of testing. And I mean a lot.


The few big Android projects I have seen from the inside had at least a dozen devices on-site to test and debug on. The wide variety of hardware components, combined with the specificities of each and every manufacturer (who love to override the standard hooks in the system with their branded replacements), made that a requirement for actual grown-up development.

Maybe that’s why Android devices sell more than iOS devices: each Android developer out there owns a bunch of phones and a bunch of tablets, whereas iOS devs have at most two devices to test on, the oldest supported one and the one the dev is actually using.


Sorry. That troll might be my subconscious punishing me for reining in the rants earlier.

For the somewhat (in theory) simple project I did all the coding on, we had to conscript a lot of friends for testing purposes. And most of the time it turned out to be necessary: the wide variety of screen sizes, component parts, connectivity options, OS versions, and system configurations (both on the manufacturer side and the user side), while it made for a fascinating survey, provided us with a broad spectrum of potential problems. Of course, it wasn’t made any easier by the fact that setting up a computer to get the device’s logs requires some technical skills on the testers’ part.

But, again, finding a way to work fine on most devices forces developers to be careful with CPU and RAM, and designers to be more focused and precise in their work, which is something I’m all for. Java gets a lot of crap for being slow and clunky, but the reality is that developers have always taken a number of things for granted, including a competent garbage collector and some cool syntactic sugar, which made most of the programs I reviewed really bloated and… careless. I think that if a good Android app exists out there, it means there are some very talented developers behind it, and that they might even be more aware of the constraints embedded/mobile programming thrusts upon us than iOS/console programmers are.

In conclusion, because there must be an end to a story

All in all, I enjoyed developing that app for Android. I happen to like the Java programming language in itself, even though it gets bad press from the crappy VMs and the crappy “multiplatform” ports out there.

I might be a more masochistic developer than most, but I like that it forces both me and the designer to communicate better (because of all the unknowns to overcome), and to write more robust (and sometimes elegant) code.

Of course, in many respects, when you compare it to iOS development (which has its own set of drawbacks), it might not feel like a mature environment. With all its caveats, its backward-compatibility libraries, its absence of a centralized UI “way”, and its fragmented landscape, it forces you to be somewhat of an explorer, in addition to being a developer.

But with 3.0 (and it’s especially visible in 4.0), it’s starting to converge rapidly, and one can hope that, once Gingerbread is finally put out of its long-lasting agony, we will soon have a really complete way of writing apps that work on a very wide range of devices (can’t wait to see what the Ouya will have in store, for example) with very few tweaks.

But, once again, I used to write stuff in assembly or in proprietary languages/environments for a lot of embedded devices, so there might be a bias in my enthusiasm.


The Long Journey (part 2)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. With my first full-fledged Android application now (mostly) behind me, I figured a few notes might be helpful for other iOS devs out there with an understandable apprehension about Android development.

Basically it boils down to 3 fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be approached like web projects, rather than as monolithic iOS apps
  • Yes, fragmentation of the device-scape sucks

Android projects as Web projects

This is not about technology, or about pitting app developers against web developers. Android, with its Intents and Activities, layouts and drawables, resources of many kinds, memory management schemes, etc, felt to me like web development should be. Now, let me put it out there: as you can see from the surrounding pages, I am not a web developer. But the HTML/CSS/JavaScript stack feels much closer to Android development than iOS techniques do.

The basics of View Management

I’ll set aside the hardcore, code-only way of doing things (which also applies to iOS dev). I’m talking about the more standard XML-layout-plus-corresponding-class approach here.

In the resources, there is a bunch of subfolders supporting multiple screen sizes, ratios, input types, and languages. Think responsive design in CSS. Once the correct set of items has been selected, it is passed along to the Java class (think JavaScript), which then accesses each identified item to modify it.

In the Activity code, this translates to inflating resources, then assigning stuff to them: contents, onClick (familiar yet?) responders, and the like. But if you don’t do anything, it’s just a static page.

public class MainActivity extends FragmentActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // this will select the activity_main.xml in the most appropriate res folder
        TextView helloLbl = (TextView) findViewById(R.id.hello); // find me the view corresponding to that ID
        helloLbl.setText("Yay"); // replace the current contents with "Yay"
        // ...
    }
    // ...
}

Graphically, since devices cover a somewhat big range of ratios and pixel sizes, having a design that “just works” is kind of difficult. In my opinion, which is that of a non-designer developer, it’s a lot easier to decide early on that one of the dimensions is infinite, like on a web page (otherwise, precisely tailored boxes change ratio on pretty much every device), or to have controls clustered on one end of the screen and a big scrollable “contents” area that resizes depending on the screen you’re on (kind of like a text editor). Any other arrangement is a bag of hurt…

Contents lifecycle

Maybe it’s just the project(s) I have been working on, but the mechanics of the views feel like what I have done and seen in dynamic web apps built around HTML5 and JavaScript, rather than a pure PHP site where the server outputs “static” HTML for the browser to handle:

  • The view is loaded from the XML. At that point, it’s “just” a structure on-screen.
  • The callback methods (onCreate, onResume, …) are called on the Java side
  • The Java code looks for graphical items with an id (or a class), and fills them, or moves/resizes them, or duplicates them…

Some will say that the iOS side of development works the same (with XIBs and IBOutlets, etc.), and maybe they are right, to a certain extent. It just feels like the listener approach allows a potentially much more varied way of doing things, and that listeners are called very often: for example, since a text label resizes to fit its contents by default, changing its text will trigger a resize/reorganization of the layout, which will trigger other changes.

And since any object in the program can be a listener/actuator for these events, there’s a lot of competition and guesswork as to what will actually happen. The text label may respond a certain way (which can be overridden), its superview/layout engine another way (idem), all the way up the chain, which will trigger some changes down the branches again. During an ideal (and somewhat simple) “load layout, fill data boxes” non-repeating cycle, my onMeasure method (responsible for reporting the “final” size my control wishes to have, depending on a bunch of parameters) was called up to 8 times in very close succession.

But that same listener mechanism, so pervasive in everything Java, also opens a lot of ways to catch anything that happens in your application, from any object:

  • you can detect layout changes from the buttons it contains
  • you can detect a tap on a (usually non-tappable) control from a graphical neighbor, thus extending the “tappable area”
  • you can react to content changes and limit/alter them on the fly

But so do the other views in the frame, not written by you!
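The mechanism is just the classic listener pattern. Here is a minimal, non-Android sketch of it in plain Java — TextBox and ChangeListener are hypothetical names of mine, not Android classes:

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerDemo {
    interface ChangeListener {
        void onChanged(String newText);
    }

    static class TextBox {
        private final List<ChangeListener> listeners = new ArrayList<>();

        void addListener(ChangeListener l) { listeners.add(l); }

        void setText(String t) {
            for (ChangeListener l : listeners) l.onChanged(t); // every registered object reacts
        }
    }

    static String lastEvent = "";

    public static void main(String[] args) {
        TextBox box = new TextBox();
        box.addListener(new ChangeListener() {        // anonymous listener, as on Android
            public void onChanged(String t) { lastEvent = "relayout for: " + t; }
        });
        box.setText("hello");
        System.out.println(lastEvent);                // relayout for: hello
    }
}
```

On Android, the framework’s own views register listeners exactly the same way, which is why a single change can fan out into a cascade of callbacks.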

View lifecycle

For memory-obsessed geeks such as myself, the view retain/release cycle took some getting used to: views are kind of like the tabs in your mobile browser. Sometimes you browse a website, open another tab, do some stuff, and when you get back to the first tab, it’s completely blank and gets reloaded. Or not. It depends on the browser’s memory-handling techniques, the available amount of RAM and processing power, and the page’s running JavaScript.

Since the views might be re-created from scratch each and every time, the strategy for holding on to some data becomes critical. Do you keep it in instance variables of the controller, potentially hogging the RAM, but readily accessible? Do you serialize it into the Bundle the system uses for that kind of thing (which I guess is written to disk when the RAM is full) every time the view goes away, and then, in every restoring method (onCreate, onViewStateRestored, …), test whether some data is present, deserialize it, and put it back where it belongs? Do you reload it from whatever web service it came from? Do you serialize it to disk yourself?
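This is not the Android Bundle API itself, but the underlying “serialize on teardown, restore on recreate” idea can be sketched in plain Java with standard serialization (the class and method names are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StateDemo {
    // flatten the state to bytes, roughly what a Bundle does for you
    static byte[] save(Serializable state) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(state);
        }
        return bytes.toByteArray();
    }

    // bring the state back when the view is recreated
    @SuppressWarnings("unchecked")
    static <T> T restore(byte[] saved) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(saved))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] saved = save("draft text");   // the view goes away
        String back = restore(saved);        // the view is recreated
        System.out.println(back);            // draft text
    }
}
```

The Bundle adds the bookkeeping (keys, lifecycle hooks, disk spill) on top of this, but the decision of what goes in and when it comes back out is still yours.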

All of these mechanisms require a lot of testing on various low-memory devices, because there are out-of-view things happening that will flush your data from memory if you’re not careful. And although you won’t find yourself with pointers that look OK but have been freed, some data might be invalid by the time you try to use it.

To Be Continued!

Next time, we’ll discuss the extremely varied range of android devices your app might be running on!


The Long Journey (part 1)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. With my first full-fledged Android application now (mostly) behind me, I figured a few notes might be helpful for other iOS devs out there with an understandable apprehension about Android development.

Basically it boils down to 3 fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be approached like web projects, rather than as monolithic iOS apps
  • Yes, fragmentation of the device-scape sucks

Java vs Objective-C

Forget everything you think you know about Java from anything other than actually coding something useful in it. Does it pain you when somebody says Objective-C is not a real language? That C is just a toy for S&M fetishists? Java has been around for a long time, and is therefore mature. What might make it slow (disproved by benchmarks a bunch of times) or a memory hog (it has an efficient garbage collector) is bad programming. I once had to code a full Exposé-like interface in Java, and it ran at 60fps on my 2008 black MacBook.

But for an ObjC developer, it does have its quirks (apart from basic syntax differences).

Scopes are respected

That’s right. Even knowing the pointer address of an object and its table of instance variables won’t help. If it’s private, it’s private. That means that, for once, you’ll have to think about the class design before writing code. Quick reminder:

  • private means visible to this class only
  • protected means visible to this class and its descendants
  • package (default) means visible to classes in the same package
  • public means accessible to everybody
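A quick sketch of the first two levels, with hypothetical Counter/FastCounter classes:

```java
class Counter {
    private int count;       // visible only inside Counter
    protected int step = 1;  // visible to Counter and its descendants

    public int increment() { // public: the only way outside code touches count
        count += step;
        return count;
    }
}

class FastCounter extends Counter {
    FastCounter() {
        step = 10;           // allowed: protected is visible to descendants
        // count = 0;        // would not compile: count is private to Counter
    }
}

public class ScopeDemo {
    public static void main(String[] args) {
        System.out.println(new FastCounter().increment()); // 10
    }
}
```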

Types are respected

Same here. You can’t cast a Fragment to an Activity if it’s not an Activity. Failure to comply will throw a ClassCastException at runtime.

What you can do is test whether object is of type Activity before casting:

if(object instanceof Activity) {
    Activity activity = (Activity) object;
}
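The same check, runnable outside Android, with hypothetical Animal/Dog classes standing in for Activity and its kin:

```java
public class CastDemo {
    static class Animal {}
    static class Dog extends Animal {}

    // test before casting, the safe way
    static String describe(Animal a) {
        if (a instanceof Dog) {
            return "a Dog";
        }
        return "just an Animal";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Dog()));    // a Dog
        System.out.println(describe(new Animal())); // just an Animal
        try {
            Dog d = (Dog) new Animal();             // blind cast: not actually a Dog
        } catch (ClassCastException e) {
            System.out.println("ClassCastException"); // the runtime refuses the lie
        }
    }
}
```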

Interface doesn’t mean the same thing

In ObjC, interface is a declaration: it exposes everything the rest of the program should know about a class. In Java, once something is typed, it’s known; make a modification in a class and it’s seen by every other class in the program. Java interfaces are more like formal protocols: if you define a class that implements an interface, it has to implement all the methods of the interface.

public interface MyInterface {
    public void myMethod();
}

public class MyClass implements MyInterface {
    public void myMethod() {
        // mandatory, even if empty
    }
}

There are some weird funky anonymous classes!

Yup. It’s commonplace to have something like:

getActivity().runOnUiThread(new Runnable() {
    public void run() {
        // do something
    }
});

What it means is “I think I don’t need to create an actual class that implements the Runnable interface just to call a couple of methods”, or “I’m too lazy to actually create a separate class”, or “this anonymous class needs to access a private variable of the containing class”.

Let me substantiate the last part (and pointedly ignore the first two): according to one of the above paragraphs, a private variable isn’t accessible outside of that class. So a class, anonymous or not doesn’t have access either, right? Wrong.

A class can have other classes (of any scope: public, protected, or private) defined within it. These nested classes, if you will, are part of the class scope, and therefore have access to the private variables.

public class MyClass {
    private int count;

    public MyClass(int c) {
        this.count = c;
    }

    public void countToZero() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while(count >= 0) {
                    count--; // the anonymous class can touch the outer private field
                }
            }
        }); // anonymous class implementing Runnable
        t.start(); // start and forget
    }

    // Just for fun
    public class MySubClass {
        private int otherCount;

        public void backupCount(MyClass c) {
            otherCount = c.count; // nested classes see private fields of the enclosing class
        }
    }
}

is a perfectly valid class. And so is MyClass.MySubClass.

Beware of your imports

Unfortunately, because of the various cross- and back-support libraries, some classes are not the same (from a system point of view) but share the same name. For example, if you compile using ICS (4.0+) and want to support older versions, chances are you’ll have to use the Support Library, which backports some mechanisms such as Fragment.

It means that you have two classes named “Fragment”:

  • android.support.v4.app.Fragment
  • android.app.Fragment

From a logical point of view they are the same. But they don’t have the same type. Therefore, they are not related in terms of inheritance. Therefore they are not exchangeable.

And since both have the same simple name, you cannot import both: in Java, two single-type imports with the same simple name are a compile-time error. So this does not compile:

import android.support.v4.app.Fragment;
import android.app.Fragment; // error: Fragment is already defined

You import one of the two, and refer to the other by its fully qualified name wherever you need it. It helps to see imports as macros: you tell the compiler that “Fragment” expands into “android.support.v4.app.Fragment”. Two macros with the same name would clash, and here the compiler refuses the clash outright.
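The same collision exists in the JDK itself, between java.util.List and java.awt.List, which makes for a small runnable illustration:

```java
import java.util.ArrayList;
import java.util.List;          // "List" now means java.util.List everywhere below

public class ImportDemo {
    // the AWT List has to be spelled out in full; importing it too would not compile
    static java.awt.List widget; // declared only, never instantiated

    static List<String> names() { // java.util.List, thanks to the import
        List<String> l = new ArrayList<>();
        l.add("hello");
        return l;
    }

    public static void main(String[] args) {
        System.out.println(names().size()); // 1
    }
}
```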

Exceptions are important

A Java program never crashes the way a C program does: an uncaught exception bubbles up to the top-level calling function, and the VM then exits with a bad status code.

On the other hand, the IDE gives you a lot of warnings and errors at code-writing (i.e. compile) time that should be heeded. Most methods declare the kinds of exceptions they might throw, and not catching them (or not passing them on to the caller) isn’t allowed. There are very few exceptions to this rule, most of them of the “you didn’t program defensively enough” or “this is a system quirk” variety.

Say a method reads

public void myMethod() throws IOException;

The calling method has to either be something like

public void myOtherMethod() throws IOException {
    myMethod();
}

or it has to catch the exception

public void myOtherMethod() {
    try {
        myMethod();
    } catch(IOException e) {
        // do something
    }
}

Of course, catching a more generic exception (Exception being a parent of IOException, for instance) means fewer catch clauses.

The exceptions that aren’t explicit are (mostly) unchecked ones like NullPointerException (you tried to call a method on a null object, you bad programmer!) or IndexOutOfBoundsException (trying to access the 10th element of a 5-long array, eh?). The other category is more system-related: you can’t change a view’s attributes outside of the UI thread, or make network calls on the UI thread, that kind of thing.
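Put together, the checked-exception dance looks like this — IOException is real, the method names are mine:

```java
import java.io.IOException;

public class ExceptionDemo {
    // a checked exception must be declared by the method that may throw it...
    static void mayFail(boolean fail) throws IOException {
        if (fail) throw new IOException("boom");
    }

    // ...and the caller must catch it (or declare it in turn)
    static String attempt(boolean fail) {
        try {
            mayFail(fail);
            return "ok";
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(attempt(false)); // ok
        System.out.println(attempt(true));  // boom
    }
}
```

Remove either the throws clause or the catch and the compiler, not the runtime, is the one that complains.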

To Be Continued!

Next time, we’ll see the project management side of things!


[CoreData] Honey, I Shrunk The Integers

Back in the blissful days of iOS4, the size you assigned to integers in your model was simply ignored: in the SQLite backend, there are only two sizes anyway – 32 bits or 64 bits. So even if you had Integer16 fields in your model, they were represented as Integer32 internally.

Obviously, that’s a bug: the underlying way the data is stored shouldn’t have any impact on the way you use your model. However, since using an Integer16 or an Integer32 in the model didn’t make any difference, a more insidious family of bugs was introduced: the “I don’t care what I said in the model, it obviously works” kind of bug.

Fast forward to iOS5. The mapping model (the class that acts as a converter between the underlying storage and the CoreData stack) now respects the sizes that were set in the model. And the insidious bugs emerge.

A bit of binary folklore for these people who believe an integer is an integer no matter what:

Data in a computer is stored in bits (0-1 value) grouped together in bytes (8 bits). A single byte can have 256 distinct values (usually [0 – 255] or [-128 – 127]). Then it’s power-of-two storage capacities: 2 bytes, 4 bytes, 8 bytes, etc…

Traditionally, 2 bytes is called a half-word, and 4 bytes is called a word. So you’ll know what it is if it seeps in the discourse somehow.

2 bytes can take 65 536 values ([0 – 65 535] or [-32 768 – 32 767]); 4 bytes can go much higher ([0 – 4 294 967 295] or [-2 147 483 648 – 2 147 483 647]). If you were playing with computers in the mid-to-late nineties, you must have seen your graphics card offering “256 colors” or “thousands of colors” or “millions of colors”. That came from the fact that one pixel was represented in either 8, 16, or 32 bits.

Now, the peculiar way bits work is that they are given a value modulo their maximum width. On one bit, this is given by the fact that:

  • 0 + 1 = 1
  • 1 + 1 = 0

It “loops” when it reaches the highest possible value, and goes to the lowest possible value. With an unsigned byte, 255 + 1 = 0, with a signed byte, 127 + 1 = -128. This looping thing is called modulo. That’s math. That’s fact. That’s cool.

Anyway, in the old days of iOS4, the CoreData stack could assign a value greater than the theoretical maximum of a field and live peacefully with it. Not only that, but you could read it back from storage as well. You could, in effect, have an Integer16 (see above for min/max values) that would behave as an Integer32 (ditto).

Interestingly enough, since this caused no obvious concern to people writing applications, some applications working fine on iOS4 stopped working altogether on iOS5: read the value 365232 into an Integer16 and you get 37552. If your value had any kind of meaning, it’s busted. The most common casualty of this truncation is the widespread habit of using IDs instead of relations: you load the page with ID x, not the n-th child page.
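The truncation is easy to reproduce. Java here, since that’s the language of the rest of this series, but any fixed-width integer behaves the same; Java’s short is signed 16-bit, so reading it “unsigned” like CoreData did takes a mask:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        short max = 32767;                             // highest signed 16-bit value
        System.out.println((short) (max + 1));         // -32768: wrapped around, modulo 65536
        System.out.println(((short) 365232) & 0xFFFF); // 37552: the 16-bit truncation above
    }
}
```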

So, your code doesn’t work anymore. Shame. I had to fix such a thing earlier, and it’s not easy to come up with a decent solution: I couldn’t change the model (migrating would copy the truncated, and therefore wrong, values over), and I didn’t have access to, or the luxury of rebuilding, the data used to generate the SQLite database.

The gung-ho approach is actually rather easy: fetch the real value in the SQLite database. If your program used to work then the stored value is still good, right?

So, I migrated this:

NSPredicate *pred = [NSPredicate predicateWithFormat:@"id = %hu", [theID intValue]];
[fetchRequest setPredicate:pred];
NSError *error = nil;
matches = [ctx executeFetchRequest:fetchRequest error:&error];

into this:

NSPredicate *pred = [NSPredicate predicateWithFormat:@"id = %hu", [theID intValue]];
[fetchRequest setPredicate:pred];
NSError *error = nil;
matches = [ctx executeFetchRequest:fetchRequest error:&error];
// in case for some reason the value was stored improperly
if([matches count] == 0 && otherWayToID.length > 0) { // here, I also had the title of the object I'm looking for
    int realID = -1;
    NSString *dbPath = [[[[ctx.persistentStoreCoordinator persistentStores] objectAtIndex:0] URL] path];
    FMDatabase *db = [FMDatabase databaseWithPath:dbPath];
    if (![db open]) {
        NSLog(@"Could not open db.");
        return; // bail out, nothing more we can do
    }
    FMResultSet *rs = [db executeQuery:@"select * from ZOBJECTS where ZTITLE = ?", otherWayToID];
    while ([rs next]) {
        realID = [rs intForColumn:@"zid"];
    }
    [rs close];
    [db close];
    if(realID >= 0) {
        pred = [NSPredicate predicateWithFormat:@"id = %u", realID];
        [fetchRequest setPredicate:pred];
        error = nil;
        matches = [ctx executeFetchRequest:fetchRequest error:&error];
    }
}

In this code, I use Gus Mueller’s excellent FMDatabase SQLite3 wrapper. Obviously, you have to adapt the table name (Z<entity> with CoreData), the column name (Z<field> with CoreData), and the type of the value (I went from unsigned Integer16 to unsigned Integer32 here).

Luckily for me (it’s still a bug though, I think), CoreData will accept the predicate with the full value, because it more or less just forwards it to the underlying storage mechanism.

Hope this helps someone else!