The Long Journey (part 3)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. Now that my first full-fledged Android application is (mostly) behind me, I figured a write-up might be helpful for other iOS devs out there with an understandable apprehension about Android development.

Basically, it boils down to three fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be approached like web projects, rather than like monolithic iOS apps
  • Yes, fragmentation of the device-scape sucks

Fragmentation is the main problem

It’s been said plenty of times: the Android market is widely spread in terms of hardware and OS versions. My take is that device fragmentation is the obvious downside of Android’s device-agnostic philosophy, not a bad thing in and of itself. An SDK that lets you write applications for everything from phones to TVs has a lot of upsides as well.

The thing that really irks me is the OS fragmentation. Android is evolving fast. It started out pretty rough (I still shiver at the week I spent with Donut, version 1.6), but Jelly Bean feels like a mature system. However, according to the stats out there, half the devices in circulation still run 2.3, which lacks quite a lot of features… I’ll try to avoid ranting too much about back-porting to or supporting older systems, even though I have to say most of my fits of rage originated in that area.

Designing with fragmentation in mind

As I said in part 2, my belief is that designing for a wide range of screen sizes is closely related to designing for the web: you can choose to go unisize, but you’d better have a very good reason to do so.

What I think I will recommend to designers I work with on Android applications is to build interfaces arrayed around an extensible area. Stick the buttons and controls on the sides, or have a scrollable area with no preconceived size in one of the two directions. Think text editors, or itemized lists like news feeds. Having only a couple of form factors is a luxury, when you think about it: desktop applications don’t have it, web applications don’t have it, and iOS might not have it for long. Most of the designers I used to work with were print-oriented (working with a page size in mind), but nowadays it tends to be easier to talk about extensibility and dynamic layouts. Still, if all you’ve done recently is iOS applications, it might be a little painful at first to go back to wondering where items land when the window resizes. Extra work, but important work.

Coding with fragmentation in mind

According to Wikipedia, less than a third of users are actually running 4.0 and above. The previous version accounts for something like a quarter, and the vast majority of the rest runs 2.x.

The trouble is, many things considered “normal” by iOS developers only appeared in Android 3.0 or 4.0. The ActionBar alone (which can act like a tab bar, a button bar, or both) is 3.0 and above (SDK version 11). When you think about it, it’s mind-boggling: half the users out there are looking at custom-coded tab bars… That’s why Google has been providing backwards-compatibility libraries left and right to support the minimal functionality of all these controls we take for granted.

But it also means that, as a developer, you have to be constantly on the lookout for the capabilities of your minimal target device.
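In practice, that lookout often boils down to a runtime guard on the API level. A minimal sketch using the real Build.VERSION constants (the two setup methods are hypothetical names for this example):

```java
// Runtime check: only touch 3.0+ APIs (like the native ActionBar) when they exist.
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.HONEYCOMB) {
    setupNativeActionBar();   // hypothetical method using the SDK 11+ ActionBar
} else {
    setupCustomActionBar();   // hypothetical fallback for 2.x devices
}
```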

There. I didn’t rant too much. Sighs with relief.

But wait, that’s not the only problem with fragmentation! The other problem is power and RAM. Graphical goodies are awesome for the user, but they use an inordinate amount of memory and processing power. As a result, the OS constantly meddles with your UI elements: they get destroyed or shelved, and switching between two tabs rapidly might mean the views are recreated from the XML each and every time. So you, as a developer, have to keep constant track of the state of the items onscreen. Again, this might be counterintuitive to iOS developers, who are pretty much guaranteed to have all the views in the hierarchy pre-loaded, even if some of their memory may have been dumped (after the app has been informed the system needed space and given a chance to clean things up… otherwise the app is killed, to keep things simple).

The funny part is that this is completely obvious. We are, after all, running on devices that, for all their power compared to my trusty (Graphite, clamshell) iBook of 2000, are still tight and limited. Last millennium, I ranted a lot at developers who didn’t understand that not every computer out there is as powerful as their shiny new G4 with AltiVec enabled. With mobile computing, we’re back to having to worry about supporting both the 3GS and the iPhone 5, and on Android, the area to cover is just wider.

Debugging with fragmentation in mind

Last, but not least, all of the previous points mean that making sure your app works well enough on all your target devices requires a lot of testing. And I mean a lot.


The few big Android projects I have seen from the inside had at least a dozen devices on-site to test and debug on. The wide variety of hardware components, combined with the specificities of each manufacturer (who like to override the standard hooks in the system with their branded replacements), made that a requirement for actual grown-up development.

Maybe that’s why Android devices outsell iOS devices: every Android developer out there owns a bunch of phones and a bunch of tablets, whereas iOS devs have at most two devices to test on: the oldest supported one and the one the dev actually uses.


Sorry. That troll might be my subconscious’s way of punishing me for reining in the rants earlier.

For the (in theory) somewhat simple project I did all the coding on, we had to conscript a lot of friends for testing purposes. Most of the time it turned out to be necessary: the wide variety of screen sizes, component parts, connectivity options, OS versions, and system configurations (on both the manufacturer side and the user side), while it made for a fascinating survey, provided us with a broad spectrum of potential problems. And it wasn’t made any easier by the fact that setting up a computer to pull the device’s logs requires some technical skills on the testers’ part.

But, again, finding a way to work well on most devices forces developers to be careful with CPU and RAM, and designers to be more focused and precise in their work, which is something I’m all for. Java gets a lot of crap for being slow and clunky, but the reality is that developers have always taken a number of things for granted, including a competent garbage collector and some cool syntactic sugar, which made most of the programs I reviewed really bloated and… careless. I think that if a good Android app exists out there, there are some very talented developers behind it, and they might even be more aware of the constraints embedded/mobile programming thrusts upon us than iOS or console programmers are.

In conclusion, because there must be an end to a story

All in all, I enjoyed developing that app for Android. I happen to like the Java programming language in itself, even though it gets a bad press from the crappy VMs and the crappy “multiplatform” ports out there.

I might be a more masochistic developer than most, but I like that it forces the designer and me to communicate better (because of all the unknowns to overcome), and to write more robust (and sometimes elegant) code.

Of course, in many respects, compared to iOS development (which has its own set of drawbacks), it might not feel like a mature environment. With all its caveats, its backward-compatibility libraries, its absence of a centralized UI “way”, and its fragmented landscape, it forces you to be somewhat of an explorer, in addition to being a developer.

But with 3.0 (and it is especially visible in 4.0), it’s starting to converge rapidly, and one can hope that, once Gingerbread is finally put out of its long-lasting agony, we will have a really complete way of writing apps that work on a very wide range of devices (I can’t wait to see what the Ouya will have in store, for example) with very few tweaks.

But, once again, I used to write stuff in assembly or in proprietary languages and environments for a lot of embedded devices, so my enthusiasm might be biased.


The Long Journey (part 2)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. Now that my first full-fledged Android application is (mostly) behind me, I figured a write-up might be helpful for other iOS devs out there with an understandable apprehension about Android development.

Basically, it boils down to three fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be approached like web projects, rather than like monolithic iOS apps
  • Yes, fragmentation of the device-scape sucks

Android projects as Web projects

This is not about technology, or about pitting app developers against web developers. Android, with its Intents and Activities, layouts and drawables, resources of many kinds, memory management schemes, etc., felt to me like web development should. Now, let me put it out there: as you can see from the surrounding pages, I am not a web developer. But the HTML/CSS/JavaScript model feels much closer to Android development than iOS techniques do.

The basics of View Management

I’ll set aside the hardcore, code-only way of doing things (which also applies to iOS dev). I’m talking about the more standard XML layout + corresponding class here.

In the resources, there is a bunch of subfolders for supporting multiple screen sizes, ratios, input types, and languages. Think responsive design in CSS. Once the correct set of items has been selected, it is handed to the Java class (think JavaScript), which will then access each identified item to modify it.
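Concretely, the same layout or strings file can live in several qualified folders, and the system picks the best match at runtime. A sketch (the folder qualifiers are standard; the file names are made up for the example):

```
res/
  layout/activity_main.xml        <- default layout
  layout-land/activity_main.xml   <- picked when the device is in landscape
  values/strings.xml              <- default strings
  values-fr/strings.xml           <- picked when the device language is French
```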

In the Activity code, this translates to inflating resources, then assigning stuff to them: contents, onClick responders (familiar yet?) and the like. If you don’t do anything, it’s just a static page.

public class MainActivity extends FragmentActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // this will select the activity_main.xml in the most appropriate res folder
        TextView helloLbl = (TextView) findViewById(R.id.helloLbl); // find me the view with that ID (id name assumed here)
        helloLbl.setText("Yay"); // replace the current contents with "Yay"
        // ...
    }
    // ...
}

Graphically, since devices cover a fairly big range of ratios and pixel sizes, having a design that “just works” is difficult. In my opinion (that of a non-designer developer), it’s a lot easier to decide early on that one of the dimensions is infinite, like on a web page (otherwise, precisely tailored boxes change ratio on pretty much every device), or to have controls clustered on one end of the screen and a big scrollable “contents” area that resizes depending on the screen you’re on (kind of like a text editor). Any other arrangement is a bag of hurt…
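In Android’s layout XML, that “fixed controls plus extensible contents” arrangement is typically a weighted LinearLayout; a minimal sketch (the ids and the button text are made up for the example):

```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- The contents area soaks up whatever space the screen offers -->
    <ScrollView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1">
        <TextView
            android:id="@+id/contents"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </ScrollView>

    <!-- The control bar keeps its fixed height on every device -->
    <Button
        android:id="@+id/action"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Do stuff" />
</LinearLayout>
```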

Contents lifecycle

Maybe it’s just the project(s) I have been working on, but the mechanics of the views resemble what I have done and seen in dynamic web apps built around HTML5 and JavaScript, rather than a pure PHP one where the server outputs “static” HTML for the browser to handle:

  • The view is loaded from the XML. At that point, it’s “just” a structure on-screen.
  • The callback methods (onCreate, onResume, …) are called on the Java side
  • The Java code looks for graphical items with an id (or a class), and fills them, or moves/resizes them, or duplicates them…

Some will say that the iOS side works the same way (with XIBs and IBOutlets, etc.), and maybe they are right, to a certain extent. It just feels that the listener approach opens up a much more varied way of doing things, and that listeners are called very often: for example, since a text label resizes according to its contents by default, changing its text triggers a resize/reorganization of the layout, which triggers other changes.

And since any object in the program can be a listener/actuator for these events, there’s a lot of competition and guesswork as to what will actually happen. The text label may respond one way (which can be overridden), its superview/layout engine another (idem), all the way up the chain, which triggers more changes back down the branches. During an ideal (and somewhat simple) non-repeating load-layout-then-fill-data cycle, my onMeasure method (responsible for reporting the “final” size my control wishes to have, depending on a bunch of parameters) was called up to 8 times in very close succession.
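For reference, this is the callback in question; a minimal sketch of a custom View overriding it (the 2:1 ratio policy is made up for the example):

```java
// Inside a custom View subclass: report the size we want to the layout engine.
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    int width = MeasureSpec.getSize(widthMeasureSpec);
    // Whatever width we are offered, ask for half of it in height.
    setMeasuredDimension(width, width / 2);
}
```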

But that same listener mechanism, so pervasive in everything Java, also opens a lot of ways to catch anything that happens in your application, from any object:

  • you can detect a layout’s changes from the buttons it contains
  • you can detect a tap on a (usually non-tappable) control from its graphical neighbor, thus extending the “tappable area”
  • you can react to content changes and limit/alter them on the fly

But so do the other views in the frame, not written by you!
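The same listener idiom exists in plain Java, with nothing Android-specific; a minimal sketch using the standard java.beans classes (the ListenerDemo class and its title property are made up for the example):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Any object can register as a listener and react when the value changes,
// without ListenerDemo knowing who is watching.
public class ListenerDemo {
    private final PropertyChangeSupport support = new PropertyChangeSupport(this);
    private String title = "";

    public void addListener(PropertyChangeListener l) {
        support.addPropertyChangeListener(l);
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String newTitle) {
        String old = this.title;
        this.title = newTitle;
        // fire the event: every registered listener is notified in turn
        support.firePropertyChange("title", old, newTitle);
    }
}
```

Register with an anonymous class, exactly like the Runnable above, and that object gets called on every title change.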

View lifecycle

For memory-obsessed geeks such as myself, the view retain/release cycle took some getting used to. Views are kind of like the tabs in your mobile browser: sometimes you browse a website, open another tab, do some stuff, and when you get back to the first tab it’s completely blank and gets reloaded. Or not. It depends on the browser’s memory-handling techniques, the available RAM and processing power, and the scripts the page is running.

Since the views might be recreated from scratch each and every time, your strategy for holding on to data becomes critical. Do you keep it in instance variables of the controller, potentially hogging RAM but staying instantly accessible? Do you serialize it, every time the view goes away, into the Bundle the system provides for that kind of thing (which I guess is written to disk when RAM is full), then check in all the restoring methods (onCreate, onViewStateRestored, …) whether some data is present, deserialize it, and put it back where it belongs? Do you reload it from whatever web service it came from? Do you serialize it to disk yourself?
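That last option, serializing the data yourself, can be sketched in pure Java (no Android classes involved; the NewsItem class is a made-up example, and a real app would write the bytes to a file rather than keep them in memory):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A bit of state we want to survive the view's destruction.
public class NewsItem implements Serializable {
    private static final long serialVersionUID = 1L;
    public final String headline;

    public NewsItem(String headline) {
        this.headline = headline;
    }

    // Turn the object into bytes before the view goes away.
    public static byte[] freeze(NewsItem item) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(item);
        out.close();
        return bytes.toByteArray();
    }

    // Rebuild the object from the bytes when the view comes back.
    public static NewsItem thaw(byte[] data) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
        return (NewsItem) in.readObject();
    }
}
```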

All of these mechanisms require a lot of testing on various low-memory devices, because things happening out of view will wipe your data from memory if you’re not careful. And although you won’t find yourself holding pointers that look OK but have been freed, some data might be invalid by the time you try to use it.

To Be Continued!

Next time, we’ll discuss the extremely varied range of Android devices your app might be running on!


The Long Journey (part 1)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. Now that my first full-fledged Android application is (mostly) behind me, I figured a write-up might be helpful for other iOS devs out there with an understandable apprehension about Android development.

Basically, it boils down to three fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be approached like web projects, rather than like monolithic iOS apps
  • Yes, fragmentation of the device-scape sucks

Java vs Objective-C

Forget everything you think you know about Java, unless you’ve learned it by actually coding something useful in it. Does it pain you when somebody says Objective-C is not a real language? That C is just a toy for S&M fetishists? Java has been around for a long time, and is therefore mature. What might make it slow (disproved by benchmarks a bunch of times) or a memory hog (it has an efficient garbage collector) is bad programming. I once had to code a full Exposé-like interface in Java, and it ran at 60fps on my 2008 black MacBook.

But for an ObjC developer, it does have its quirks (apart from basic syntax differences).

Scopes are respected

That’s right. Even knowing the pointer address of an object and its table of instance variables won’t help. If it’s private, it’s private. That means that, for once, you’ll have to think about the class design before writing code. Quick reminder:

  • private means visible to this class only
  • protected means visible to this class, its descendants, and the rest of its package
  • package (default) means visible to members of the same package only
  • public means accessible to everybody

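A quick sketch of the four levels side by side (the Counter class is made up for the example; everything here lives in the same package):

```java
// Counter demonstrates the four visibility levels.
public class Counter {
    private int count = 0;       // this class only
    protected int step = 1;      // this class, its descendants, and the same package
    String label = "counter";    // package (default): the same package only

    public int getCount() {      // everybody
        return count;
    }

    public void increment() {
        count += step; // fine: we are inside the class
    }
}

// From an unrelated class, `new Counter().count` would not compile:
// "count has private access in Counter".
```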
Types are respected

Same here. You can’t cast a Fragment to an Activity if it’s not an Activity. Failure to comply will lead to a crash (a ClassCastException).

What you can do is test whether object is of type Activity before casting:

if (object instanceof Activity)
Interface doesn’t mean the same thing

In ObjC, @interface is a declaration: it exposes everything the rest of the program should know about a class. In Java, once a class is typed in, it’s known; make a modification in a class and every other class in the program sees it. Java interfaces are more like formal protocols: if you define a class that implements an interface, it has to implement all the methods of that interface.

public interface MyInterface {
    public void myMethod();
}

public class MyClass implements MyInterface {
    public void myMethod() {
        // mandatory, even if empty
    }
}
There are some weird funky anonymous classes!

Yup. It’s commonplace to have something like:

getActivity().runOnUiThread(new Runnable() {
    public void run() {
        // do something
    }
});

What it means is “I think I don’t need to create an actual class that implements the Runnable interface just to call a couple of methods”, or “I’m too lazy to actually create a separate class”, or “this anonymous class needs to access a private variable of the containing class”.

Let me substantiate the last part (and pointedly ignore the first two): according to one of the paragraphs above, a private variable isn’t accessible outside of its class. So another class, anonymous or not, doesn’t have access either, right? Wrong.

A class can have other classes (of any scope: public, protected, or private) defined within it. These inner classes, if you will, are part of the class’s scope, and therefore have access to its private variables.

public class MyClass {
    private int count;

    public MyClass(int c) {
        this.count = c;
    }

    public void countToZero() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (count >= 0) {
                    count--; // the anonymous class can touch the private field
                }
            }
        }); // anonymous class implementing Runnable
        t.start(); // start and forget
    }

    // Just for fun
    public class MySubClass {
        private int otherCount;

        public void backupCount(MyClass c) {
            otherCount = c.count; // inner classes can read the outer class's private members
        }
    }
}

is a perfectly valid class. And so is MyClass.MySubClass.
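Instantiating such an inner class from outside uses the slightly odd outer.new syntax; a self-contained sketch (Outer and Inner are made-up names for the example):

```java
public class Outer {
    private int secret = 42;

    // A non-static inner class: each Inner is tied to an Outer instance.
    public class Inner {
        public int peek() {
            return secret; // legal: inner classes see the outer class's private fields
        }
    }
}
```

Usage: `new Outer().new Inner().peek()` returns the outer instance’s private value; note the `outer.new Inner()` syntax.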

Beware of your imports

Unfortunately, because of the various cross- and back-support libraries, some classes are not the same (from the system’s point of view) but have the same name. For example, if you compile against ICS (4.0+) and want to support older versions, chances are you’ll have to use the Support Library, which backports mechanisms such as Fragment.

It means that you have two classes named “Fragment”:

android.app.Fragment
android.support.v4.app.Fragment

From a logical point of view they are the same. But they don’t have the same type; therefore, they are not related in terms of inheritance; therefore, they are not interchangeable.

And Java won’t let you just import both: two single-type imports with the same simple name is a compile-time error, and with wildcard imports, an unqualified “Fragment” is ambiguous and gets rejected too. In practice, you import the one you use everywhere and spell out the other’s fully qualified name at each use:

import android.support.v4.app.Fragment;

// ... later, the native class has to be written out in full:
android.app.Fragment nativeFragment;

Exceptions are important

A Java program never really crashes: an uncaught exception bubbles up to the top-level calling function, and the program exits with a bad status code if nothing catches it along the way.

On the other hand, the IDE gives you a lot of warnings and errors at code-writing (i.e. compile) time that should be heeded. Most methods declare the kinds of exceptions they might throw, and not catching them (or not passing them along to the caller) isn’t allowed. There are very few exceptions to this rule, most of them of the “you didn’t program defensively enough” or “this is a system quirk” variety.

Say a method reads

public void myMethod() throws IOException;

The calling method either has to pass the exception along:

public void myOtherMethod() throws IOException {
    myMethod(); // the exception bubbles up to whoever calls myOtherMethod
}

or it has to catch the exception

public void myOtherMethod() {
    try {
        myMethod();
    } catch (IOException e) {
        // do something
    }
}

Of course, catching a more generic exception (Exception being an ancestor of IOException, for instance) means fewer catch clauses.
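A small self-contained sketch of that (CatchDemo, mayFail, and the error message are made-up names for the example):

```java
import java.io.IOException;

public class CatchDemo {
    // A method that declares a checked exception.
    static void mayFail() throws IOException {
        throw new IOException("disk on fire");
    }

    // One catch clause is enough: Exception is an ancestor of IOException.
    public static String attempt() {
        try {
            mayFail();
            return "ok";
        } catch (Exception e) {
            return "caught: " + e.getMessage();
        }
    }
}
```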

The exceptions that aren’t explicit are (mostly) things like NullPointerException (you tried to call a method on a null object, you bad programmer!) or IndexOutOfBoundsException (trying to access the 10th element of a 5-long array, eh?). The other category is more system-related: you can’t change a view’s attributes outside of the UI thread, or make network calls on the UI thread, that kind of thing.

To Be Continued!

Next time, we’ll see the project management side of things!


Sectar Wars II : Revenge Of The Lingo

Every so often, you feel the tremor of a starting troll whenever you express either congratulations or displeasure about a specific SDK, language, or platform.

Back in the days when people developing for Apple’s platforms were very, very few (yea, I know, it was all a misunderstanding), I would get scorned for not having the wondrous MFC classes, Visual Basic, and the other “better” and “easier” ways of getting an application made. You simply couldn’t do anything remotely as good as the Windows equivalent because, face it, Mac OS was a “closed system” with a very poor toolbox and so few potential users. But hey, I was working in print and video, and MacOS had the best users in both fields at the time. And the wonders of QuickTime… sigh

Then it was ProjectBuilder versus CodeWarrior (I still miss that IDE every now and then…). Choosing the latter was supposedly stupid: it was expensive, had minimal support for NIBs, was sooooooo Carbon… But it also had a great debugger, a vastly superior compiler, and could deal with humongous files just fine on my puny clamshell iBook.

Once everyone started jumping on the iOS bandwagon, it was stupid to continue developing for Mac.

Every few months, it’s ridiculous to develop in Java.

There seems to be something missing from the arguments of every single one of these trolls: experience.

Choosing a set of tools for a task is a delicate thing. Pick the wrong language, IDE, or library for a project and you will end up working twenty times as much for the same result. Granted, you can always find a ton of good examples of why this particular choice of yours, at that moment in time, is not ideal. But that doesn’t mean it’s not good in general.

“QuickTime is dead”, but it’s still everywhere in Mac OS. “Java is slow” is the most recurrent one. Well, for my last project I reimplemented the “Spaces” feature in Java. Completely. Cross-platform. And at a decent speed. I’d say that’s proof enough that, when someone puts some care and craft into his/her work, any tool will do.

It all boils down to experience: with your skillset, can you make something good with the tools at your disposal? If the answer is yes, does it matter which tools you use? Let the trolls rant. The fact that they can’t do something good with this or that platform or tool doesn’t mean no one can.