Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but only for short projects or maintenance. Now that my first full-fledged Android application is (mostly) behind me, I figured a write-up might be helpful for other iOS devs out there with an understandable apprehension about Android development.
Basically it boils down to 3 fundamental observations:
- Java is a lot stricter as a language than Objective-C, but allows for more “catching back” (recovering from errors at runtime)
- An Android project should be thought of the way you would think of a web project, rather than as a monolith like an iOS app
- Yes, fragmentation of the device-scape sucks
Fragmentation is the main problem
It’s been said a bunch of times: the Android market is widely spread in terms of hardware and OS versions. My opinion on the matter is that device fragmentation is the obvious downside of the philosophy behind a hardware-agnostic OS, and not a bad thing in and of itself. Having an SDK that allows you to write applications for things ranging from phones to TVs has a lot of upsides as well.
The thing that really irks me is the OS fragmentation. Android is evolving fast. It started out pretty much horrible (I still shiver at the week I spent with Donut, version 1.6), but Jelly Bean feels like a mature system. However, according to the published stats, half of the devices out there are still running 2.3, which lacks quite a lot of features… I’ll try to avoid ranting too much about back-porting or supporting older systems, even though I have to say most of my fits of rage originated in that area.
Designing with fragmentation in mind
As I said in part 2, my belief is that designing for a large range of screen sizes is closely related to designing for the web: you can choose to go one-size-fits-all, but you’d better have a very good reason to do so.
What I think I will recommend to designers I work with on Android applications is to create interfaces arrayed around an extensible area. Stick the buttons and stuff on the sides, or have a scrollable area without any preconceived size in one of the two directions. Think text editors, or itemized lists like news feeds. It’s a luxury to have only a couple of form factors, when you think about it: desktop applications don’t have it, web applications don’t have it, and iOS might not have it for long. Where in the past most of the designers I worked with were print-oriented (working with a page size in mind), nowadays it tends to be easier to talk about extensibility and dynamic layouts. But if all you’ve done recently is work on iOS applications, it might be a little painful at first to go back to wondering where items should go when the window resizes. Extra work, but important work.
Coding with fragmentation in mind
According to Wikipedia, less than one third of users are actually running 4.0 and above. The previous version accounts for something like a quarter, and the vast majority of the rest run 2.x.
The trouble is, many things that are considered “normal” by iOS developers only started appearing in 3.0 or 4.0. The ActionBar alone (which can act like a tab bar, a button bar, or both) is 3.0 and above (SDK version 11). When you think about it, it’s mind-boggling: half of the users out there are looking at custom-coded tab bars… That’s why Google has been providing backwards-compatibility libraries left and right, to support the minimal functionality of all these controls we take for granted.
But it also means that, as a developer, you have to be constantly on the lookout for the capabilities of your minimum target device.
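The pattern this leads to can be sketched in a few lines. On a real device you would test `android.os.Build.VERSION.SDK_INT` against the API level a feature appeared in; the hard-coded constants and the class below are illustrative stand-ins so the sketch runs as plain Java:

```java
// A minimal sketch of the capability-check pattern, in plain Java so it runs
// anywhere. On a device you would read android.os.Build.VERSION.SDK_INT;
// the hard-coded SDK_INT here is a stand-in for demonstration purposes.
public class CapabilityCheck {
    static final int SDK_INT = 10;   // pretend we are on Gingerbread (2.3.x)
    static final int HONEYCOMB = 11; // API level that introduced the ActionBar

    static String chooseNavigation() {
        // Gate each "modern" feature on the API level it appeared in,
        // and fall back to a hand-rolled or compatibility-library version.
        return SDK_INT >= HONEYCOMB
                ? "native ActionBar"
                : "custom tab bar, or a compatibility library";
    }

    public static void main(String[] args) {
        System.out.println("Using: " + chooseNavigation());
    }
}
```

Multiply that little fork by every feature your minimum target lacks, and you get a feel for the bookkeeping involved.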
There. I didn’t rant too much. Sighs with relief.
But wait, that’s not the only problem with fragmentation! The other problem is power and RAM. Graphical goodies are awesome for the user, but they also use an inordinate amount of memory and processing power. As a result, the OS constantly does things to the UI elements: they get destroyed, shelved, or worse, and switching rapidly between two tabs might end up recreating the views from the XML each and every time, which means that you, as a developer, have to keep constant track of the state of the items onscreen. Again, this might be counterintuitive to iOS developers, who are pretty much guaranteed to have all the views in the hierarchy pre-loaded, even though some of their memory might have been dumped (after the app has been informed the system needed space and given a chance to clean things up… otherwise the app is killed, to keep things simple).
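The save/restore dance this imposes looks roughly like the sketch below. It assumes nothing from Android: the `Map` stands in for `android.os.Bundle`, and the two methods mirror (but are not) the real framework callbacks of the same names.

```java
import java.util.HashMap;
import java.util.Map;

// A plain-Java sketch of keeping UI state alive across view destruction.
// The Map is a stand-in for android.os.Bundle; the method names echo the
// real Activity/Fragment callbacks but everything here is illustrative.
public class TabState {
    int scrollPosition;   // transient UI state the system will happily discard
    String draftText;

    void onSaveInstanceState(Map<String, Object> outState) {
        // In the real framework, called just before views may be torn down.
        outState.put("scroll", scrollPosition);
        outState.put("draft", draftText);
    }

    void onRestoreInstanceState(Map<String, Object> savedState) {
        // Called after the views are recreated from the XML.
        scrollPosition = (Integer) savedState.getOrDefault("scroll", 0);
        draftText = (String) savedState.getOrDefault("draft", "");
    }

    public static void main(String[] args) {
        TabState tab = new TabState();
        tab.scrollPosition = 42;
        tab.draftText = "half-written reply";

        Map<String, Object> bundle = new HashMap<>();
        tab.onSaveInstanceState(bundle);      // system destroys the tab's views

        TabState recreated = new TabState();  // views rebuilt from scratch
        recreated.onRestoreInstanceState(bundle);
        System.out.println(recreated.scrollPosition + " / " + recreated.draftText);
        // prints "42 / half-written reply"
    }
}
```

Forget to route one field through that round-trip and the user comes back to a blank tab, which is exactly the kind of bug that only shows up on the devices that are tightest on RAM.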
The funny part is that this is completely obvious. We are, after all, running on devices that, for all their power compared to my trusty (Graphite, clamshell) iBook of 2000, are still tight and limited. Last millennium, I ranted a lot at developers who didn’t understand that not every computer out there was as powerful as their shiny new G4 with AltiVec enabled. With mobile computing, we’re back to having to worry about supporting both the 3GS and the iPhone 5, and for Android, the area to cover is just wider.
Debugging with fragmentation in mind
Last, but not least, all of these points mean that making sure your app works well enough on all your target devices requires a lot of testing. And I mean a lot.
The few big Android projects I have seen from the inside had at least a dozen devices on-site to test and debug on. The wide variety of hardware components, combined with the specificities of each and every manufacturer (who like to override the standard hooks in the system with their branded replacements), made that a requirement for actual grown-up development.
Maybe that’s why Android devices sell more than iOS devices: each Android developer out there owns a bunch of phones and a bunch of tablets, whereas iOS devs have at most two devices to test on, the oldest supported one and the one the dev is actually using.
Sorry. That troll might be my subconscious punishing me for reining in the rants earlier.
For the somewhat (in theory) simple project I did all the coding on, we had to conscript a lot of friends for testing purposes. And most of the time, it turned out to be genuinely necessary: the wide variety of screen sizes, component parts, connectivity options, OS versions, and system configurations (both on the manufacturer side and the user side), while making for a fascinating survey, provided us with a broad spectrum of potential problems. Of course, it wasn’t made any easier by the fact that setting up a computer to pull the logs off a device requires some technical skills on the testers’ part.
But, again, finding a way to work fine on most devices forces developers to be careful with CPU and RAM, and designers to be more focused and precise in their work, which is something I’m all for. Java gets a lot of crap for being slow and clunky, but the reality is that developers have always taken a number of things for granted, including a competent garbage collector and some cool syntactic sugar, which made most of the programs I reviewed really bloated and… careless. I think that when a good Android app exists out there, it means there are some very talented developers behind it, and that they might even be more aware of the constraints embedded/mobile programming thrusts upon us than iOS or console programmers.
In conclusion, because there must be an end to a story
All in all, I enjoyed developing that app for Android. I happen to like the Java programming language in itself, even though it gets a bad press because of the crappy VMs and the crappy “multiplatform” ports out there.
I might be a more masochistic developer than most, but I like the fact that it forces both me and the designer to communicate better (because of all the unknowns to overcome), and to write more robust (and sometimes elegant) code.
Of course, in many respects, when you compare it to iOS development (which has its own set of drawbacks), it might not feel like a mature environment. With all its caveats, its backwards-compatibility libraries, its absence of a centralized UI “way”, and its fragmented landscape, it forces you to be somewhat of an explorer, in addition to being a developer.
But with 3.0 (and it’s especially visible in 4.0), things are starting to converge rapidly, and one can hope that, once Gingerbread is finally put out of its long-lasting agony, we will soon have a really complete way of writing apps that work on a very wide range of devices (I can’t wait to see what the Ouya will have in store, for example) with very few tweaks.
But then again, I used to write stuff in assembly or in proprietary languages/environments for a lot of embedded devices, so my enthusiasm might be biased.