
[Game] Kerbal Space Program

So true (it applies to the time I spent reading about and playing with physics simulators, too):

xkcd: KSP

KSP is available here, but don’t click if you fear you might become addicted…

To the Mün and beyond!

  

Bluetooth, or the new old technology

Connected devices are the new thing. Thanks to the growing support of BTLE, we now have thingies that can perform a variety of tasks, and take commands or give out information to a smartphone.

When I started a project that relies on BTLE, my first instinct was to take a look at the documentation for the relevant frameworks and libraries, both on iOS and Android. According to these documentation files, everything is dandy… There are a couple of functions to handle scanning, a few functions/methods/callbacks that manage the (dis)connections, and some read/write functions. Simple and peachy!

Well… Yes and no. Because of its history, BTLE has a few structural quirks, and because of its radio components, a few pitfalls.

The very first thing you have to understand when doing hardware-related projects is that there is a huge number of new (compared to pure software development) things that can go wrong and have to be taken into account: does it matter if the device disconnected in the middle of an operation? If so, what kind of power management strategy should you employ? The failsafe mechanisms in general become highly relevant, since we’re not in a tidy sandbox anymore. Also, hardware developers have a different culture and a different set of priorities, so an adjustment might be necessary.

Bluetooth is an old serial-over-radio thing

Yes. Not only is it radio, with all its potential transmission hiccups, it is also a 20-year-old technology, designed as a wireless RS-232-like protocol. Its underpinnings are therefore way less network-ish than those of most remote-facing APIs. There’s very little in the way of control mechanisms, and no guarantee the packets will arrive in a complete and orderly fashion.

As far as I can understand it, it’s kind of like AT commands on a modem, and therefore prone to errors.

On top of it, BTLE adds a “smart” layer (smart being the name, not a personal opinion), which has a very singular purpose: syncing states.

Again, I see it from the vantage point of someone with a few successful RC projects under his belt, not of an expert in the BTLE stack of the OS or the device. But as far as I can see, BTLE is a server over a serial connection that exposes a hierarchy of storage units. These storage units have contents, and the structure of the storage, as well as the bytes it contains, are periodically synced with the ghost copy at the other end of the link.

So, a device handles its Bluetooth connection as usual (pairing, bonding, etc.), then exposes a list of services (top-level storage containers of the tree, akin to groups), each of which contains characteristics (a storage space with a few properties, a size, and a content).

In theory, once you’re connected to the device, all you have to do is handle the connection, and read/write bytes (yep, bytes… No high level thing here) in characteristics, for Things to happen.
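To make that concrete, here is a minimal sketch of what that dance looks like on the Android side (the iOS flow, with CBCentralManager and CBPeripheral, is conceptually the same); the class name and the service/characteristic UUIDs are placeholders, the real ones come from your hardware team:

import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattService;
import android.bluetooth.BluetoothProfile;
import android.content.Context;
import java.util.UUID;

public class DeviceLink extends BluetoothGattCallback {
    // Placeholder UUIDs, for illustration only
    private static final UUID SERVICE_UUID = UUID.fromString("0000ffff-0000-1000-8000-00805f9b34fb");
    private static final UUID CHAR_UUID    = UUID.fromString("0000fffe-0000-1000-8000-00805f9b34fb");

    public void connect(Context context, BluetoothDevice device) {
        device.connectGatt(context, false, this); // false = do not auto-reconnect
    }

    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            gatt.discoverServices(); // ask for the service/characteristic tree
        }
        // STATE_DISCONNECTED can arrive at any time, for any reason; plan for it
    }

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        BluetoothGattService service = gatt.getService(SERVICE_UUID);
        BluetoothGattCharacteristic characteristic = service.getCharacteristic(CHAR_UUID);
        characteristic.setValue(new byte[] { 0x01, 0x02 }); // raw bytes, no higher-level format
        gatt.writeCharacteristic(characteristic); // the result arrives in onCharacteristicWrite
    }

    @Override
    public void onCharacteristicWrite(BluetoothGatt gatt, BluetoothGattCharacteristic c, int status) {
        // only now is it safe to queue the next write
    }
}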

As you can see, having a good line of communication with the hardware developers is a necessity: everything being bytes means being sure about the format of the data, and knowing which characteristics require a protocol of their own, as well as which writes will trigger a reboot.

All in all, provided you are methodical and open minded enough, it can be fun to figure out a way to copy megabytes of data using two characteristics that can handle 16 bytes at the most. Welcome back to the dawn of serial protocols!

BTLE is all about data sync

Since the device is likely to be a state machine, most of the APIs mirror that: the overall connection has disconnected, connecting, connected, and disconnecting states, and synchronizing the in-system copy of the data with the in-device copy is highly asynchronous. Not only that, but there’s no guarantee as to the order in which packets are transmitted or received, or as to their timing. You are a blind man playing with the echo, here.

If you want to transmit long structured data, you have to have a protocol of your own on top of BTLE. This includes, but is not restricted to, in-device offset management, self correcting algorithms, replayable modularity, etc etc.
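As an illustration, here is a sketch of the sending half of such a protocol, assuming a made-up 16-byte packet with a 4-byte offset header; the real framing depends entirely on what the firmware expects:

import java.util.ArrayList;
import java.util.List;

public class ChunkedWriter {
    private static final int CHUNK_SIZE = 16;  // what the characteristic can hold
    private static final int HEADER_SIZE = 4;  // big-endian offset, so the device can spot gaps

    public static List<byte[]> split(byte[] payload) {
        List<byte[]> packets = new ArrayList<byte[]>();
        int dataPerPacket = CHUNK_SIZE - HEADER_SIZE;
        for (int offset = 0; offset < payload.length; offset += dataPerPacket) {
            int length = Math.min(dataPerPacket, payload.length - offset);
            byte[] packet = new byte[HEADER_SIZE + length];
            packet[0] = (byte) (offset >> 24);
            packet[1] = (byte) (offset >> 16);
            packet[2] = (byte) (offset >> 8);
            packet[3] = (byte) offset;
            System.arraycopy(payload, offset, packet, HEADER_SIZE, length);
            packets.add(packet);
        }
        return packets;
    }
    // Each packet is then written to the characteristic one at a time, waiting for the write
    // acknowledgment before sending the next, and re-sending from the last known good offset
    // if the connection drops in between.
}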

Not to mention that, more often than not, the OS tells you you are disconnected after an inordinate amount of time, and sometimes never does. So constant monitoring of the connection state is also paramount.

Last but not least, background execution of said communication is patchy at best. Don’t go into that kind of development expecting an easy slam dunk, because that is the very best way to set some customer’s house on fire, the thingie you are managing having been stuck on “very hot” in the middle of a string of commands without you ever realizing it.

The software side, where it’s supposed to come together

Let’s imagine you mastered the communication completely, and know for sure the state and values of the device at any given time. First, let me congratulate you. Applaud, non-ironically, even.

Conveying the relative slowness of serial communications, in the age of high-speed networking, in a manner the user will a) accept and b) understand is no small feat. It takes up to 4 minutes for us to push 300k of data in mega-slow-but-safe mode. Reading is usually 10 times faster, but we are still talking orders of magnitude above what’s considered normal on these devices for these data sizes.

One trick is to make the data look big: plenty of different ways to visualize the same 12 entries. Another is to make it look like the app is waiting for a lot of small data points to compute a complex-looking graph. All in all, it’s about making the user feel at home with the comparatively long “doing nothing visible” moments.

What I am hoping for the (very near) future

Bluetooth has always felt kind of wonky and temperamental. It actually has a very stable radio side; the problem largely lies in the total absence of the kind of control structures that protocols like TCP use to recover from errors, or from slow and changing connections. At its core, the whole system seems to be built on the assumption that “once the connection is established, everything will be alright”. A lot of effort has therefore been put into discovery and power management issues, rather than into any kind of self-correcting way to talk. It is a very complex system intended to establish a serial connection, with a level of abstraction on top of it in the form of a “server” that organizes the data in a somewhat discrete fashion. And that’s it.

If changing the protocol is too hard, and I’m totally ready to assume that it is, then the API providers need to figure out a way to manage the various states that better reflects the reality of the transmission. Otherwise it’s a permanent struggle to circumvent the system or to coax it into doing what you know should happen.

  

[iOS7] Woes Unto Me, For I Am Undone

While I’m waiting for a full restore of my iPhone that will hopefully get me back to a usable state, there are a couple of things you should know about iOS programming that have subtly changed.

First off: I like the new interface. I like the new OS. So this is not a mindless rant.

[UIKit]

UIKit has never been threadsafe. Ever. This is why we got used to performSelectorOnMainThread back in the pre-block days, then dispatch_async. The non-threadsafety of UIKit is a total mystery to me, seeing that I should be able to change text in labels, swap images, and the like, mark the views as dirty, and let the main run loop take care of the updates however long afterwards. That being said, Android does the same, so there must be a technical reason I’m not seeing. However, Android doesn’t even let me change anything: it’s an immediate exception at runtime. UIKit, on the other hand, lets me do pretty much anything I want, then crashes with a cryptic message, when there is one. After digging and step-by-step debugging, I usually find that what I assumed runs on the main thread does not, in fact.

But that’s the kicker, right here. Up till iOS 6, some conventions (or habits, to be more accurate) led me to believe that this callback or notification would always be called on the main thread, thus not forcing me to use a very performance-expensive dispatch block. And then… it changes. The app starts behaving erratically, when it doesn’t outright crash. And debugging that erratic behavior is difficult, because it’s basically a race condition. It could work just fine on an iPhone 4 but not on a 5, and vice versa.

For a new codebase, I guess the problem doesn’t arise as much, but for us developers who work on older projects, sifting through thousands of lines of code to figure out which one causes that is time consuming, to say the least.

Case in point: navigation controllers. They used to be somewhat thread-safe, pushing and popping being stacked and executed one after the other. Of course, every now and again, you would have to tweak a little bit, make sure some essential data was loaded before pushing the next one, but nothing illogical.

In iOS 7, it’s totally asynchronous and unstacked. And unless you set up a delegate to make sure you detect the end of the animation, you can end up in very dark places. As soon as you have a more complex navigation than simple tree walking, something that worked reasonably well till now becomes very hard to maintain.

Let’s say I have a custom navigation bar that reflects some general data (i.e. not just a title with the name of the current screen you’re at). Now let’s imagine a scenario where changing something in one of the leaves of your navigation tree requires a change in another leaf, where the user has to input something. Say you have a weather app: in one leaf you have the general area of interesting data (wind, hygrometry, whatever), and in the other the units you want to use. Changing from one to the other requires a change of units. So, naturally, you want to pop the view controller and push the units one. Except now, you can’t do it too fast. You have to wait for each animation to complete before taking the next action. And you really shouldn’t try to change the title while the animation is running either. (Yes, I know you could set the navigation hierarchy directly, but my rebuttal is the same as the one given afterwards.)

And I hear someone say “well it’s easy, you have the two delegate methods for the navigation controller, so just use them”. Yeaaaaaaaaaah. Weeeeeeeell… The nav controller delegate has to be known at a higher level than the views you will push and pop, right? So basically, the app delegate would be the nav delegate. So the app delegate has to know about the underlying structure and the navigation paths of the whole application? Sorry, I do object-oriented programming; I have no intention of having all my view controllers as instance variables of my app delegate, with a huge switch every time there is a pop animation to determine whether I should wait for it to finish or not.

What would be a good alternative then?

  • If I had a notification I could subscribe to, that’d be swell. And that’s what I implemented at the higher level. But it’s a hack.
  • There could be a lock, too. [UINavigationController waitForAnimationToEnd] for instance.
  • Or a bool telling me whether the nav controller is in transition or not.
  • Hell, even an exception would be better than just putting the app in an uncertain state. At the very least I could break on it to find which one of my thousands of lines should be scrutinized.

Going the multi-app, multi-threaded, asynchronous route is fine by me. But we have to have the tools to do it properly. Even viewDidAppear is not a guarantee that the animation is done and that changing the title or pushing a new view controller on the stack won’t give us a nice Finishing up a navigation transition in an unexpected state. Navigation Bar subview tree might get corrupted.. What did I do wrong that was perfectly fine up until 3 weeks ago? No idea. No exception. No information. Just “hey, don’t do that”. Totally Kafkaesque as far as I’m concerned, since all I did was hit the back button. But now, after this point, any and all navigation-related code will crash. And I’ll spend a few hours or even days figuring out why.

[Networking]

For the same reasons, networking (low level networking mostly, but with stuff creeping up sometimes higher) has changed. I can have network connectivity, tested, working fine in Safari, and a socket that connects to nothing while not timing out. Why? How?

Because that, too, has been moved to a new asynchronous mechanism that offers very little help. My NSURLConnection might appear to be doing nothing, but be active enough so that the system as a whole doesn’t deem it a timeout. And since I have no way of peering into its current state, apart from waiting patiently at the delegate points, the net result is that the app looks as if it’s stuck. Which it is, but not UI stuck. Can’t get out of that mode any other way than kill/restart for now.

I wish I had dug enough to give you more information about it, on par with the above UIKit problem, but the truth is it’s been 2 weeks and I still haven’t had the time to figure it out properly. And I have very little hair left. And it’s all grey.

[Conclusion]

I am for all the new changes. I like them, think they make sense, and overall they improve and expand the possibilities for us developers. But without insider information as to how the thing works under the hood, we have very few ways of tackling these kinds of bugs. And all of them feel… hacky.

By all means, change the APIs, improve them, etc etc, but also give us the tools to do our jobs properly. Everybody wins if the users are happy.

And I wish I didn’t have to kill Mail so often, which seems to be stuck on the same problems I have. I guess that’s a sign that I’m not alone in my struggle.

  

2013 Update (So Far)

It’s been a busy summer. For all of you 5 readers that I still have and who check periodically for the end of the Ice Age on this blog, I’m sorry.

Of course, the main thing that happened is the release of iOS 7 and the imminent arrival of Mavericks, which kept us all very busy indeed. I’ll elaborate on that later.

But beyond this upgrade madness, there was the job-as-usual type of thing. Astrolab isn’t dead; it’s on hiatus while the two other co-hosts and I finished writing our two development books. I’ll link them afterwards if any of you is interested.

Writing a book is a very different experience for me. I’ve written manuals (back in the osxserver Puma days), a lot of programs, and quite a few articles, but nothing quite prepares you for the involvement of writing a book, especially with 3 other people. There’s the actual writing, the fact-checking, the code, the language and “message” tweaking, and the interaction with the editor. All in all, it went rather well, and I’m not ashamed of putting my name on the cover, which is a first for me. We’ll see how that goes.

Speaking of podcasting, I have been experimenting with a different concept, with an excellent friend of mine from the Apple days, now doing stuff that blows my mind each time he speaks about it. So, we are trying to find a way to blow your minds, too, with his knowledge. It will be in French for various reasons, so all of you non-French speakers out there will have to get on board with the language.

And to carry on with the experiments, with a wild bunch of awesome people, we did an app that’s definitely not little: Pas a Pas. It is a combination of a tourism guide, helping you discover some towns, and a literary app, reading to you a text written by a French theatre writer named Christophe Huysman. It features some pretty cool tricks with regards to geolocation and guiding, while keeping you fully immersed in both the view and the text. It was a little hard to birth, but the end result is promising, if not outright genius ;)

As usual, there are a few other projects in the queue that I can’t talk about, but suffice it to say I don’t remember the last time I had a full night’s sleep.

And then, there is the dual combo of iOS 7 and Mavericks. I’ll reserve my judgement on these releases for later (or never, as the case may be), but the upgrade was a little traumatic for freelancers like me. Beyond the wild variations between betas (to be expected during a normal development cycle), the changes and half-fixed transitional problems are still costing me a few hours of sleep each night. Forget the appearance debate; it’s not up to us developers to say whether or not it’s good and/or better than before, the users will decide. But some low-level things (I’m looking at you, autolayout) and mechanisms (some tricks we had to implement to take into account the height of the status bar, for instance) actually cause problems now. And of course the crashes/hangs when using CoreData in a way that will actually work on iOS 5 and 6…

I have no doubt we’ll ride the wave and come out with solutions, but right now it’s so frustrating to support both the old world and the new that I totally understand my colleagues who are working on an iOS 7-only product. And I’m tempted to do the same thing with some of the projects, most definitely. However fast the user base moves from 6 to 7, though, there is still quite a large number of people using apps I have written who will stay on a system/app combination that just works, rather than make the switch and go through the choppy waters of both upgrades.

If I had a complaint, it wouldn’t be that Apple boldly goes where it hasn’t gone before; it’s more that we developers should count for something in such a big transition. Unstable betas, we understand, being developers too and all that. Back and forth on including this or that function, as well… But giving us a week to finalize our products on a version of the OS that is clearly not ready for public consumption (the 7.0.2 version came a couple of weeks afterwards and fixed a lot of things), while completely ignoring our frantic alarms, is detrimental to everybody: the early adopters, once the shine has worn off, will be disappointed, the journalists covering the launch will be merciless, and the developers will be downcast. We need to be better included in this cycle. Being able to submit betas for betas would be a good help for everybody: developers could showcase what they intend to do, while Apple engineers could see how the APIs are (mis)used and communicate on or fix what they think is wrong, etc.

It’s still vital that Apple applications work perfectly from the get go, but it’s also increasingly important to have the users’ apps running up to spec as well. There is money in the hardware and the software.

Alright, got to go back to work. It’s been a pleasure to see you all, and I’ll see you again soon(ish).

  

The Long Journey (part 3)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but it’s only been for short projects or maintenance. My first full fledged Android application now being behind me (mostly), I figured it might be helpful for other iOS devs out there with an understandable apprehension for Android development.

Basically it boils down to 3 fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be thought like you would a web project, rather than monolithic as an iOS app
  • Yes, fragmentation of the device-scape sucks
Fragmentation is the main problem

It’s been said a bunch of times: the Android market is widely spread in terms of hardware and OS versions. My opinion on the matter is that device fragmentation is the obvious downside of the philosophy behind the hardware-agnostic OS development, and not a bad thing in and of itself. Having an SDK that allows you to write applications for things ranging from phones to TVs has a lot of upsides as well.

The thing that really irks me is the OS fragmentation. Android is evolving fast. It started out pretty much horrible (I still shiver at the week I spent with Donut, version 1.6), but Jelly Bean feels like a mature system. However, according to the stats out there, half of the devices in the wild are still running 2.3, which lacks quite a lot of features… I’ll try to avoid ranting too much about back-porting or supporting older systems, even though I have to say most of my fits of rage originated in that area.

Designing with fragmentation in mind

As I said in part 2, my belief is that designing for a large scope of screen sizes is closely related to designing for the web: you can choose to go unisize, but you’d better have a very good reason to do so.

What I think I will recommend to designers I might work with on Android applications is to create interfaces arrayed around an extensible area. Stick the buttons and stuff on the sides, or have a scrollable area without any preconceived size in one of the two directions. Think text editors, or itemized lists like news feeds. It’s a luxury to have only a couple of form factors, when you think about it: desktop applications don’t have it, web applications don’t have it, iOS might not have it for long. Whereas in the past most of the designers I had to work with were more print-oriented (working with a page size in mind), nowadays it tends to be easier to talk about extensibility and dynamic layouts. But if all you’ve done recently is work on iOS applications, it might be a little painful at first to go back to wondering about item placements when you resize the window. Extra work, but important work.

Coding with fragmentation in mind

According to Wikipedia, less than one third of the users are actually running 4.0 and above. The previous version is something like a quarter, and the vast majority of the rest runs 2.x.

The trouble is, many things that are considered “normal” for iOS developers started appearing in 3.0, or 4.0. Just the ActionBar (which can be like a tab bar or a button bar, or both) is 3.0 and above (SDK version 11). When you think about it, it’s mind boggling: half of the users out there have custom-coded tab bars… That’s why Google has been providing backwards-compatibility libs left and right to support the minimal functionalities of all these controls we take for granted.

But it also means that as a developer, you have to be constantly on the lookout for capabilities of your minimal target device.
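In practice that means runtime capability checks sprinkled through the code; a minimal sketch, here deciding whether the native ActionBar is available at all (the class name is a made-up example):

import android.os.Build;

public class Capabilities {
    public static boolean hasNativeActionBar() {
        // The ActionBar appeared in Honeycomb (API level 11, Android 3.0)
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB;
    }
}

Below that threshold, you fall back on the support library or on a hand-rolled bar.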

There. I didn’t rant too much. Sighs with relief.

But wait, that’s not the only problem there is with fragmentation! The other problem is power and RAM. Graphical goodies are awesome for the user, but they also use an inordinate amount of memory and processing power. As a result, the OS constantly does things to the UI elements: they get destroyed, shelved, or something, and switching between two tabs rapidly might end up recreating the views from the XML each and every time, which means that you as a developer have to keep constant track of the state of the items onscreen. Again, this might be counterintuitive to iOS developers, who are pretty much guaranteed to have all the views in the hierarchy pre-loaded, even though some of their memory might have been dumped (after the app has been informed the system needed space and given a chance to clean things up… Otherwise the app is killed, to keep things simple).

The funny part is that this is completely obvious. We are, after all, running on devices that, for all their power compared to my trusty (Graphite, clamshell) iBook of 2000, are still tight and limited. Last millennium, I ranted a lot at developers who didn’t understand that all the computers out there were not as powerful as their shiny new G4 with Altivec enabled. With mobile computing, we’re back to having to worry about supporting both the 3GS and the iPhone 5s, and for Android, the area to cover is just wider.

Debugging with fragmentation in mind

Last, but not least, all of these previous points mean that making sure your app works fine enough on all your target devices means a lot of testing. And I mean a lot.

TROLL BEGINS

The few big Android projects I have seen from the inside had to have at least a dozen devices on-site to test/debug on. The wide variety of hardware components, combined with the specificities of each and every manufacturer (who like to override the standard hooks in the system with their branded replacements), made that a requirement for actual grown-up development.

Maybe that’s why Android devices sell more than iOS devices, since each developer out there owns a bunch of phones and a bunch of tablets, where iOS devs have at the most two devices to test on: the oldest supported one and the one the dev is actually using.

TROLL ENDS

Sorry. That troll might be a way for my subconscious to punish me for reining in the rants earlier.

For the somewhat (in theory) simple project I did all the coding of, we had to conscript a lot of friends for testing purposes. And most of the time, it turned out to be necessary: the wide variety of screen sizes, component parts, connectivity options, OS versions, and system configurations (both on the manufacturer side and the user side), while it made for a fascinating survey, provided us with a broad spectrum of potential problems. Of course it wasn’t made any easier by the fact that setting up a computer to get the log of the device requires some technical skills on the testers’ part.

But, again, finding a way to work fine on most devices forces developers to be careful with CPU and RAM, and designers to be more focused and precise in their work, which is something I’m all for. Java gets a lot of crap for being slow and clunky, but the reality is that developers have always taken a number of things for granted, including a competent garbage collector and some cool syntactic sugar, which made most of the programs I reviewed really bloated and… careless. I think that if a good Android app exists out there, it means that there are some very talented developers behind it, and that they might even be more aware of the constraints embedded/mobile programming thrusts upon us than iOS/console programmers.

In conclusion, because there must be an end to a story

All in all, I enjoyed developing that app for Android. I happen to like the Java programming language in itself, even though it gets a bad press from the crappy VMs, and the crappy “multiplatform” ports out there.

I might be a more masochistic developer than most, but I like the fact it forces both me and the designer to have a better communication (because of all the unknowns to overcome), and to write more robust (and sometimes elegant) code.

Of course, in many respects, when you compare it to iOS development (which has its own set of drawbacks), it might not feel like a mature environment. With all its caveats, its backward-compatibility libraries, its absence of a centralized UI “way”, and its fragmented landscape, it forces you to be somewhat of an explorer, in addition to being a developer.

But with 3.0 (and it’s especially visible in 4.0), it’s starting to converge rapidly, and one can hope that, once Gingerbread is finally put out of its long-lasting agony, we will soon have a really complete way of writing apps that can work on a very wide range of devices (can’t wait to see what the Ouya will have in store, for example) with very few tweaks.

But, once again, I used to write stuff in assembly or proprietary languages/environments, for a lot of embedded devices, and I might have a bias on my enthusiasm.

  

The Long Journey (part 2)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but it’s only been for short projects or maintenance. My first full fledged Android application now being behind me (mostly), I figured it might be helpful for other iOS devs out there with an understandable apprehension for Android development.

Basically it boils down to 3 fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be thought like you would a web project, rather than monolithic as an iOS app
  • Yes, fragmentation of the device-scape sucks
Android projects as Web projects

This is not about technology, or about pitting app developers against web developers. Android, with its Intents and Activities, layouts and drawables, resources of many kinds, memory management schemes, etc., felt to me like web development should be. Now, let me put it out there: as you can see from the surrounding pages, I am not a web developer. But the HTML/CSS/JavaScript way of doing things feels much closer to Android development than iOS techniques do.

The basics of View Management

I’ll set aside the hardcore way of doing things (that also applies to iOS dev), with only code. I’m talking about the more standard XML layout + corresponding class here.

In the resources, there is a bunch of subfolders for supporting multiple screen sizes, ratios, input types, and languages. Think responsive design in CSS. Once the correct set of items has been selected, it is passed along to the Java class (think Javascript) which will then access each identified item to modify it.

In the Activity code, it translates to inflating resources, then assigning stuff to them, including contents, onClick (familiar yet?) responders and the like. But if you don’t do anything, it’s just a static page.

public class MainActivity extends FragmentActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
 
        setContentView(R.layout.activity_main); // this will select the activity_main.xml in the most appropriate res folder
 
        TextView helloLbl = (TextView) findViewById(R.id.hello); // find me the view corresponding to that ID
        helloLbl.setText("Yay"); // replace the current contents with "Yay"
        // ...
    }
 
    // ...
}

Graphically, since devices have a somewhat big range of ratios and pixel sizes, having a design that “just works” is kind of difficult. In my opinion, which is the one from a non-designer developer, it’s a lot easier to decide early on that one of the dimensions is infinite like on a web page (otherwise, precisely tailored boxes change ratios on pretty much every device), or to have controls clustered on one end of the screen and a big scrollable “contents” area that will resize depending on the screen you’re on (kind of like a text editor). Any other arrangement is a bag of hurt…

Contents lifecycle

Maybe it’s just the project(s) I have been working on, but it feels as though the mechanics of the view resembles what I have done and seen in dynamic web apps built around HTML5 and Javascript, rather than a pure PHP one where the server outputs “static” HTML for the browser to handle:

  • The view is loaded from the XML. At that point, it’s “just” a structure on-screen.
  • The callback methods (onCreate, onResume, …) are called on the Java side
  • The Java code looks for graphical items with an id (or a class), and fills them, or moves/resizes them, or duplicates them…

Some will say that the iOS side of development works the same (with XIBs and IBOutlets etc), and maybe they are right, to a certain extent. It just feels that the listener approach gives a potentially much more varied way of doing things, and that they are called very often: for example, since a text label will resize according to its contents by default, it will trigger a resize/reorganizing of the layout, which will trigger other changes.

And since any object in the program can be a listener / actuator for these events, there’s a lot of competition and guesswork as to what will actually happen. The text label may respond a certain way (which can be overridden), its superview/layout engine another way (idem), all the way up the chain, which will trigger some changes down the branches again. During an ideal (and somewhat simple) load layout -> fill data boxes non repeating cycle, my onMeasure method (responsible for giving out the “final” size my control wishes to have depending on a bunch of parameters) was called up to 8 times in very close succession.
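For illustration, here is a minimal custom view showing the onMeasure hook in question (SquareView is a made-up example); even this trivial implementation can expect to be called several times during a single layout pass:

import android.content.Context;
import android.util.AttributeSet;
import android.view.View;

public class SquareView extends View {
    public SquareView(Context context, AttributeSet attrs) {
        super(context, attrs); // the constructor used when inflating from XML
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        int width = MeasureSpec.getSize(widthMeasureSpec);
        // Whatever the parent suggests, ask to be a square based on the available width.
        setMeasuredDimension(width, width);
    }
}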

But that same listener mechanism, so pervasive in everything Java, also opens a lot of ways to catch anything that happens in your application, from any object:

  • you can detect layout changes from the buttons it contains
  • you can detect a tap on a (usually non-tappable) control from a graphically neighboring view, thus extending the “tappable area”
  • you can react to content changes and limit/alter them on the fly

But so do the other views in the frame, not written by you!
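As a small illustration of the last bullet above, here is a sketch of a listener that caps a text field’s content on the fly (LengthCapper and the 16-character limit are arbitrary choices for the example):

import android.text.Editable;
import android.text.TextWatcher;
import android.widget.EditText;

public class LengthCapper implements TextWatcher {
    private static final int MAX_LENGTH = 16;

    public static void attach(EditText field) {
        field.addTextChangedListener(new LengthCapper());
    }

    @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { }
    @Override public void onTextChanged(CharSequence s, int start, int before, int count) { }

    @Override
    public void afterTextChanged(Editable s) {
        if (s.length() > MAX_LENGTH) {
            s.delete(MAX_LENGTH, s.length()); // re-triggers the watcher, but the guard stops the loop
        }
    }
}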

View lifecycle

For memory-obsessed geeks such as myself, the view retain/release cycle took some getting used to: views are kind of like the tabs in your mobile browser. Sometimes, you browse a website, open another tab, do some stuff, and when you get back to the first tab, it’s completely blank, and reloaded. Or not. It depends on the browser’s memory handling techniques and the available amount of RAM, processing power, and the page’s running JavaScript.

Since the views might be re-created from scratch each and every time, the strategy for holding on to some data becomes critical. Do you keep it in instance variables of the controller, potentially hogging the RAM, but readily accessible? Do you serialize it into the Bundle that the system uses for that kind of thing (which I guess is written to disk when the RAM is full) every time the view goes away, and therefore test in all the restoring methods (onCreate, onViewStateRestored, …) whether some data is present, deserialize it and put it where it belongs? Do you reload it from whatever web service it came from? Do you serialize it yourself on disk?
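Here is a minimal sketch of the Bundle strategy, assuming a hypothetical fragment that displays a list of strings:

import java.util.ArrayList;
import android.os.Bundle;
import android.support.v4.app.Fragment;

public class ItemsFragment extends Fragment {
    private static final String KEY_ITEMS = "items";
    private ArrayList<String> items = new ArrayList<String>();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null && savedInstanceState.containsKey(KEY_ITEMS)) {
            items = savedInstanceState.getStringArrayList(KEY_ITEMS); // restore what we stashed
        } else {
            // first launch, or the Bundle didn't survive: reload from the web service / disk
        }
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putStringArrayList(KEY_ITEMS, items); // may be written to disk by the system
    }
}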

All of these mechanisms require a lot of testing on various low-memory devices, because there are out-of-view things happening that will wipe your data from memory if you’re not careful. And although you won’t find yourself with pointers that look OK but have been freed, some data might be invalid by the time you try to use it.

To Be Continued!

Next time, we’ll discuss the extremely varied range of android devices your app might be running on!

  

The Long Journey (part 1)

Despite my past as a Java teacher, I am mostly involved in iOS development. I have dabbled here and there in Android, but it’s only been for short projects or maintenance. My first full fledged Android application now being behind me (mostly), I figured it might be helpful for other iOS devs out there with an understandable apprehension for Android development.

Basically it boils down to 3 fundamental observations:

  • Java is a lot stricter as a language than Objective-C, but allows for more “catching back”
  • Android projects should be thought like you would a web project, rather than monolithic as an iOS app
  • Yes, fragmentation of the device-scape sucks
Java vs Objective-C

Forget everything you think you know about Java that you didn’t learn by actually coding something useful in it. It pains you when somebody says Objective-C is not a real language? That C is just a toy for S&M fetishists? Java has been around for a long time, and is therefore mature. What might make it slow (disproved by stats a bunch of times) or a memory hog (it has an efficient garbage collector) is usually bad programming. I had to code a full Exposé-like interface in Java, and I was getting 60fps on my 2008 black MacBook.

But for an ObjC developer, it does have its quirks (apart from basic syntax differences).

Scopes are respected

That’s right. Even knowing the pointer address of an object and its table of instance variables won’t help. If it’s private, it’s private. That means that, for once, you’ll have to think about the class design before writing any code. Quick reminder:

  • private means visible to this class only
  • protected means visible to this class, its descendants, and other classes in the same package
  • package (default) means visible to other classes in the same package only
  • public means accessible to everybody
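A quick illustration, assuming both classes live in the same package (the class and field names are made up):

public class Parent {
    private int secret;    // visible to Parent only
    protected int shared;  // Parent, its subclasses, and the rest of the package
    int neighborly;        // (default) the rest of the package only
    public int open;       // everybody
}

class Child extends Parent {
    void poke() {
        shared = 1;     // fine: subclass
        neighborly = 2; // fine: same package
        open = 3;       // fine: public
        // secret = 4;  // does not compile: private to Parent
    }
}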
Types are respected

Same here. You can’t cast a Fragment to an Activity if it’s not an Activity. Failure to comply will lead to a massive crash.

What you can do is test if object is of type Activity by doing

if(object instanceof Activity)
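The usual follow-up is to cast once the test has passed (the finish() call is just a stand-in for whatever you actually need the Activity for):

if (object instanceof Activity) {
    Activity activity = (Activity) object; // safe: the test guarantees the cast
    activity.finish();
}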
Interface doesn’t mean the same thing

In ObjC, an interface is a declaration. It basically exposes everything that should be known by the rest of the program about a class. In Java, once it’s typed, it’s known: you make a modification in a class, and it’s seen by every other class in the program. Interfaces, on the other hand, are kind of like formal protocols. If you define a class that implements an interface, it has to implement all the methods of the interface.

public interface MyInterface {
    public void myMethod();
}
public class MyClass implements MyInterface {
    public void myMethod() {
        // mandatory, even if empty
    }
}
There are some weird funky anonymous classes!

Yup. It’s commonplace to have something like:

getActivity().runOnUiThread(new Runnable() {
    public void run() {
        // do something
    }
});

What it means is “I think I don’t need to create an actual class that implements the Runnable interface just to call a couple of methods”, or “I’m too lazy to actually create a separate class”, or “this anonymous class needs to access a private variable of the containing class”.

Let me substantiate the last part (and pointedly ignore the first two): according to one of the above paragraphs, a private variable isn’t accessible outside of that class. So a class, anonymous or not, doesn’t have access either, right? Wrong.

A class can have classes (of any scope, public, protected or private) defined within. These internal classes, if you will, are part of the class scope. And therefore have access to the private variables.

public class MyClass {
    private int count;
 
    public MyClass(int c) {
        this.count = c;
    }
 
    public void countToZero() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while(count >= 0) {
                    System.out.println(count--);
                }
            }
        }); // anonymous class implementing Runnable
 
        t.start(); // start and forget
    }
 
    // Just for fun
    public class MySubClass {
        private int otherCount;
 
        public void backupCount(MyClass c) {
            otherCount = c.count;
        }
    }
}

is a perfectly valid class. And so is MyClass.MySubClass.

Beware of your imports

Unfortunately, because of the various cross-and-back-support libraries, some classes are not the same (from a system point of view) but have the same name. For example, if you compile using ICS (4.0+) and want to support older versions, chances are you’ll have to use the Support library, which backports some mechanisms such as Fragment.

It means that you have two classes named “Fragment”:

  • android.support.v4.app.Fragment
  • android.app.Fragment

From a logical point of view they are the same. But they don’t have the same type. Therefore, they are not related in terms of inheritance. Therefore they are not exchangeable.

And you can’t simply import both: two imports with the same simple name collide, and the compiler rejects the second one. So if you write:

import android.support.v4.app.Fragment;
import android.app.Fragment;

you get a compile error, not a silent override. It helps to see an import as a shorthand declaration: you tell the compiler that the bare name “Fragment” stands for one specific fully qualified class, and only one such shorthand can exist per name. In practice, you import the version you use most often and spell out the other one’s fully qualified name wherever you need it.
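A minimal sketch of that workaround (Screens is just a placeholder class):

// Import the Fragment you use everywhere; fully qualify the other one where needed.
import android.support.v4.app.Fragment;

public class Screens {
    Fragment supportFragment;            // the support-library Fragment
    android.app.Fragment nativeFragment; // the platform Fragment, spelled out in full
}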

Exceptions are important

A Java program never crashes the way a C program does: an uncaught exception bubbles up to the top-level calling function, and if nothing catches it, the program exits with a bad status code (and a stack trace).

On the other hand, the IDE gives you a lot of warnings and errors at code-writing (i.e. compile) time that should be heeded. Most of the time, methods declare the kinds of exceptions they might throw, and not catching them (or not passing them on to the caller) isn’t allowed. There are very few exceptions to this rule, most of them being in the “you didn’t program defensively enough” or “this is a system quirk” categories.

Say a method reads

public void myMethod() throws IOException;

The calling method has to either be something like

public void myOtherMethod() throws IOException {
    myMethod();
}

or it has to catch the exception

public void myOtherMethod() {
    try {
        myMethod();
    } catch(IOException e) {
        // do something
    }
}

Of course, catching a more generic exception (Exception being a parent of IOException, for instance) will mean less catch-cases.

The exceptions that aren’t explicit are (mostly) stuff like NullPointerException (you tried to call a method on a null object, you bad programmer!) or IndexOutOfBoundsException (trying to access the 10th element in a 5-long array, eh?). The other category is more linked to system stuff: you can’t change a view’s attribute outside of the UI thread, or make network calls on the UI thread, that kind of thing.

To Be Continued!

Next time, we’ll see the project management side of things!

  

Demodynamics

It should be clear by now: I am a geek. Aside from all the normal quirks, I’m a computer geek, which means that I dream about systems and I subconsciously try to optimize things, make them more rational if not more efficient… I’m told it’s borderline rude, sometimes.

Anyway.

There is one thing that geeks, and non-geeks who actually encounter large crowds of people all at once, agree on: we suck at demodynamics.

Look at a school of fish or a flight of sparrows. Even though they have no brain to speak of compared to ours, you don’t see them bumping into each other, even though their speed and group density is a recipe for disaster. Now imagine telling a bunch of people “run around for half an hour, but you have to stay together as a group”. When you’re done laughing, you’ll know what I mean.

Why am I rambling about demodynamics anyway?

Well, professionally, you can draw a lot of parallels between the two following situations:

  • a group of people is supposed to run together towards a common goal without knowing the route and finding some difficulties along the way
  • a group of people is supposed to deliver a product that has been outlined in somewhat vague (from an engineer’s point of view) fashion

And you see the same kind of dynamics: people shoving, people showing off, but also people helping each other when facing a wall etc…

Yesterday, I was in the subway (but you can have similar occurrences when driving), and a couple of ladies rushed past me in a corridor, only to go half my speed ahead of me, effectively blocking me, because they were side by side.

Now, the worst part is I don’t think they even realized. They were side by side because they were chatting, and going slower for the same reason. Whoever is placed in that situation will undoubtedly sigh heavily, at the very least. But the same can be said for people who honk at you when you can’t pass the truck in front of you, etc…

As I said, people suck at demodynamics. Evaluating the right time to yield a priority you do have, in order to keep traffic flowing for everyone, including you, is a hard thing to do, since you basically can’t trust anyone around you to act with the same plan, let alone the same intent.

When you think about it, it’s all about two things: telegraphing your intent (and your plan), and being on the lookout for other people telegraphing their intent. That’s level zero. Then you have to know when to enforce and when to yield, and telegraphing that as well.

Most people think the problem lies in the second layer. We are a competitive race, and we naturally expect our solution to be followed. But my impression is that we completely lack the understanding of level zero. It’s not that our plan is the best one… It’s that it’s the only one.

Talking about this with my friends in the business and outside of it, we kind of agreed that people whose favorite activities require relinquishing some control in order to have a better time are the ones who look around for cues and avoid bumping into other people (in the general sense): people who dance a lot, musicians, construction workers, military or military-inspired people,…

In any project I take part in, it is painfully obvious that if someone I depend on fails, I’m screwed. If for no other reason, that makes it my duty to help this person. To some degree, the same can be said about people “above” me: I have to point at potential problems early and help them make a decision.

Unfortunately, as with the people in the subway or on the road, it doesn’t seem to be that obvious. Here in France, we go back and forth on a mandatory class taught to all kids that’s called “civic instruction”, or whatever the name that thing might have these days. Is there any way we could make that a demodynamics course, or a dance class?

TBC

  

[CoreData] Migrating To A New Model With An Extra Entity

As I mentioned in the previous post, I ran into an annoying problem with CoreData very recently:

Problem: We needed to have an evolution of the model, which in its simplest form meant adding an entity.

Naive solution: Well, duh. Lightweight migration will work just fine. If not, just make a mapping model, and you’ll be fine.

Well, no. Every automatic migration ended in an “Error: table ZRandomNumber_*WhateverRandomEntityFromYourModel* already exists”.

After a lot of digging around, annoying any and every contact who remotely knew anything about CoreData, I managed to extract the actual SQL commands it was trying to execute on the SQLite database.

Guess what? It was trying to recreate some of the tables based on relationships (rightly assuming that some of them had changed or had been added, since I added an entity). But one of them was getting mangled, because the migration incorrectly assumed none of the relationships preexisted.

Naive solution v2: Well duh, export the database to an agnostic format, then reimport it in the new model.

Yep. That actually works. But I have 140 entities, and close to 300000 rows. After 4h of crunching, I decided to stop the test.

Naive (and I mean REALLY naive) solution that actually works: find a way to add your new entity at the end of the alphabetical list. That way it creates the missing relationships after having made sure everything was kosher. I’m not even kidding: I added ZZ in front, and everything just worked. Try it before you lose your own hair.

  

CoreData, iCloud, And “Failure”

CoreData is a very sensitive topic. Here and elsewhere, it’s a recurrent theme. Just last week I had a hair-pulling problem with it that was solved in a ridiculous manner. I’ll document it later for future reference.

This week, triggered by an article on The Verge, the spotlight came once again on the difficulties of that technology, namely that it just doesn’t work with iCloud, which by all other accounts works just fine.

It is kind of frustrating (yet completely accurate) to hear from pundits and users that iCloud just works for them for most things, especially Apple’s own products, and that CoreData-based apps work unreliably, if at all. The perception of people who haven’t actually tried to make it work is that it’s somehow the developer’s fault for not supporting it. Hence this article on The Verge, which highlights the fact that it’s not the developer’s fault. The intent is good, but unfortunately it doesn’t solve anything, since it kind of wags a finger at Apple without explaining anything.

But what is the actual problem?

CoreData is a framework for storing an application’s data in an efficient (hopefully) and compact way. It was introduced in 2005 for a very simple purpose: stopping the developers from storing stuff on the user’s disk in “messy” ways. By giving access to a framework that would help keeping everything tidied up in a single (for the “messy” part) database (for the “efficient” part), Apple essentially said that CoreData was a solution to pretty much every storage ailment that plagued the applications: custom file formats that could be ugly and slow, the headache of having “relationships” between parts of documents that would end up mangled or inefficient, etc.

CoreData is a simplification of storage techniques maintained by Apple and therefore reliable, is the underlying tenet. And for the most part, it is reliable and efficient.

iCloud, on the other hand, is addressing another part of the storage problem : syncing. It is a service/framework meant to make the storage on every device a user owns kind of the same storage space. Meaning, if I create a file on device A, it is created on B and C as well. If I modify it on C, the modification is echoed on A and B without any user interaction. Behind the scenes, the service keeps track of the modifications in the storage it’s responsible for, pushes them through the network, and based on the last modification date and some other factors, every device decides which files on disk to replace with the one “in the cloud”. The syncing problem is a hard one, because of all the fringe cases (what if I modified a file on my laptop, then closed it before it sent something, then made another modification on my iPad? Which version is the right one? can we mix them safely?), but for small and “atomic” files, it works well enough.

iCloud is a simplification of syncing techniques maintained by Apple, and therefore reliable, to keep the tune playing. And for the most part, it does work as advertised.

But when you mix the two, it doesn’t work.

When you take a look at the goals of the two technologies, you can see why it’s a hard problem to solve: CoreData aims at making a monolithic “store-it-all” file for coherence and efficiency purposes, while iCloud aims at keeping a bunch of files synchronized across multiple disks, merging them if necessary. These two goals, while not completely opposed, are at odds: ideally, iCloud should sync the difference between two files.

But with a database file, it’s hard. It’s never a couple of bytes that are modified, it’s the whole coherence tracking metadata, plus all the objects referenced by the actual modification. Basically, if you want to be sure, you’d have to upload and replace the whole database. Because, once again, the goal of CoreData is to be monolithic and self-contained.

The iCloud philosophy would call for incremental changes tracking to be efficient: the original database, then the modification sets, ideally in separate files. The system would then be able to sync “upwards” from any given state to the current one, by playing the sets one by one until it reaches the latest version.

As you can see, a compromise cannot be reached easily. A lot of expert developers I highly respect have imagined a number of ways to make CoreData+iCloud work. Most of them are good ideas. But are they compatible with Apple’s vision of what the user experience should be? Syncing huge files that have been partially modified isn’t a new problem. And it’s one none of my various version control systems have satisfactorily addressed. Most of them just upload the whole thing.

Just my $.02.