Bluetooth, or the new old technology

Connected devices are the new thing. Thanks to growing support for BTLE, we now have thingies that can perform a variety of tasks, and take commands from or report information to a smartphone.

When I started a project that relies on BTLE, my first instinct was to take a look at the documentation for the relevant frameworks and libraries, both on iOS and Android. According to these documentation files, everything is dandy… There’s a couple of functions to handle scanning, a few functions/methods/callbacks that manage the (dis)connections, and some read/write functions. Simple and peachy!

Well… Yes and no. Because of its history, BTLE has a few structural quirks, and because of its radio components, a few pitfalls.

The very first thing you have to understand when doing hardware-related projects is that there is a huge number of new things (compared to pure software development) that can go wrong and have to be taken into account: does it matter if the device disconnects in the middle of an operation? If so, what kind of power management strategy should you employ? Failsafe mechanisms in general become highly relevant, since we’re not in a tidy sandbox anymore. Also, hardware developers have a different culture and a different set of priorities, so an adjustment might be necessary.

Bluetooth is an old serial-over-radio thing

Yes. Not only is it radio, with all its potential transmission hiccups, it is also a 20-year-old technology, designed to provide a wireless RS-232-like protocol. Its underpinnings are therefore way less network-ish than most remote-facing APIs. There’s very little in the way of control mechanisms, and no guarantee the packets will arrive in a complete and orderly fashion.

As far as I can understand it, it’s kind of like AT commands on a modem, and therefore prone to errors.

On top of it, BTLE adds a “smart” layer (smart being the name, not a personal opinion), which has a very singular purpose: syncing states.

Again, I see it from the vantage point of having a few successful RC projects under my belt, not that of an expert in the BTLE stack of the OS or the device. But as far as I can see, BTLE is a server over a serial connection that exposes a hierarchy of storage units. These storage units have contents, and the structure of the storage, as well as the bytes it contains, is periodically synced with the ghost copy at the other end of the link.

So, a device handles its Bluetooth connection as usual (pairing, bonding, etc.), then exposes a list of services (top-level storage containers of the tree, akin to groups), each of which contains characteristics (a storage space with a few properties, a size, and a content).
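That tree of services and characteristics can be sketched as plain data structures. A minimal Python sketch: the UUIDs happen to be the standard Battery Service (180F) and Battery Level (2A19), but the value is made up.

```python
from dataclasses import dataclass, field

@dataclass
class Characteristic:
    uuid: str
    properties: set        # e.g. {"read", "write", "notify"}
    value: bytes = b""     # raw bytes, nothing higher-level

@dataclass
class Service:
    uuid: str
    characteristics: list = field(default_factory=list)

# A hypothetical device tree: one service grouping one characteristic,
# holding a single byte (0x5f = 95, i.e. 95% battery).
battery = Service("180F", [Characteristic("2A19", {"read", "notify"}, b"\x5f")])
```

Everything the app ever sees of the device is, ultimately, a tree like this one with bytes at the leaves.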

In theory, once you’re connected to the device, all you have to do is handle the connection, and read/write bytes (yep, bytes… No high level thing here) in characteristics, for Things to happen.

As you can see, having a good line of communication with the hardware developers is a necessity: everything being bytes means being sure about the format of the data, and knowing which characteristics require a protocol of their own, as well as which writes will trigger a reboot.
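Agreeing on an exact byte layout is precisely the kind of thing to nail down with the hardware team. A hypothetical example using Python's struct, with a completely made-up two-field layout:

```python
import struct

# Hypothetical layout agreed with the hardware team: little-endian
# uint16 target temperature in tenths of a degree, then a uint8 mode flag.
payload = struct.pack("<HB", 215, 0x01)   # 21.5 °C, mode 1
assert payload == b"\xd7\x00\x01"

# The device-side firmware must unpack the exact same layout,
# or Things will happen — just not the Things you wanted.
temp_tenths, mode = struct.unpack("<HB", payload)
```

Get the endianness or a field width wrong and the write still "succeeds"; the device just does something else entirely.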

All in all, provided you are methodical and open minded enough, it can be fun to figure out a way to copy megabytes of data using two characteristics that can handle 16 bytes at the most. Welcome back to the dawn of serial protocols!
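The basic move is always the same: split the payload into characteristic-sized frames. A minimal sketch — the 16-byte limit matches the characteristics mentioned above, everything else is illustrative:

```python
def chunk(data: bytes, mtu: int = 16):
    """Split a payload into frames small enough for one characteristic write."""
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

# 300 bytes -> 18 full 16-byte frames plus one 12-byte trailing frame.
frames = chunk(b"A" * 300, mtu=16)
```

Scale that up to megabytes and you see why the transfer protocol, not the radio, becomes the actual engineering problem.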

BTLE is all about data sync

Since the device is likely to be a state machine, most of the APIs mirror that: the overall connection has disconnected, connecting, connected, and disconnecting states, and synchronizing the in-system copy of the data with the in-device copy is highly asynchronous. Not only that, but there’s no guarantee as to the order in which packets are transmitted or received, or the timing thereof. You are a blind man playing with echoes here.
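One way to stay sane is to model those four states explicitly, and treat any transition the OS reports that your model doesn't allow as a red flag. A sketch — the transition rules here are my assumption, not anything mandated by the spec:

```python
from enum import Enum, auto

class Conn(Enum):
    DISCONNECTED = auto()
    CONNECTING = auto()
    CONNECTED = auto()
    DISCONNECTING = auto()

# Transitions we consider legal; anything else means the stack
# and our mental model of the link have diverged.
ALLOWED = {
    Conn.DISCONNECTED: {Conn.CONNECTING},
    Conn.CONNECTING: {Conn.CONNECTED, Conn.DISCONNECTED},
    Conn.CONNECTED: {Conn.DISCONNECTING, Conn.DISCONNECTED},
    Conn.DISCONNECTING: {Conn.DISCONNECTED},
}

def transition(current: Conn, target: Conn) -> Conn:
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

The point is not the enum itself, but having a single place where a surprise state change gets noticed instead of silently absorbed.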

If you want to transmit long structured data, you need a protocol of your own on top of BTLE. This includes, but is not restricted to, in-device offset management, self-correcting algorithms, replayable modularity, and so on.
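As an illustration of what such a homemade protocol might look like, here is a sketch of a frame carrying an in-device offset and a checksum, so the receiver can detect corruption and request a replay of that offset. The field layout is entirely made up:

```python
import struct
import zlib

def frame(offset: int, payload: bytes) -> bytes:
    """Prefix a chunk with its in-device offset (uint32) and length (uint16),
    and append a CRC32, so the far end can spot gaps and corruption."""
    header = struct.pack("<IH", offset, len(payload))
    return header + payload + struct.pack("<I", zlib.crc32(payload))

def parse(data: bytes):
    """Return (offset, payload, crc_ok); a False crc_ok means 'replay this offset'."""
    offset, length = struct.unpack_from("<IH", data)
    payload = data[6:6 + length]
    (crc,) = struct.unpack_from("<I", data, 6 + length)
    return offset, payload, crc == zlib.crc32(payload)
```

Ten bytes of overhead per frame hurts when the pipe is this narrow, but it is the price of being able to resume and self-correct instead of restarting a four-minute transfer from zero.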

Not to mention that, more often than not, the OS tells you you are disconnected only after an inordinate amount of time, and sometimes never. Constant monitoring of the connection state is therefore paramount.
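A simple watchdog, fed on every incoming packet, lets you declare the link dead on your own schedule instead of the OS's. A sketch; the timeout value is application-specific:

```python
import time

class Watchdog:
    """Declare the link dead if nothing arrives within `timeout` seconds,
    rather than waiting for the OS to (maybe, eventually) notice."""

    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_seen = time.monotonic()

    def feed(self):
        """Call on every packet, notification, or successful write."""
        self.last_seen = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_seen > self.timeout
```

When `expired()` fires, you tear down and reconnect yourself, whatever the OS still claims about the connection state.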

Last but not least, background execution of said communication is patchy at best. Don’t go into that kind of development expecting an easy slam dunk, because that is the very best way to set some customer’s house on fire: the thingie you were managing got stuck on “very hot” in the middle of a string of commands, and you never realized it.

The software side, where it’s supposed to come together

Let’s imagine you have mastered the communication completely, and know for sure the state and values of the device at any given time. First, let me congratulate you. I applaud, non-ironically even.

Conveying the relative slowness of serial communications, in the age of high-speed everything, in a manner the user will a) accept and b) understand is no small feat. It takes up to 4 minutes for us to push 300k of data in mega-slow-but-safe mode. Reading is usually 10 times faster, but we are still talking orders of magnitude above what’s considered normal on these devices for these sizes.
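Those figures are easy to sanity-check from the radio's constraints. Assuming one 16-byte write per connection event at a 12.5 ms interval — plausible but purely illustrative numbers:

```python
payload = 16           # bytes per characteristic write
interval = 0.0125      # seconds per connection event (12.5 ms, a typical value)
size = 300 * 1024      # ~300k of data

writes = size / payload        # 19200 individual writes
seconds = writes * interval    # ~240 s, i.e. the 4 minutes above
```

The narrow payload, not the raw radio bitrate, is what dominates the transfer time.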

One trick is to make the data look big, with plenty of different ways to visualize the same 12 entries; another is to make it look like the app is waiting for a lot of small data points to compute a complex-looking graph. All in all, it’s about making the user feel at home with the comparatively long “doing nothing visible” moments.

What I am hoping for the (very near) future

Bluetooth has always felt kind of wonky and temperamental. It actually has a very stable radio side; the problem largely lies in the total absence of the kind of control structures that protocols like TCP use to recover from errors, or from slow and changing connections. At its core, the whole system seems to be built on the assumption that “once the connection is established, everything will be alright”. A lot of effort has therefore been put into discovery and power management issues, rather than any kind of self-correcting way to talk. It is a very complex system intended to establish a serial connection, with a level of abstraction on top of it in the form of a “server” that organizes the data in a somewhat discrete fashion. And that’s it.

If changing the protocol is too hard, and I’m totally ready to assume that it is, then the API providers need to figure out a way to manage the various states in a way that better reflects the reality of the transmission. Otherwise it’s a permanent struggle to circumvent the system, or to coax it into doing what you know should happen.
