Be Original, Like Everybody Else

You build an app because you want to have what everybody else has, but to “exist” you need to differentiate yourself from your competitors. Said like that, it sounds oxymoronic.

There used to be a few reasons why someone would write an application:

  • It did something no other application could do
  • It took an existing concept/application and added some functionality that was lacking
  • It tied a specific interface to a specific service that couldn’t be accessed any other way

Quite frankly, any development that required more than a few man-hours couldn’t fly if at least one of these criteria wasn’t met. And that was mainly due to one thing: exposure. Finding software for your computer was hard. Of course, services like VersionTracker or MacUpdate were there to help, but a piece of software had to be made, buzzed about and sold by its publisher. It had to be “worth it”.

The popularity of the App Store has changed that, for good or for ill. No “respectable” company out there wants to miss out on the huge market and public composed of iPhone and iPad (and to a lesser extent Mac) users. You just have to have at least a presence there. And the competition on the store is fierce.

So, companies do what they have to do: at least exist on the App Store. With that objective in mind, it’s more a matter of building an image than an app. Therefore, the two axes of communication weigh way more than any kind of usefulness. You have to be there (do like everybody else), and you have to be seen there (make yourself known). Essentially, an app can be useful, but an app can also exist solely as part of a communication strategy.

For public relations purposes, that more or less precludes any kind of singularity. You want to have at least what all your competitors have. Then you have to do better, or different, in a way that’s not too unnerving, or expensive. And yes, being outrageously shocking is just a different way of doing the same thing. Anyone can run naked in the street to grab attention. Changing the way people deal with their daily life is a whole different ballgame.

I was chatting with a colleague earlier, and he was lamenting that R&D is dead. But R&D serves a different purpose: it’s about long-term investment. You pour money and time and effort into building something new without any kind of guarantee that you’ll get a return on your investment. That takes a leap of faith (harder to achieve when you have a responsibility towards shareholders and/or employees) and means (harder to have when you are a freelancer). Therefore it’s really not what most of the paying gigs we get talked into are about.

But I disagree that innovation is dead. Yes, it may seem like that for us freelancers sometimes after the tenth “news pushing” project. But even with projects labelled “do the same as app X, but with, you know, a more ‘our company’ feel”, there are ways to have some leeway and some fun. It could be through the way you make the user interact with your app, the details you want to get back from it to the server, etc… And sometimes, it’s the developer that offers suggestions as to how to make his day less miserable.

Face it, developers: we are responsible for that state of affairs too. Freelancers maybe a bit less so than software farms, but the policy of either churning out made and re-made apps on the cheap or being hugely expensive doesn’t promote innovation either. Yes, we have to eat and pay our rent and whatever. But given the ridiculous quotes/conditions some people with innovative ideas get when they talk about them, it’s no wonder these projects are boxed and forgotten.

Personally, I try to “give” at least 15% of my time to projects that seem whacky. Maybe they won’t find their mark, maybe I’ll lose money over them, maybe we won’t even go past the planning phase. And sometimes, I get ripped off. But most of the time, at the very least, I have fun, and I learn something. And the partner/client/prospect/person in front of me gets to explore their idea fully.

Yes, your idea will be lost among thousands of apps that are there only to exist. Yes, the chances are great that it won’t make you a millionaire any time soon. Yes, finding a willing developer is hard. Yes, it costs a lot of time and money and effort to get anything done. But you know what? If it’s not out there, the chances of it proving to be a good idea or indeed making you a ton of money are precisely zilch.

  

Dolos ; Techne

[Dolos]: (anc. Greek) Trick, trickery, guile, art of thinking out of the box

[Techne]: (anc. Greek) Applied knowledge, craft, as opposed to episteme, pure knowledge of crafts/systems

Our job in technology (hint hint) is to make stuff. But as systems get more complex and our tools… evolve, we get closer to being dolos masters than technicians.

BBEdit just turned 20. It’s changed with the times, adapting to new possibilities and giving me new options and functionality, shedding irrelevant parts and struts to stay lean and efficient. From the day I started earning a living writing stuff (code, courses, and other misc items) to this day, it has remained with me, and I fire it up quicker than any other application. If I had any statistics software running, I bet it’d say I spend more time in it than even the Finder. Getting used to its way of doing things means becoming proficient in the techne of using it, but mostly, I use it for dolos matters.

On the other side of the ring, Xcode, in its 4th iteration, works more and more and more against me. Using it becomes a dolos process of achieving techne. Most of the scripts and techniques I build over time to use my time efficiently get broken by even the next minor revision of the only way to build Mac and iOS applications. Yes, I’ve tried alternative IDEs too, but can’t get them to work as I’d like them to.

To give credit where it’s due, then: thank you so much, BBEdit, and happy birthday! I have plodded through my professional life knowing that in one way or another you would be able to help me do what I want to do, despite the odds. If I had anything negative to say about you, it would be that you spoiled me for other pieces of software. I kind of expect every single one of the professional tools I use to be as capable as you are, and I have to say that’s not very charitable.

I am fully aware that a bad craftsman will blame their tools. It’s even a saying around these parts. The thing is, for what I do we don’t have a choice anymore. It’s Xcode or nothing (and the latest version, at that) for packaging applications to be deployed, the latest version requiring the latest version of the OS, and I can’t say I’m impressed by either.

I had to switch to RAID drives for 10.7 to be any kind of fluid, and even then I feel like it’s getting worse. I’m looking for an SSD big enough to hold everything I need, and that isn’t easy, or cheap. Xcode takes a full minute to start up, with four 2 GHz cores and 8 GB of RAM. Switching between applications sometimes takes up to ten seconds while everything swaps in and out of RAM. And of course, packaging a normally sized application gives me time to make a cup of coffee.

At the same time, I guess I shouldn’t be complaining. Even big companies and the people who would outbid me with customers end up needing my dolos type of knowledge… Which means that as long as all these tools stay in that frustrating state, I’m going to be well occupied, and incidentally well fed.

And I hope the same applies to you, BBEdit! I’m willing to bet I’ll still be working with you in ten years!

  

More Isn’t Better

The high tech world we live in generates some really high tech expectations.

Whether they’re needed or not, we see countless “features” and “upgrades” being thrown at us, causing for the most part more confusion than anything else. If there is one lesson the humongous sales of the iPad haven’t taught our beloved decision makers yet, it’s that sometimes simpler is better than more.

Most of the time, a subtle nudge works better than a huge wink, and thankfully, some designers took the hint.

But it seems that flashy now means modern in some people’s minds. Case in point: a very good friend of mine was put in charge of creating a website from scratch for his company. The company deals in services for professionals (read “something most of you, and myself, will never need”). Their business is all word of mouth anyway, so the website isn’t really needed; it’s mostly there so that people can look them up if needed.

He worked with some friends of his, very good at designing websites in their own right, and came up with something Apple-y. Clear and concise, no-nonsense, but clearly not really fun either.

This idea got rejected immediately. “Where are all the animations?” and “Can we add a little more bang to it, to show that, you know, we’re modern and all?” seemed to be the major reasons for rejection.

Now, picture this: you are tasked by your company to find a suitable service provider. You ask a few colleagues and/or friends from the business, you look companies up, and you come up with two possible candidates.

You go on one’s website: it’s clean but contains little except a list of current customers and contact information, maybe with a little side of demo/PR.

On the other’s website, you see animations everywhere; it takes a good couple of minutes for everything to settle down and maybe take you to the place you were looking for: contact and price information.

In all honesty, which one is most likely to annoy?

This is something that, as a developer who knows he sucks at design, I have to face on a regular basis. For a project, I would get only screens, and not a word about navigation. A beta I would offer would get criticized at length because “the cool flipover double axel animation thingie is not in yet”. I would have detailed sketches of whooshing sound effects and glow-in-the-dark animations, but when I ask “OK, but once you are on that screen, you’re stuck and have no way to go back, right?”, I would get looked at as if I had rabies.

Every once in a while, I have the chance of working with designers who actually think all of this through carefully. And man, does it feel great to have someone who can sometimes say “you know what, I honestly didn’t think that case would present itself, but now that I see it, I’ll think about how to deal with it”, and do so. And as a user, I even agree with the final decision. Talk about sweet. That was the case on that huge Java project I was working on, and given the scope of the project (think small OS), it was a very welcome change in the type of people I sometimes have to deal with.

In my mind, usability should come first, graphics second. This is why, for a long time, Linux, while vastly superior to its competitors on the technical level in many ways, could not gain a foothold in the desktop business: unusable by my granny. That’s why some really, really cool projects (from a geek perspective), such as automated households, don’t really appeal to most people: they know how to use a dial and a button, and fiddling with an LCD display and a keyboard seems over-complex. Even if, in the end, they won’t have to touch the thing ever again.

If you are thinking of a new and wonderful project, think about 3 major factors before handing the build over to somebody:
– the user needs to find what he/she is looking for in less than 20 seconds, or at least understand how to get there in that time frame (depth)
– do at least a rough storyboard of the navigation: where do you start? Where can you go from there? Should you be able to go back, or forward only? Repeat. (width)
– animation is cool, but only if it highlights a feature you want to bring forward and draws the attention towards it, never away from it.

Now, I’m only a developer. I have no track record in design or graphics. But after a decade of writing code, I’m starting to get a sense of what the user wants. And if a developer can, you can too.

  

Sectar Wars II : Revenge Of The Lingo

Every so often, you feel the tremor of a starting troll war whenever you express either congratulations or displeasure at a specific SDK, language, or platform.

Back in the days when people developing for Apple’s platforms were very, very few (yea, I know, it was all a misunderstanding), I would get scorned for not having the wondrous MFC classes and Visual Basic and the other “better” and “easier” ways of having an application made. You simply couldn’t do anything remotely as good as the Windows equivalent because, face it, Mac OS was a “closed system”, with a very poor toolbox, and so few potential users. But hey, I was working in print and video, and Mac OS had the best users in both fields at the time. And the wonders of QuickTime… sigh

Then it was ProjectBuilder versus CodeWarrior (I still miss that IDE every now and then…). Choosing the latter was stupid: it was expensive, had minimal support for NIBs, was sooooooo Carbon… But it also had a great debugger, a vastly superior compiler, and could deal with humongous files just fine on my puny clamshell iBook…

Once everyone started jumping on the iOS bandwagon, it was stupid to continue developing for Mac.

Every few months, it’s ridiculous to develop in Java.

There seems to be something missing for the arguments of every single one of these trolls: experience.

Choosing a set of tools for a task is a delicate thing. Pick the wrong language, IDE, or library for a project and you will end up working 20 times as much for the same result. Granted, you can always find a ton of good examples of why this particular choice of yours, at that moment in time, is not ideal. But that doesn’t mean it’s not good in general.

“QuickTime is dead”, but it’s still everywhere in Mac OS. “Java is slow” is the most recurrent one. Well, for my last project I reimplemented the “Spaces” feature in Java. Completely. Cross-platform. And at a decent speed. I’d say that’s proof enough that, when someone puts some care and craft into his/her work, any tool will do.

It all boils down to experience: with your skillset, can you make something good with the tools at your disposal? If the answer is yes, does it matter which tools you use? Let the trolls rant. The fact that they can’t do something good with this or that platform/tool doesn’t mean no one can.

  

[CoreData] Honey, I Shrunk The Integers

Back in the blissful days of iOS4, the size you assigned to ints in your model was blissfully ignored: in the SQLite backend, there are only two sizes anyway – 32 bits or 64 bits. So, even if you had Integer16 fields in your model, they would be represented as Integer32 internally anyway.

Obviously, that’s a bug: the underlying way the data is stored shouldn’t have any impact on the way you use your model. However, since using an Integer16 or an Integer32 in the model didn’t make any difference, a more insidious family of bugs was introduced: the “I don’t care what I said in the model, it obviously works” kind of bug.

Fast forward to iOS5. The mapping model (the class that acts as a converter between the underlying storage and the CoreData stack) now respects the sizes that were set in the model. And the insidious bugs emerge.

A bit of binary folklore for those who believe an integer is an integer no matter what:

Data in a computer is stored in bits (0-1 value) grouped together in bytes (8 bits). A single byte can have 256 distinct values (usually [0 – 255] or [-128 – 127]). Then it’s power-of-two storage capacities: 2 bytes, 4 bytes, 8 bytes, etc…

Traditionally, 2 bytes is called a half-word, and 4 bytes is called a word. So you’ll know what it is if it seeps into the discourse somehow.

2 bytes can take 65 536 values ([0 – 65 535] or [-32 768 – 32 767]); 4 bytes can go much higher ([0 – 4 294 967 295] or [-2 147 483 648 – 2 147 483 647]). If you were playing with computers in the mid-to-late nineties, you must have seen your graphics card offering “256 colors” or “thousands of colors” or “millions of colors”. It came from the fact that one pixel was represented in either 8, 16 or 32 bits.

Now, the peculiar way fixed-size integers work is that values wrap around modulo their maximum width. On one bit, this is given by the fact that:

  • 0 + 1 = 1
  • 1 + 1 = 0

It “loops” when it reaches the highest possible value, and goes back to the lowest possible value. With an unsigned byte, 255 + 1 = 0; with a signed byte, 127 + 1 = -128. This looping thing is called modulo arithmetic. That’s math. That’s fact. That’s cool.
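If you want to see that looping in action, here is a minimal illustration (plain C, which Objective-C happily compiles; it’s not taken from any real project):

#import <Foundation/Foundation.h>

int main(void) {
    uint8_t  unsignedByte = 255;   // highest value an unsigned byte can hold
    int8_t   signedByte   = 127;   // highest value a signed byte can hold
    uint16_t halfWord     = 65535; // highest value an unsigned half-word can hold (Integer16 territory)

    unsignedByte = unsignedByte + 1; // loops back to 0
    signedByte   = signedByte + 1;   // loops back to -128 (two's complement)
    halfWord     = halfWord + 1;     // loops back to 0

    NSLog(@"255 + 1 = %d, 127 + 1 = %d, 65535 + 1 = %d",
          (int)unsignedByte, (int)signedByte, (int)halfWord);
    return 0;
}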

Anyway, so, in the old days of iOS4, the CoreData stack could assign a value greater than the theoretical maximum of a field, and live peacefully with it. Not only that, but you could read it back from storage as well. You could, in effect, have an Integer16 (see above for min/max values) that would behave as an Integer32 would (ditto).

Interestingly enough, since this caused no obvious concern to the people writing applications out there, some applications that worked fine on iOS4 stopped working altogether on iOS5: if you try to read the value 365232 on an Integer16, you get 37552. If your value had any kind of meaning, it’s busted. The most common problem with this truncation is that a lot of people love using IDs instead of relations: you load the page with ID x, not the n-th child page.
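To make that example concrete: the truncation is just the value modulo 65536, i.e. keeping only the low 16 bits. A quick sketch, not part of the actual fix below:

uint32_t fullID      = 365232;
uint16_t truncatedID = (uint16_t)fullID; // keeps only the low 16 bits: 365232 % 65536
NSLog(@"%u becomes %u on an Integer16", fullID, (unsigned)truncatedID); // "365232 becomes 37552 on an Integer16"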

So, your code doesn’t work anymore. Shame. I had to fix such a thing earlier, and it’s not easy to come up with a decent solution, since I couldn’t change the model (migrating would copy the truncated values, and therefore the wrong ones, over), and I didn’t have access to, or the luxury of rebuilding, the data used to generate the SQLite database.

The gung-ho approach is actually rather easy: fetch the real value from the SQLite database. If your program used to work, then the stored value is still good, right?

So, I migrated

NSPredicate *pred = [NSPredicate predicateWithFormat:@"id = %hu", [theID intValue]];
[fetchRequest setPredicate:pred];
NSError *error = nil;
matches = [ctx executeFetchRequest:fetchRequest error:&error];

to

NSPredicate *pred = [NSPredicate predicateWithFormat:@"id = %hu", [theID intValue]];
[fetchRequest setPredicate:pred];
NSError *error = nil;
matches = [ctx executeFetchRequest:fetchRequest error:&error];

// in case for some reason the value was stored improperly (i.e. truncated to 16 bits)
if([matches count] == 0 && otherWayToID.length > 0) { // here, I also had the title of the object I'm looking for
  int realID = -1;

  // open the SQLite file backing the persistent store directly
  NSString *dbPath = [[[[ctx.persistentStoreCoordinator persistentStores] objectAtIndex:0] URL] path];
  FMDatabase *db = [FMDatabase databaseWithPath:dbPath];
  if (![db open]) {
      NSLog(@"Could not open db.");
  }

  // look the object up by an alternate identifier (the title, here) to get the untruncated ID
  FMResultSet *rs = [db executeQuery:@"select * from ZOBJECTS where ZTITLE = ?", otherWayToID];
  while ([rs next]) {
      realID = [rs intForColumn:@"ZID"];
  }

  [rs close];
  [db close];

  if(realID >= 0) {
      // retry the fetch with the full, untruncated value
      pred = [NSPredicate predicateWithFormat:@"id = %u", realID];
      [fetchRequest setPredicate:pred];
      error = nil;
      matches = [ctx executeFetchRequest:fetchRequest error:&error];
  }
}

In this code, I use Gus Mueller’s excellent FMDatabase / SQLite3 wrapper (FMDB).
Obviously, you have to adapt the table name (Z<ENTITY> with CoreData), the column name (Z<FIELD> with CoreData), and the type of the value (I went from an unsigned Integer16 to an unsigned Integer32 here).

Luckily for me (it’s still a bug though, I think), CoreData will accept the predicate with the full value, because it more or less just forwards it to the underlying storage mechanism.

Hope this helps someone else!

-nz

  

You Will Never Take The Debugging Out!

Continuing the somewhat long-winded, grandiloquent course on software development, something that sticks out these days is the way people convince themselves that bug-free applications can exist. It’s like the Loch Ness Monster: in theory it might be possible, and there’s no way to disprove it totally, but all the empirical evidence points the other way and every attempt at finding it has been a failure.

Mind you, I’m not saying it’s not a good objective to have a bug-free application rolling out. It’s just very very very unlikely.

A computer is a very complex ecosystem: a lot of pieces of software are running on the same hardware and can have conflicting relationships. The frameworks you are using might have gaping holes or hidden pitfalls. Your own work might deal with problems in a way that might not be fully working at the next revision of the OS you support. Or you were rushed the day you wrote these ten lines of code that now make the application blow up in mid-air.

And that’s OK!

As long as the bugs are acknowledged and fixed in a timely fashion, they are part of the natural life-cycle of an application. Contrary to biological life, which allows for a certain margin of error and kind of self-corrects it, computer programs either work or they don’t; they can’t really swerve back onto the right track. That’s why most people see bugs differently than they see a mistake in other aspects of life: computers are rather unforgiving appliances, and the software relies on and expresses itself solely through them. And given the fact that computers are put in the hands of people who, by and large, don’t expect to have to learn the software to be able to use it, that’s a recipe for disappointment.

Back when I used to teach, I would tell my students of the very first application I released (DesInstaller), and the first “bug report” I got. It went along the lines of “Your software is a piece of crap! Every time I hit cmd alt right-shift F1 and enter, it crashes!”.

First off, this was absolutely true. The application did indeed crash on such occasions. Therefore it is a genuine bug. The real question is “how in hell would I have found that bug during my development cycle?”. I can’t even type that shortcut without hurting my fingers in the process, so the chances of me finding that crash condition were pretty much nil.

When I write an application, it’s hard for me to imagine a user using it in a barbaric fashion. Whenever I test it, I have to somehow change the way I think to put myself in the user’s shoes. It is incredibly hard to do, as you probably know from personal experience. However, somehow, we have to do it. What we just cannot do is add, on top of that, the layer of complexity that is the interaction with other pieces of software competing for the same resources. It would be like trying to figure out in advance where the next person who’ll bump into you in the street will come from.

Anyway, I digress. This piece is not about explaining why there will probably never be a bug-free application out there; it’s about the mindset we have to put ourselves in when making an application: debugging is vital, and it is here to stay.

So, right off the bat, resources have to be allocated to debugging, and the debugging skillset has to be acquired by any person or company dabbling in the software business. QA is not optional, and neither is fine tuning.

Basically, once a bug has been reported, there are several ways to deal with it depending on the ramifications of the bug (is the application unusable, or is it a forgivable glitch?), the depth of the bug (is it something caused by just one exact and somewhat small cause, or is it a whole family of problems rolled into one visible symptom?), the fixability of the bug (will correcting it imply 10 lines of changed code, or 50% of the base to be re-written?), and the probable durability of the fix (will it hold for a long time, or is it something that will break again at the next OS update?). Identifying these factors is crucial, and hard. Exceptionally so in some cases.

1/ Ramifications

This is a relatively (compared to the others, at any rate) easy thing to identify. You are supposed to know what your target audience is, after all. If your users are seasoned pros, they might overlook the fact that your application crashes .01% of the time of their day-to-day use of it.
Broader audiences might be trickier, because public opinion is such a volatile thing: bad publicity could sink your product’s sales real fast. Then again, the general bugginess of some systems/applications out there, and the lack of public outcry, seem to indicate the general public is kind of lenient, provided the bugs are fixed and not too frequent.

2/ Depth

That is probably the hardest thing to figure out. Having sketchy circumstances and symptoms, especially in a somewhat big piece of software, makes finding out the real depth of a bug little more than an educated guess. Crash logs help, of course, but even crash logs (the exact state of the computer program at the moment of the crash) don’t tell the whole story.

I trust that any developer worth his/her salt won’t leave any divide-by-zero or somesuch bugs in the code, especially if time has been made for the QA leg of the development cycle. Therefore, when I talk about bugs, I’m not really talking about that easily-fixed family of bugs, where having the exact position in the code where it crashed tells the whole story.

Complex bugs tend to come from complex causes. Knowing where it happens helps. Knowing how you got to this precise point, with this precise state, has yet to be figured out.

Has the user chosen an action that should be illegal and therefore should have been filtered out before we got to that point? Is there a flaw in the reasoning our lines of code are built on (aka “building sand castles”)? Is there an unforeseen bug in the underlying framework or library we are using? Is there a hardware component to this bug?

The approach I tend to use in these cases is bottom-up. It’s not the shortest way by a long shot, but it tends to root out other potential bugs in the process:

  1. I start from the line in my code where the application crashed (as soon as I have found it, which can be an adventure in itself)
  2. I seek out all the calling paths that might have taken me to this point in my code (using some caller/callee graph such as the one doxygen generates)
  3. I prune these paths based on logic: among all the branches, I remove every truly impossible one (due to arguments or logic gates such as if/else)
  4. I then consolidate the branches together as much as possible (if this branch is executed then this one is as well, so I might as well group them together) to minimize the variability of the input set. Behind these barbaric words is a very simple concept: find out how many switches you have at the top that are truly independent from each other.
  5. I build a program that uses these branches, takes a set of values corresponding to the entry set, and permutes through each possibility of each input entry

Sometimes you can do it all in your head if the program is small, but in any program that has a few thousand lines of code, being thorough generally means going through this process.
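To make step 5 a little more concrete, here is a minimal sketch of such a permutation harness. Everything in it is hypothetical: I am assuming the consolidated branches boiled down to three independent boolean switches, and exerciseSuspectPath() is a stand-in for whatever code path is actually under suspicion:

#import <Foundation/Foundation.h>

// Stand-in for the consolidated code path under suspicion.
// Returns NO when the run ends up in the faulty state.
static BOOL exerciseSuspectPath(BOOL a, BOOL b, BOOL c) {
    return !(a && !b && c); // placeholder condition; the real thing would call your actual code
}

int main(void) {
    @autoreleasepool {
        NSMutableArray *failingPermutations = [NSMutableArray array];
        // Permute through every possibility of each input entry (2^3 combinations here).
        for (int mask = 0; mask < (1 << 3); mask++) {
            BOOL a = (mask & 1) != 0;
            BOOL b = (mask & 2) != 0;
            BOOL c = (mask & 4) != 0;
            if (!exerciseSuspectPath(a, b, c)) {
                [failingPermutations addObject:
                    [NSString stringWithFormat:@"a=%d b=%d c=%d", a, b, c]];
            }
        }
        NSLog(@"Failing permutations: %@", failingPermutations);
    }
    return 0;
}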

Once I get there, though, what I have is a list of every permutation causing a crash. The rule of thumb is, the shorter this list, the shallower the bug.

3/ Fixability

Counter-intuitively, this might actually be completely independent from the depth of the bug. A very deep bug that has a very small footprint in terms of causes can take as little as one line of code to fix (example: unsigned ints versus signed ones). The time it takes to find a bug is not really related to the time it takes to fix it.
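Here is a tiny, made-up specimen of that signed-versus-unsigned family, just to show how a one-liner can fix a bug that took days to locate:

unsigned int itemsLeft = 0;
int consumed = 5;

// BUG: 'consumed' gets converted to unsigned, so 0 - 5 wraps around to a huge number
unsigned int remaining = itemsLeft - consumed;  // 4294967291, not -5

// The one-line fix: keep the arithmetic signed, then clamp
int remainingFixed = (int)itemsLeft - consumed; // -5
if (remainingFixed < 0) remainingFixed = 0;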

The problem here is more like taking the opposite path from finding the bug: if I change this to fix the root cause of my bug, what impact will it have on anything that was built on top of the function the bug was in? In many ways, if you want to be systematic about it, this process is actually longer than the previous one: starting from the line of code you just fixed, you have to examine every path containing that section and look hard for any implied modification.

A real-world analogy would be changing a cogwheel in a mechanism: are all the connected ones the right size too? If it changes another wheel, what impact does that have on the ones connected to it? Etc., etc.

It can be very long and very tiresome, or it can be a walk in the park, depending on the redundancy and the structure of your program.

4/ Durability

This is the trickiest one of them all, because sometimes, there is just not enough information to figure it out. It’s not hard per se, but it depends on so many factors that the best thing you will achieve is mostly a bet.

The first two factors to consider are actually points 2 and 3, especially if what you’re asked to give is an estimate of the time or resources needed to fix the bug. Since those two can be really hard to evaluate, anything that stands on them is bound to ricochet even harder.

Then you have to factor in the general quality of the platform the program is running on (do they introduce new bugs every now and then? Are any of the frameworks you based your application on susceptible to change?), the kind of users you have (do they tinker a lot? Is your program part of a chain of tools that might change as well?), and the time that can realistically be invested in finding out what the bug actually is and how to fix it.

Time to conclude, I guess. Bugs are here to stay. Accept it from the beginning and rather than hoping for the best, prepare for the worst.

For developers, write debugging-friendly code. That means factoring your code as much as possible (if you fix a bug once, it’s fixed everywhere), having clear types and possible values for your parameters (none of that “if I cast it, it stops complaining”), and, I know it’s a little old-fashioned, but outputting debugging data in human-readable format at various choke points.
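For that last point, something as simple as a logging macro does the trick. This is only one possible sketch: DBGLog is a name I made up, and it relies on the DEBUG preprocessor flag that Xcode’s default project templates define for debug builds (check your build settings):

#ifdef DEBUG
#define DBGLog(fmt, ...) NSLog((@"[%s:%d] " fmt), __PRETTY_FUNCTION__, __LINE__, ##__VA_ARGS__)
#else
#define DBGLog(...) /* compiled out of release builds */
#endif

// At a choke point:
// DBGLog(@"about to save %lu objects, context has changes: %d",
//        (unsigned long)[objects count], [context hasChanges]);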

For project managers, make time for QA and debugging. It’s not shameful to have a bug in the product. It is shameful to have a stupid bug that could easily have been fixed if the dev team had had one more day, though. Don’t assume the developer who asks for a little more time, especially if he’s able to tell you the reasoning behind it, is a lazy bastard who should have done his job better the first time around. There is no reliable metric for the chance of a bug happening. The rule of thumb is that there will be a minor bug for every few user-centered features, and one major bug every couple of thousand lines of code.

And for end-users, be firm, but fair. While the number of bugs is not linked to the size of the company putting the application out there, finding out about one takes time and resources. If you paid for a piece of software, you sure have a right to ask the developer to fix it. And any developer who is proud of his work will definitely fix it. It may take a little while, though, depending on the bug, the size of the company, the number of products they have out there, and the size of said products. Even I, as a developer, don’t know how long it will take to fix a bug, so don’t expect anything instantaneous. Better to have a good surprise than insanely high expectations, right?
That being said, if your requests are ignored after a couple of releases, you have every right to withdraw your patronage and stop paying for the software or the service. It’s up to you to decide whether the fix is vital to your workflow or not. But think twice before alleging foul play.