[Xcode] IPA Generation Broken, Again

It happened before and shall happen again: xcrun PackageApplication doesn’t work anymore.

The whole process isn’t meant for contractors

Let’s face it, Xcode is getting better at the coding part, but the signature / IPA generation is “streamlined” for the App Store, which in turn rests on a lot of assumptions that I’ve mentioned here in the past. Since the last time, the heavy push for storyboards (oh, the tasty git conflicts) and the weird quirks of Swift haven’t really improved things. Dev tools should be as flexible as possible, because they are meant for devs.

Anyways, 8.3 broke the “easy” IPA generation that everyone was using. It was especially important for Hudson/Jenkins continuous integration and beta deliveries. No, relying on the Organizer’s UI doesn’t work, because more often than not, you want automation, and Xcode is less and less automatable.

PackageApplication is gone

It worked for a while, then you had to remove a part of the entitlement checking, but now it’s not there anymore. So… no more Hudson builds?

Fret not! The one-liner may be gone, but we have a heavily convoluted multi-step export process in place that does the same!
(Imaginary tech support person)

The idea is to replicate the UI process:

  • generate an archive
  • click export an IPA
  • select a bunch of signature and type options
  • rename all the things it exported (you usually want the build number/version in the file names)

Believe it or not, some of these are actual xcodebuild commands. But the argument list is… errrrr, weird, if you’re used to Linux and server stuff. So, no, in the following, it’s not a mistake that there’s only one dash. You have been warned.


There are two pieces to the export thing : a plist containing what you would type/click in the UI options at the end of the process, and a script that chains all the necessary actions together. I tried to piece together various sources into a coherent whole, but it’s a dev tool and your mileage may vary. Also, because it’s intended as a Hudson build script, it takes the version number as argument and outputs what I would upload to the beta server in Outfiles. Again, adapt it to your workflows.

The plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>method</key>
    <string>ad-hoc</string>
    <key>teamID</key>
    <string>TEAM_ID</string>
</dict>
</plist>

Remember to use the correct TEAM_ID in there.

The script (takes the version as first argument, may break if none is provided):

APP_NAME="Insert app name here"
SCHEME_NAME="Insert scheme name here"
CONFIGURATION="Insert configuration name here"
SDK="iphoneos"
ARCHIVE_DIRECTORY="Archives"
OUT_PATH="Outfiles"
#change as needed
echo "Cleaning previous build"
xcodebuild -sdk "$SDK" \
-scheme "$SCHEME_NAME" \
-configuration "$CONFIGURATION" clean
echo "Creating Archive"
xcodebuild -sdk "$SDK" \
-scheme "$SCHEME_NAME" \
-configuration "$CONFIGURATION" \
-archivePath "$ARCHIVE_DIRECTORY/$APP_NAME-$1.xcarchive" \
archive
echo "Creating IPA"
#export ipa
xcodebuild -exportArchive \
-archivePath "$ARCHIVE_DIRECTORY/$APP_NAME-$1.xcarchive" \
-exportPath "$OUT_PATH/" \
-exportOptionsPlist exportIPA.plist
#copy the dSYM out of the archive, then name files and dSYMs
cp -r "$ARCHIVE_DIRECTORY/$APP_NAME-$1.xcarchive/dSYMs/$APP_NAME.app.dSYM" "$OUT_PATH/$APP_NAME-$1.dSYM"
mv "$OUT_PATH/$APP_NAME.ipa" "$OUT_PATH/$APP_NAME-$1.ipa"
cd "$OUT_PATH"
zip -r "$APP_NAME-$1.dSYM.zip" "$APP_NAME-$1.dSYM"
rm -rf "$APP_NAME-$1.dSYM"
cd ..
# rm -rf Archives

As usual, use with wisdom ;)

Final Thoughts

It’s not that the system is bad, per se. It actually makes a lot of sense to have commands that mirror the GUI. But the issue is, for a long long long long time, automation hasn’t been about simulating clicks. Since you don’t have the UI, it makes even more sense to have shortcuts in addition to the GUI counterparts.


Some More Adventures in Swiftland

Last summer, I tried to learn new tricks and had some small successes. Because the pace at which Swift 3 was changing has finally settled a bit, I decided to finish what I set out to do and make SwiftyDB both up to date with the latest version and working on Linux.

The beginning

In August, SwiftyDB was working fine on MacOS but, while it compiled fine there, it didn’t compile on Linux, for a variety of reasons.

Swift was flimsy on that platform. The thing “worked”, but caused weird errors, had strange dependencies, and was severely lacking stuff in the Foundation library. The version I had then crashed all the time, but for random and different reasons, so I decided to wait till it stabilized.

With Swift 3.0.2 and the announcement that Kitura was to become one of the pillars of the official built-in server APIs (called it, by the way), I figured it was time to end the migration.

The problem

The main issue is that swift for Linux lacks basic things Foundation on the Mac has. I mean, it doesn’t even have NSDate’s timeIntervalSinceReferenceDate… But beyond that, the port lacks something that is truly important for the kind of framework that SwiftyDB is : introspection.

The typechecker is the absolute core of Swift. Don’t get me wrong, it’s great. It forces people to mind the type of the data they are manipulating, and throws errors early rather than late. But it comes at a cost : the compiler does all kinds of crazy operations to try to guess the type and, too often for my taste, fails miserably. If you’ve ever seen “IDE internal error” dialogs in Xcode, that’s probably the reason.

But even if it worked perfectly, the data manipulation needed to get rows in and out of the db requires either working with formless data (JSON) or having a way to coerce and map types at run time. And boy, Swift doesn’t like that at all.

So, SwiftyDB handles it in a hybrid way, passing along dictionaries of type [String : Any] and suchlike. It’s kind of gross, to be honest, but that’s the only way this is going to work.
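To illustrate (the keys and values here are my own, not SwiftyDB’s actual schema), here’s the kind of formless dictionary that gets passed around, and why it’s gross: every value has to be coerced back to a real type at run time.

```swift
// A hypothetical "row" as a formless [String: Any] dictionary.
// Values lose their static types going in...
let row: [String: Any] = ["id": 1, "name": "Zino", "latitude": 48.8616138]

// ...and have to be conditionally cast coming back out.
var name = "unknown"
if let n = row["name"] as? String {
    name = n
}
let latitude = row["latitude"] as? Double ?? 0.0
print(name, latitude)
```

The compiler can’t help you here: a typo in a key or a wrong cast only shows up at run time, which is exactly what the typechecker normally prevents.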

But that was kind of solved in August, albeit in a crash-prone way. What was the issue this time?

The swift team made huge strides to unify MacOS and Linux from an API point of view. If you read the doc, it “just works”, more or less. And that’s true, except for one tiny little thing : toll-free bridging.

Data type conversion

Swift, like Objective-C before it, deals with legacy stuff by having a toll-free bridging mechanism. Basically, to the compiler, NSString and String are interchangeable, and it will use the definition (and the methods) it needs based on the line it’s at, rather than as a static type.

As you surely know if you’ve done any kind of object oriented programming, typecasting is hard. If String inherits from NSString, I can use an object of the former type in any place I would have to use the latter. Think of the relationship between a square and a rectangle. The square is a rectangle, but the rectangle isn’t necessarily a square. It’s an asymmetrical relationship. And you can’t make it work by also having NSString inherit from String, because that’s not allowed, for a lot of complicated reasons, but with effects you can probably guess.

So, how does this toll-free bridging work? By cheating. But that’s neither here nor there. The fact is that it works just fine on MacOS, and not on Linux.

A solution

The easiest way to solve that is to have constructors in both classes that take the other as a parameter. And that’s the way it’s solved on Linux. True, it’s a bit inelegant, and negates most of the “pure sexiness” that Swift is supposed to have, but what are you gonna do? This, after all, is still a science.
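In modern Swift terms, the workaround looks something like this sketch (the real extensions in the port are more thorough; this is just the shape of the idea):

```swift
import Foundation

// Instead of relying on toll-free bridging (String as NSString),
// go through explicit constructors in each direction.
let swiftString = "hello"
let nsString = NSString(string: swiftString)     // String -> NSString
let backAgain = String(describing: nsString)     // NSString -> String
print(backAgain)
```

It’s verbose, but it compiles and behaves identically on both platforms, which is the whole point.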

Once those extensions are in place, as well as a few replacement additions to the stock Foundation (such as the infamous timeIntervalSinceReferenceDate), and a few tweaks to the way the system finds the SQLite headers, everything finally works.

Use it

As I said before, it’s mostly an intellectual exercise, and a way to see if I could someday write some server-side stuff, but in the meantime it works just fine and you can find it here. Feel free to submit pull requests and stuff, but as it is, it works as intended : Swift objects to SQLite storage and vice versa.

As usual, if you want to use it as a swift package, just use:

.Package(url: "https://github.com/krugazor/swiftydb", majorVersion: 1, minor: 2)


Web Services And Data Exchange

I may not write web code for a living (not much backend certainly and definitely no front-end stuff, as you can see around here), but interacting with webservices to the point of sometimes having to “fix” or “enhance” them? Often enough to have an Opinion.

There is a very strong divide between web development and more “traditional” heavy client/app development : most of the time, I tell people I write code for that these are two very distinct ways of looking at code in general, and user interaction in particular. I have strong reservations about the current way webapps are rendered and interacted with on my screen, but I cannot deny the visual and overall usage quality of some of them. When I look at what is involved in displaying that blog in my browser window, from the server resources that it takes to the hundreds of megabytes of RAM to run a couple paltry JS scripts in the window, the dinosaur that I am reels back in disgust, I won’t deny it.

But I also tell my students that you use the tool that’s right for the job, and I am not blind : it works, and it works well enough for the majority of the people out there.

I just happen to be performance-minded, and nothing about the standard mysql-php-http-html-css-javascript pipeline of delivering stuff to my eyeballs is exactly geared for performance. Sure, individually, these nodes have come a long long way, but as soon as you start passing data along the chain, you stack up transformation and iteration penalties very quickly.

The point

It so happens that I wanted to do a prototype involving displaying isothermic-like areas on a map, completely dynamic, and based on roughly 10k points whenever you move the camera a bit in regards to the map you’re looking at.

Basically, 10 000 x 3 numbers (latitude, longitude, and temperature) would transit from a DB to a map on a cell phone every time you moved the map by a significant factor. The web rendering on the phone was quickly abandoned, as you can imagine. So web service it is.

Because I’m not a web developer, and fairly lazy to boot, I went with something that even I could manage to write in : Silex (I was briefly tempted by Kitura, but it’s not yet ready for production when huge databases are involved).

Everyone told me since forever that SOAP (and XML) was too verbose and resource intensive to use. It’s true. I kinda like the built-in capability for data verification though. But never you mind, I went with JSON like everyone else.

JSON is kind of anathema to me. It represents everything someone who’s not a developer thinks data should look like :

  • there are 4 types that cover everything (dictionary, array, float, and string)
  • it’s human readable
  • it’s compact enough
  • it’s text

The 4 types thing, combined with the lack of metadata, means that it’s not a big deal for any of the pieces in the chain to swap between 1, 1.000, “1”, and “1.000”, which, to a computer, are three very distinct types with hugely different behaviors.
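You can watch the ambiguity happen with any JSON parser; here’s a quick Foundation sketch (the payload is my own contrived example):

```swift
import Foundation

// Four spellings a producer could legitimately emit for "one":
// the first two parse as numbers, the last two as strings, and
// nothing in the format itself says which one was intended.
let payload = "[1, 1.000, \"1\", \"1.000\"]".data(using: .utf8)!
let parsed = try! JSONSerialization.jsonObject(with: payload) as! [Any]

print(parsed[0], parsed[1])   // numbers
print(parsed[2], parsed[3])   // strings
```

Whether 1 and 1.000 come back as the same numeric type is up to the parser, not the data, which is exactly the problem.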

But in practice, for my needs, it meant that my decimal numbers, like, say, a latitude of 48.8616138, gobbles up a magnificent 10 bytes of data, instead of 4 (8 if you’re using doubles). That’s only the start. Because of the structure of the JSON, you must have colons and commas and quotes and keys. So for 3 floats (12 bytes – 24 bytes for doubles), I must use :
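something along these lines, where the key names are my invention but the shape and sizes are typical:

```json
{"lat":48.8616138,"lng":2.3522219,"t":18}
```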


That’s the shortest possible form – and not really human readable anymore when you have 10k of those – and it takes 41 bytes. That’s almost four times as much.


Well, for starters, the (mostly correct) assumption that if you have a web browser currently capable of loading a URL, you probably have the necessary bandwidth to load the thing – or at least that the user will understand page load times – fails miserably on a mobile app, where you have battery and cell coverage issues to deal with.

But even putting that aside, the JSON decoding of such large datasets was using 35% of my CPU cycles. Four times the size, plus a 35% performance penalty?

Most people who write webservices don’t have datasets large enough to really understand the cost of transcoding data. The server has a 4×2.8GHz CPU with gazillions of bytes of RAM, and it doesn’t really impact them, unless they do specific tests.

At this point, I was longingly looking at my old way of running CGI stuff in C when I discovered the pack() function in PHP. Yep. Binary packing. “Normal” sized data.
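To give an idea of what that buys you, here’s a Swift sketch of the same three numbers in packed binary form (this is my illustration, not the actual app code; the PHP side does the equivalent with pack()):

```swift
import Foundation

// Three Float32 values packed back to back: 12 bytes total,
// versus ~41 bytes for the equivalent JSON object.
let values: [Float] = [48.8616138, 2.3522219, 18.0]
var packed = Data()
for v in values {
    var bits = v.bitPattern.littleEndian
    withUnsafeBytes(of: &bits) { packed.append(contentsOf: $0) }
}
print(packed.count)   // 12 bytes

// Unpacking on the client is just reading the bytes back,
// no parsing, no type guessing.
let first = Float(bitPattern: UInt32(littleEndian:
    packed.withUnsafeBytes { $0.load(as: UInt32.self) }))
```

No quotes, no keys, no string-to-float conversion: the bytes are the number.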


Because of the large datasets and somewhat complex queries I run, I use PostgreSQL rather than MySQL or its infinite variants. It’s not as hip, but it’s rock-solid and predictable. I get my data in binary form. And my app now runs at twice the speed, without any optimization on the mobile app side (yet).

It’s not that JSON and other text formats are bad in themselves, just that they use a ton more resources than I’m ready to spend on “just” getting 30k numbers. For the other endpoints (like authentication and submitting new data), where performance isn’t really critical, they make sense. As much sense as possible given their natural handicap anyways.

But using the right tool for the right job means it goes both ways. I am totally willing to simplify backend development and make it more easily maintainable. But computers work the same way they have always done. Having 8 layers of interpretation between your code and the CPU may be acceptable sometimes, but remember that the olden ways of doing computer stuff, in binary, hex, etc, also provide a way to fairly easily improve performance : fewer layers, less transcoding, more CPU cycles for things that actually matter to your users.


You Are Not Alone

Employers need their star programmers to be leaders – to help junior developers, review code, perform interviews, attend more meetings, and in many cases to help maintain the complex legacy software we helped build. All of this is eminently reasonable, but it comes, subtly, at the expense of our knowledge accumulation rate.

From Ben Northrop’s blog

It should come as no surprise that I am almost in complete agreement with Ben in his piece. If I may nuance it a bit though, it is on two separate points : the “career” and some of the knowledge that is decaying.

On the topic of career, my take as a 100% self-employed developer is of course different from Ben’s. The hiring process is subtly different, and the payment model highly divergent. I don’t stay relevant through certifications, but through “success rate”, and clients may like buzzwords, but ultimately, they only care about their idea becoming real with the least amount of time and effort poured in (for v1 at least). While reading up and staying current on the latest flash-in-the-pan, as Ben puts it, allows someone like me to understand what it is the customer wants, it is by no means a requirement for a successful mission. In that, I must say I sympathize, but look at it in the same way I sometimes look at my students who get excited about an arcane new language. It’s a mixture of “this too shall pass” and “what makes it tick?”.

Ultimately, and that’s my second nuance to Ben’s essay, I think there aren’t many fundamentally different paradigms in programming either. You look at a new language, and put it in the procedural, object oriented, or functional categories (or a mixture of those). You note their resemblances and differences with things you know, and you kind of assume from the get-go that 90% of the time, should you need to, you could learn it very quickly. That’s the kind of knowledge you will probably never lose : the basic bricks of writing a program are fairly static, at least until we switch to quantum computing (ie not anytime soon). However fancy a language or a framework looks, the program you write ends up running roughly the same way your old programs used to. They add, shift, xor, bytes, they branch and jump, they read and store data.

To me that’s what makes an old programmer worth it, not that the general population will notice mind you : experience and sometimes math/theoretical CS education brought us the same kind of arcane knowledge that exists in every profession. You can kind of “feel” that an algorithm is wrong or how to orient your development before you start writing code. You have a good guesstimate, a good instinct, that keeps getting honed through the evolution of our field. And we stagnate only if we let it happen. Have I forgotten most of my ASM programming skills? sure. Can I read it still and understand what it does and how with much less of a sweat than people who’ve never even looked at low-level debugging? Mhm.

So, sure, it’ll take me a while to get the syntax of the new fad, as opposed to some new unencumbered mind. I am willing to bet though, that in the end, what I write will be more efficient. How much value does that have? Depends on the customer / manager. But with Moore’s law coming to a grinding halt, mobile development with its set of old (very old) constraints on the rise, and quantum computing pretty far on the horizon, I will keep on saying that what makes a good programmer good isn’t really how current they are, but what lessons they learn.


I Know People Call Me Crazy, But…

Some days in the life of a developer are a rabbit hole, especially rainy Saturdays, for some reason. The following story will be utter gibberish to non-developers or people who haven’t followed the news about Apple’s new language: Swift.

You have been warned and are still reading? Gosh, ok, I guess I have to deliver now. No story would be complete without a bit of background about the main protagonist: yours truly.

The story, till that fateful morning.

Despite being a mostly Apple oriented developer (it’s not exactly easy to change more than 15 years of habits), I freely admit I can be skeptical about some seemingly random changes the company makes. Swift can have lofty goals, like being able to work for any task a programmer is given, or being super easy to learn; it still has to deal with the grim reality of what our development work is: understanding what an apparent human being has in their head, and translating it in a way a computer (which is not just apparently stupid) can understand and act on as if it understood the original idea/concept/design, whatever youngsters call it these days.

It doesn’t help that Swift isn’t source compatible: any program written up to a month ago simply does not compile anymore. Yea. If there’s anything a computer is good at and humans suck at, it’s redoing the same task over and over again. I know I feel resentful when that happens.

For these reasons and more, I’ve held off switching to Swift as a primary language for my Apple related work. It doesn’t mean I don’t like it or can’t do it, just that I earn a living by writing code and making sure my customers don’t come back a month later with a little request that means I have to rewrite a bunch of code, because the language changed in between then and now.

The premise

I was excited about some things shown during WWDC2016, including the fact I may not have to learn javascript or upgrade my old and shabby knowledge of php to write decent backends for some projects I tend to refuse. IBM is hard at work on an open source and actually pretty decent Swift “Web Framework” (i.e. a server written in and extensible in Swift) named Kitura. I know there are alternatives, but Apple endorsed this one, and judging by the github activity, I have some faith in the longevity of the project.

Armed with my Contact List of Dinosaurus (+3 resist to Baffling), my Chair of Comfortness (-1 Agility, +3 Stamina), and my Giant Pot of Coffee (grants 3 Random Insights per day), I embarked on a quest that was probably meant for Lvl 9 Swift Coders rather than Lvl 10 Generalists, but it’s an adventure, right?

The beginning of the adventure, or the “But”

The aforementioned source instability means that the brave folks at IBM and their external contributors have to deal with rewrites of the API roughly every other week. Sometimes they are big, sometimes small, but it’s non trivial. At the time yours truly embarks on the journey, Kitura only works on the June 6th snapshot of Swift. One notable thing about this is that it’s actually different from the one that ships with the betas Apple provided to developers for the WWDC. It requires installing a separate toolchain that, while it can work with Xcode, is, to put it bluntly, a pain to work with: special shell variables, switching Xcode to the new toolchain (which, incidentally, isn’t a project setting, but applies to everything – you have to switch and restart the IDE every time).

After a solid hour of grinding through the process, the very simple Hello World sample finally loads in Safari, and our hero grins when the url http://localhost:8090/hello/Zino spits out

Hello, Zino!

Incidentally, it can also do the same thing using http://localhost:8090/hello/?name=Zino and its POST variant, because why the hell not, while I’m at it?
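For the curious, the route behind that greeting boils down to a few lines; this is a sketch from memory against the Kitura API of the time, so names may have drifted since:

```swift
import Kitura

let router = Router()

// /hello/Zino -> "Hello, Zino!"
router.get("/hello/:name") { request, response, next in
    let name = request.parameters["name"] ?? "stranger"
    try response.send("Hello, \(name)!").end()
}

Kitura.addHTTPServer(onPort: 8090, with: router)
Kitura.run()
```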

Whereupon, invigorated by the victory over the Guardian of the Door, we enter the rabbit hole proper and start making much longer headers

Once that particular beast is slain, I decide that experimenting with passing arguments through every possible means at my disposal is childish, and set my sights on middleware, and more specifically authentication. For those of you unfamiliar with the topic, good for you. It’s a mess that no amount of reading will make clearer. Encryption, storage, and tokens feature prominently, and the more you read about it the more you go ‘huh?’. Best practices are as varied as they are counter intuitive, and quite frankly, the reason why most websites out there either fail spectacularly at it or resign themselves to trusting third parties like facebook, google, or twitter, is that any way you look at it, it’s a tradeoff between security, sanity, and ease of use. And you can’t satisfy all three without going insane. It basically requires a complete rewrite of either how the web or humans work. And we know both are really hard to do.

Since the frameworks are in the middle of a transition to something that is already obsolete anyways, you can tell that some dependencies aren’t as much used as others. But let’s be clear on one thing: I do not blame anyone. It’s a stern chase, and a classic catch 22: why pour time and effort beyond a certain point since Swift will break everything again soon? Hopefully, now that source compatibility is on the agenda, we’ll be able to catch up.

But that’s not the point of the adventure, says the now tired Zino, it’s to gauge both my capacity to learn new tricks and my adaptability, so let’s pretend it actually serves a purpose! HTTP Basic and Digest are quickly worked out and I turn my sights to making my sample code actually do something realistic, like talking to a database. That’s what backends do, you know? They talk to stuff like PostgreSQL instances and construct responses to queries, isolating the base itself from prying eyes.

Medieval or primeval, call it what you wish

A not so quick trawling through the interwebs reveals almost nothing that deals with direct database access. There is simply no demand yet for it. Swift programs are currently user facing apps on iOS, and therefore do not need to interact with servers beyond happily chatting away with the kind of API I am currently trying to build. So, it starts looking like I will have to actually create a database driver to test my abilities. Quite honestly, I have been at it for a long time, if you look at the result. So writing a PDO-like framework will have to wait. The only alternative that seems reasonably maintained works with CouchDB, and there is absolutely no frigging way I am going to “embrace the web” by ditching a performance-oriented piece of software for a javascript-derived “storage format”. Call me a snob, but no thank you.

Then I remember I have successfully used a SQLite wrapper in a project using Swift. SQLite may not be PostgreSQL, but at least it tries. The library in question is SwiftyDB, and I highly recommend you take a peek at it if you want a Swift class to SQLite (and back) mapping that works as advertised. It doesn’t handle complex relationships yet, but for those who, like me, don’t want to have to deal with the idiosyncrasies of the SQL language and the various incompatibilities it entails, it’s quite a nifty piece of software.

So, thinks little me, all there is to do is to port SwiftyDB to Swift 3 (moar XP!). Too easy – why not also port it to the package manager Kitura and Apple use, instead of bloody Cocoapods (don’t get me started on pods; it’s a good idea, a necessity even, but implemented in a completely baffling and fragile way)? Port it to the Future, kind of thing. Yea, it’s only 18:00, and it will be a good source of XP as well. From medieval Swift to the Modern Era!

Modern, my %€!&@#$~!

The Swift Package Manager is the Nth re-invention of something we’ve been doing since forever in every language, and has its own particular twists, because it’s new, and why not do things in a new way?

It relies on git. Heavily. The version of the library you’re using as a dependency is predicated on the git tags. It clones whatever URL you said the manager would find it at, and looks for the version you want to use, based on the ubiquitous major, minor, and build numbers. If you want to have letters in there, or a versioning system that uses some other scheme, you’re screwed. Yup. Well, that’s not a concession that would warrant much outrage, even if more than half of my versioning naming schemes don’t work like that. Never mind, says I, I’ll bottle the cry for freedom and get to it.

Did I mention that I had git issues earlier in the process? Apparently, git 2.9.0 has a bug that prevents the package manager from working correctly, and you can’t use dependencies that aren’t git based. That took a while to figure out. But figure it out I did, and everything’s dandy again.

So, let’s take a look at SwiftyDB. It depends on TinySQL (by the same guy), which depends on sqlite3. A handful of swift files in each of the two frameworks, and a lib that is installed on every mac out there. What could go wrong?

The sad story of the poor toolchain that was shoved aside

In order to use C functions (which sqlite is made of), you need to trick Swift into importing them. This is done through the use of a modulemap, which is basically a file that says “if the developer says they want to use this package, what they really mean is that you should include this C header, and link toward that library”. The file sqlite3.h lives in /usr/include, so I start with that. Except it is incompatible with the Swift development snapshot, for Reasons. The package build system requiring git tags and stuff for the dependencies, it’s kind of a long process to make a small modification and recompile, but after half an hour I manage to find the right combination of commands to have a working module map. Short story is: you need to use the .sdk header, not the system one. I went through gallons of coffee and potential carpal tunnel syndrome so that other adventurers don’t have to. You’re welcome.
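For reference, the module map ends up looking something like this (the module name is a convention and the header path is an example – the exact .sdk path depends on your Xcode and toolchain versions):

```
module CSQLite [system] {
    header "/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sqlite3.h"
    link "sqlite3"
    export *
}
```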

Once that little (and awesomely frustrating) nugget has been dug up, all that’s left to do is port TinySQL, then SwiftyDB.

The package system is quirky. It requires a very specific structure and git shenanigans, but once you’ve got where everything should go, it’s just a matter of putting all the swift files in Sources and editing your Package.swift file carefully. It’s a shame that file isn’t more flexible (or expressive) but what are you going to do?

Now, the Swift 2 to Swift 3 migration isn’t as simple and self-explanatory as Chris lets on, let me tell you. Sure, the mandatory labels where they were optional before aren’t a big deal, but they throw the type inference system and the error checker into a fugue state that spouts gibberish such as “can’t find overloaded functor” or something or other, when I, who haven’t spent that much time writing the SwiftyDB code, can see there’s only one possible match.

Anyways, after much massaging and coaxing, and even cajoling, my SwiftyDB fork is finally swift-package-manager-compatible. With glazed eyes, I look up at the clock to see it’s now 21:00, and on a friggin Saturday, too. Very few lines of actual code were involved. And that still was way longer than expected. At this point, the hero of the story wonders if he should go on with his life, or if he should at least try to make everything explode by incorporating SwiftyDB in the sample project that sits there, stupidly saying hello to everyone who types in the right URL…

Aren’t you done yet? I’m hungry!

Sorry, sorry. The end was totally anti-climactic. The package integration worked on the first go, and SwiftyDB delivered without a hitch: I now have a database that holds stuff, and a simple web service that supports authentication and lets you store “notes” (aka blurbs of whatever text you want to throw at it) without so much as a hiccup.

But, as I’m sitting here, on the body of that slain foe, recounting my adventure for the folks who have no idea what the process of such an endeavor is, and intermittently looking at a boiling pot containing my well deserved dinner, I wonder if anyone will see what makes this an adventure: the doubts, the obstacles, the detours, and finally, hopefully, the victory. All that for something that isn’t even needed, or for all I know wanted, by anyone but a lone developer looking for an excuse to take on a challenge.

If, for reasons of your own, you want to use my work in your own package manager experiment, be it with or without Kitura, all you have to do is include this as a dependency:

.Package(url: "https://github.com/krugazor/swiftydb", majorVersion: 1, minor: 2)

Currently, it works with the same dependencies as Kitura (snapshot 06 06), and I may even update it along with it.