[Xcode] IPA Generation Broken, Again

It happened before and shall happen again: xcrun PackageApplication doesn’t work anymore.

The whole process isn’t meant for contractors

Let’s face it, Xcode is getting better at the coding part, but the signing / IPA generation is “streamlined” for the App Store, which in turn rests on a lot of assumptions that I’ve mentioned here in the past. Since the last time, the heavy push for storyboards (oh, the tasty git conflicts) and the weird quirks of Swift haven’t really improved things. Dev tools should be as flexible as possible, because they are meant for devs.

Anyway, Xcode 8.3 broke the “easy” IPA generation that everyone was using. It was especially important for Hudson/Jenkins continuous integration and beta deliveries. No, relying on the Organizer’s UI doesn’t work, because more often than not you want automation, and Xcode is less and less automatable.

PackageApplication is gone

It worked for a while, then you had to remove a part of the entitlement checking, but now it’s not there anymore. So… no more Hudson builds?

Fret not! The one-liner may be gone, but we have a heavily convoluted multi-step export process in place that does the same!
(Imaginary tech support person)

The idea is to replicate the UI process:

  • generate an archive
  • click “Export” to get an IPA
  • select a bunch of signature and type options
  • rename all the things it exported (you usually want the build number/version in the file names)

Believe it or not, some of these are actual xcodebuild commands. But the argument list is… errrrr, weird, if you’re used to Linux and server stuff. So, no, in the following, it’s not a mistake that there’s only one dash. You have been warned.

Gimme!

There are two pieces to the export thing: a plist containing what you would type/click in the UI options at the end of the process, and a script that chains all the necessary actions together. I tried to piece together various sources into a coherent whole, but it’s a dev tool and your mileage may vary. Also, because it’s intended as a Hudson build script, it takes the version number as an argument and outputs what I would upload to the beta server into Outfiles. Again, adapt it to your workflows.

The plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>method</key>
    <string>development</string>
    <key>teamID</key>
    <string>TEAM_ID</string>
    <key>uploadBitcode</key>
    <true/>
    <key>uploadSymbols</key>
    <true/>
  </dict>
</plist>

Remember to use the correct TEAM_ID in there, and save the file as exportIPA.plist, since that’s the name the script below expects.

The script (takes the version as first argument, may break if none is provided):

#!/bin/bash
 
APP_NAME="Insert app name here"
SCHEME_NAME="Insert scheme name here"
CONFIGURATION="Insert configuration name here"
 
ARCHIVE_DIRECTORY=Archives
OUT_PATH=Outfiles
 
#change as needed
SDK=iphoneos10.3
 
echo "Cleaning previous build"
 
#clean
xcodebuild -sdk "$SDK" \
-scheme "$SCHEME_NAME" \
-configuration "$CONFIGURATION" clean
 
echo "Creating Archive"
 
#archive
xcodebuild -sdk "$SDK" \
-scheme "$SCHEME_NAME" \
-configuration "$CONFIGURATION" \
-archivePath "$ARCHIVE_DIRECTORY/$APP_NAME-$1.xcarchive" \
archive
 
echo "Creating IPA"
 
#export ipa
xcodebuild -exportArchive \
-archivePath "$ARCHIVE_DIRECTORY/$APP_NAME-$1.xcarchive" \
-exportPath "$OUT_PATH/" \
-exportOptionsPlist exportIPA.plist
 
#name files and dSYMs
mv "$OUT_PATH/$APP_NAME.ipa" "$OUT_PATH/$APP_NAME-$1.ipa" 
cp -rf "$ARCHIVE_DIRECTORY/$APP_NAME-$1.xcarchive/dSYMs/$APP_NAME.app.dSYM" "$OUT_PATH/$APP_NAME-$1.dSYM"
cd "$OUT_PATH"
zip -r "$APP_NAME-$1.dSYM.zip" "$APP_NAME-$1.dSYM"
rm -rf "$APP_NAME-$1.dSYM"
cd ..
 
#optionally
# rm -rf Archives

As usual, use with wisdom ;)

Final Thoughts

It’s not that the system is bad, per se. It actually makes a lot of sense to have commands that mirror the GUI. But the issue is, for a long long long long time, automation hasn’t been about simulating clicks. Since you don’t have the UI, it makes even more sense to have shortcuts in addition to the GUI counterparts.

  

Failed Oracle

I (sometimes) write stuff here because I find something fascinating, or ground-breaking, or weird. More often than not, I tend to express some kind of prophecy, because who doesn’t look at the future and make some bets, right?

Well, I apparently suck at it.

Apple TV

A decade ago, I wrote that I was excited about the possibilities inherent to the AppleTV. I did it again a couple of years ago.

Despite the many rumors and excitement, Apple TV is simply going nowhere. There’s a variety of reasons for that, but predominantly, it’s in the name: it’s about TV. Sure, it helps put stuff on the TV through AirPlay, and consume content that isn’t going through <insert the normal way you get your TV here>, but it could have been so much more. Open up a way to plug stuff in there? Blu-ray players and really smart DVRs are on the horizon. Lift the ban on emulation (within legal constraints, of course), and all that retro-gaming stuff is a done deal. At the very least, open it up as a network access point, or a network extender.

But the years pass, and we hear a lot of exciting rumors (none of which anyone but US denizens cares about), and it’s still mostly a Netflix/iTunes/AirPlay box.

Multipeer Connectivity

I did several talks on Multipeer tech, spread the gospel, etc.

I wasn’t alone in that, judging from NSHipster (2013):

Multipeer Connectivity is a ground-breaking API, whose value is only just starting to be fully understood. Although full support for features like AirDrop are currently limited to latest-gen devices, you should expect to see this kind of functionality become expected behavior. As you look forward to the possibilities of the new year ahead, get your head out of the cloud, and start to consider the incredible possibilities around you.

But apart from AirDrop and Continuity (when it works), are there any serious uses of the technology? If you answered yes, are there any outside of Apple?

Why?

Why am I taking that nostalgic trip? There is a battle of sorts being waged in Apple punditry about the iPad. Is it a failure? Is it underrated? Underused? What’s its future?

I have an opinion, but given my track record, I’ll keep it to myself.

One thing’s for certain though: cool tech that appeals to geeks such as myself, or to old-timer Apple people (such as myself, again), doesn’t necessarily go the whole way. Revolutionary as the iPhone may seem, and don’t get me wrong, it’s a very cool device, it’s actually not revolutionary, and wasn’t at the time. What made it special is that Apple used all of its creative genius and tech know-how to make the best damn phone they could, and it was a success. But just remember that originally, we weren’t supposed to develop apps for it. Apps are arguably what makes a smartphone popular (I’m looking at you, Metro). Cool tech is all well and good, but it has to come into widespread use to be relevant. And the mechanics of that process baffle me, and I suspect they baffle most commentators, whether they realize it or not.

  

Anonymity Isn’t The Problem, You Are

An excellent debunking of the idea that requiring real names prevents abuse, anonymity usually being thought of as the number one reason people behave badly on the interwebs.

I’ll let you read it. There, you done?

My Bad

I have held that opinion for a while, despite my own evidence to the contrary. I come from a time when nicks, handles, and pseudonyms were the norm. I always had trouble writing under my own name or using anything but my handle for communication. Hell, even day to day, I use something that can be regarded as an alias: Zino. However, I never shied away from giving my real address, or phone number. It’s out there, you can find it fairly easily. I won’t pick up, mind you, but ¯\_(ツ)_/¯

As I read this article, I was thinking of why I don’t feel like using my name. It could be from the habit of using handles for most of my adult life, or because my name isn’t me. I very seldom turn around when someone actually uses my birth name.

So why did I assume other people who use handles to spout abuse were using them to hide their identity rather than out of comfort, like me?

Honestly, I have no idea. But I will try to stop saying that there are more trolls on the internet than anywhere else because the internet promotes anonymity.

What Gives, Then?

Weeeeeeeeeeell. Not to get too political, but I tend to agree that we live in a post-facts world. The reasons and examples are well described in the article, but in a nutshell, the ‘web gives you an opportunity to voice your opinion more readily than any other medium. And opinions are increasingly considered as important as facts (if not more).

I have an opinion on curly brace placement when I write my code. The fact is, it matters little, if at all, to the compiler or to the quality of the program in the end. It is more legible for me to have it a bit more compact, and if hard pressed on the topic, I can probably concede to the other style (while secretly still using mine, probably).

I have an opinion as to how public money (ie tax revenue) should be spent, and therefore a fairly strong opinion as to what the level of taxes should be. The only way I can turn it into facts, though, is to vote and convince others to vote like me.

The internet would allow me to select (or invent) facts to support my opinion and present a case that it should be everyone’s. Who’s going to fact-check me, or challenge my opinion? People with different opinions who will select a different set of facts, obviously.

The problem isn’t the anonymity, it’s the way you discuss things. The internet allows everyone to shout something to the world. Most people tend to forget that this amazing new way to exercise a right (yes, it’s a right) comes at a cost: the world can answer.

Trolls seek to elicit that answer by any means necessary, so I’ll just set them aside; we’ve always had them. People systematically playing Devil’s Advocate, or trying to get a rise out of others for more personal reasons, have always been around. We just didn’t hear them as much because they had to go through a bunch of filters (physical proximity, newspaper and TV editors, etc.).

But people who are convinced their opinion is the right one now have the ability to express themselves. It’s up to us to remind them that everyone with the access they enjoy has the same right, and that opinions not only can, but will, be challenged. If you can’t take it, your opinion is de facto useless.

So what’s that jibber-jabber about facts? Facts are supposed to be the thing that everyone agrees on. Then you use that fact to promote an opinion.

Example:

Facts:
– computer geeks have been mostly looked down on by society throughout their short history (cf. almost every single movie and TV show)
– computer geeks are needed in every sector that functions better with a computer, due to the increasing complexity of computer systems
– computer geeks are human
– most humans require validation and recognition to function as part of a group

Opinion #1:
Computers, and by extension computer geeks, are not bringing anything new, they are just new tools to help creative people build new things. They should therefore be glad if they aren’t badly treated anymore, but not to the point of expecting the rock star treatment.

Opinion #2:
Now that computer geeks are needed pretty much everywhere, it’s time to take over the world and teach all these bullies who’s boss.

Opinion #3:
Computer geeks finally got the recognition for the intuition they’ve had since forever: the computer did change the world. They deserve to be integrated into society, like every other profession.

Facts support those 3 opinions, so why be surprised when your favorite geek exhibits any of them? They can even change their opinion in the middle of a sentence! None of those are facts, though. Facts are about what has been and what is. Opinions are about what should be. And what should be is by definition debatable. And debate is the single reason for the internet to exist.

But I’ll add another fact to the pile: I do not have to share my opinion, and neither do you. The simple fact that I do doesn’t make my opinion any more or less valid than it was before. It just subjects it to open debate, and all the pros and cons that come with it.

I am of the opinion that sharing and discussing things is better than the alternative, but if you forcibly shut me up, it doesn’t make my opinion less valid. If you invalidate or complete the facts I am using to prop that opinion up, though…

For all of those who still have trouble differentiating the two, I suggest reading a bit of Plato. The guy was a mastermind at asking questions to test the opinions of others.

One last thing

An unverifiable opinion (something that is supported by too few facts) is called a belief. You basically replace some of the facts that you build your opinion on with other opinions. That often includes taking an example (a highly selected example, most of the time) and promoting it to a fact. These are OK to have, as long as no fact comes into contradiction with any of the opinions at the base of your reasoning.

For instance? I believe that the human race as a whole is compassionate, despite plenty of proof to the contrary. According to my previous paragraph, that belief should crumble, right?

Wrong. Because the core “leg”, so to speak, of that belief is that people who have done bad things can indeed change for the better. And there are examples, sure, but no fact supporting that claim. It’ll just have to stay a belief up until the point where we can definitively prove or disprove the notion of free will.

  

Some More Adventures in Swiftland

Last summer, I tried to learn new tricks and had some small success. Because the pace at which Swift 3 was changing finally settled a bit, I decided to finish what I set out to do and make SwiftyDB both up to date with the latest version and working on Linux.

The beginning

In August, SwiftyDB was working fine on macOS; on Linux, it compiled fine, but didn’t actually work, for a variety of reasons.

Swift was flimsy on that platform. The thing “worked”, but caused weird errors, had strange dependencies, and was severely lacking in terms of the Foundation library. The version I had then crashed all the time, but for random and different reasons, so I decided to wait till it stabilized.

With Swift 3.0.2 out and the announcement that Kitura was to become one of the pillars of the official built-in server APIs (called it, by the way), I figured it was time to finish the migration.

The problem

The main issue is that Swift for Linux lacks basic things that Foundation on the Mac has. I mean, it doesn’t even have NSDate’s timeIntervalSinceReferenceDate… But beyond that, the port lacks something that is truly important for the kind of framework SwiftyDB is: introspection.

The typechecker is the absolute core of Swift. Don’t get me wrong, it’s great. It forces people to mind the type of the data they are manipulating, and throws errors early rather than late. But it comes at a cost: the compiler does all kinds of crazy operations to try to guess the type and, too often for my taste, fails miserably. If you’ve ever seen “IDE internal error” dialogs in Xcode, that’s probably the reason.

But even if it worked perfectly, the data manipulation needed to get rows in and out of the database requires either working with formless data (JSON) or having a way to coerce and map types at run time. And boy, Swift doesn’t like that at all.

So, SwiftyDB handles it in a hybrid way, passing along dictionaries of type [String : Any] and suchlike. It’s kind of gross, to be honest, but that’s the only way this is going to work.
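
To make that a bit more concrete, here is a minimal sketch (my illustration, not SwiftyDB’s actual code) of how Mirror-based introspection can flatten a value into that kind of [String : Any] dictionary:

import Foundation

struct Measurement {
    let city: String
    let temperature: Double
}

// Uses runtime reflection to collect a value's stored properties
// into a [String : Any] dictionary, i.e. the formless data an
// ORM-ish layer can then hand over to SQLite.
func dictionary(from value: Any) -> [String : Any] {
    var result = [String : Any]()
    for child in Mirror(reflecting: value).children {
        if let label = child.label {
            result[label] = child.value
        }
    }
    return result
}

let row = dictionary(from: Measurement(city: "Paris", temperature: 21.5))
// ["city": "Paris", "temperature": 21.5]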

But that was kind of solved in August, albeit in a crash-prone way. What was the issue this time?

The Swift team made huge strides to unify macOS and Linux from an API point of view. If you read the docs, it “just works”, more or less. And that’s true, except for one tiny little thing: toll-free bridging.

Data type conversion

Swift, like Objective-C before it, deals with legacy stuff by having a toll-free bridging mechanism. Basically, to the compiler, NSString and String are interchangeable, and it will use whichever definition (and methods) it needs based on the line it’s on, rather than treating them as one fixed static type.

As you surely know if you’ve done any kind of object-oriented programming, typecasting is hard. If String inherits from NSString, I can use an object of the former type in any place I would have to use the latter. Think of the relationship between a square and a rectangle. The square is a rectangle, but the rectangle isn’t necessarily a square. It’s an asymmetrical relationship. And you can’t make it work by also having NSString inherit from String, because that’s not allowed for a lot of complicated reasons, but with effects you can probably guess.

So, how does this toll-free bridging work? By cheating. But that’s neither here nor there. The fact is that it works just fine on macOS, but not on Linux.

A solution

The easiest way to solve that is to have constructors in both classes that take the other as a parameter. And that’s the way it’s solved on Linux. True, it’s a bit inelegant, and negates most of the “pure sexiness” that Swift is supposed to have, but what are you gonna do? This, after all, is still a science.
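
A minimal sketch of what such constructors can look like (the names are mine, not necessarily the project’s; NSString(string:) and the description property are stock Foundation on both platforms):

import Foundation

// On macOS, toll-free bridging lets you write "someString as NSString" and move on.
// On Linux (with Swift 3.x at least), that cast isn't there, so explicit
// constructors in both classes do the job instead.
extension NSString {
    convenience init(bridging string: String) {
        self.init(string: string)       // plain initializer, available on both platforms
    }
}

extension String {
    init(bridging nsstring: NSString) {
        self = nsstring.description     // NSString's description is its contents
    }
}

let bridged = NSString(bridging: "hello")
let roundTrip = String(bridging: bridged)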

Once those extensions are in place, as well as a few replacement additions to the stock Foundation (such as the infamous timeIntervalSinceReferenceDate), and a few tweaks to the way the system finds the SQLite headers, everything finally works.

Use it

As I said before, it’s mostly an intellectual exercise, and a way to see if I could someday write some server-side stuff, but in the meantime it works just fine and you can find it here. Feel free to submit pull requests and stuff, but as it is, it works as intended: Swift objects to SQLite storage and vice versa.

As usual, if you want to use it as a Swift package, just use:

.Package(url: "https://github.com/krugazor/swiftydb", majorVersion: 1, minor: 2)
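
For context, that line goes in the dependencies array of a Swift 3 style Package.swift, something like this (the package name is a placeholder):

import PackageDescription

let package = Package(
    name: "MyServerApp",   // hypothetical name, use your own
    dependencies: [
        .Package(url: "https://github.com/krugazor/swiftydb", majorVersion: 1, minor: 2)
    ]
)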

  

Web Services And Data Exchange

I may not write web code for a living (not much backend certainly and definitely no front-end stuff, as you can see around here), but interacting with webservices to the point of sometimes having to “fix” or “enhance” them? Often enough to have an Opinion.

There is a very strong divide between web development and more “traditional” heavy client/app development: most of the time, I tell the people I write code for that these are two very distinct ways of looking at code in general, and at user interaction in particular. I have strong reservations about the current way webapps are rendered and interacted with on my screen, but I cannot deny the visual and overall usage quality of some of them. When I look at what is involved in displaying this blog in my browser window, from the server resources it takes to the hundreds of megabytes of RAM needed to run a couple of paltry JS scripts in the window, the dinosaur that I am reels back in disgust, I won’t deny it.

But I also tell my students that you use the tool that’s right for the job, and I am not blind: it works, and it works well enough for the majority of the people out there.

I just happen to be performance-minded, and nothing about the standard mysql-php-http-html-css-javascript pipeline for delivering stuff to my eyeballs is exactly optimized for that. Sure, individually, these nodes have come a long, long way, but as soon as you start passing data along the chain, you stack up transformation and iteration penalties very quickly.

The point

It so happens that I wanted to build a prototype displaying isotherm-like areas on a map, completely dynamic, and based on roughly 10k points, refreshed whenever you move the camera a bit relative to the map you’re looking at.

Basically, 10 000 x 3 numbers (latitude, longitude, and temperature) would transit from a DB to a map on a cell phone every time you moved the map by a significant factor. The web rendering on the phone was quickly abandoned, as you can imagine. So web service it is.

Because I’m not a web developer, and fairly lazy to boot, I went with something that even I could manage to write in: Silex (I was briefly tempted by Kitura, but it’s not yet ready for production when huge databases are involved).

Everyone has told me since forever that SOAP (and XML) was too verbose and resource-intensive to use. It’s true. I kinda like the built-in capability for data validation, though. But never you mind, I went with JSON like everyone else.

JSON is kind of anathema to me. It represents everything someone who’s not a developer thinks data should look like:

  • there are 4 types that cover everything (dictionary, array, number, and string)
  • it’s human readable
  • it’s compact enough
  • it’s text

The 4 types thing, combined with the lack of metadata, means that it’s not a big deal for any of the pieces in the chain to swap between 1, 1.000, “1”, and “1.000”, which, to a computer, are 3 very distinct types with hugely different behaviors.
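
You can see that blurriness in a couple of lines; JSONSerialization is stock Foundation, the rest is just a quick probe, and the exact types you get back depend on the platform and Foundation version:

import Foundation

let payload = "[1, 1.000, \"1\", \"1.000\"]".data(using: .utf8)!
let decoded = try! JSONSerialization.jsonObject(with: payload) as! [Any]

for value in decoded {
    // The same "one-ish" value comes back as three different runtime types:
    // an integer, a floating-point number, and two strings.
    print(type(of: value), value)
}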

But in practice, for my needs, it meant that my decimal numbers, like, say, a latitude of 48.8616138, gobble up a magnificent 10 bytes of data each, instead of 4 (8 if you’re using doubles). That’s only the start. Because of the structure of the JSON, you must have colons and commas and quotes and keys. So for 3 floats (12 bytes, or 24 bytes for doubles), I must use:

{lat:48.8616138,lng:2.4442788,w:0.7653901}

That’s the shortest possible form (and not really human readable anymore when you have 10k of those), and it takes 42 bytes. That’s almost four times as much. Multiply by 10,000 points, and the payload goes from roughly 120 KB of raw floats to over 400 KB of JSON, before you even add the array brackets and separators.

Furthermore

Well, for starters, the (mostly correct) assumption that if you have a web browser currently capable of loading a URL, you probably have the necessary bandwidth to load the thing – or at least that the user will understand page load times – fails miserably on a mobile app, where you have battery and cell coverage issues to deal with.

But even putting that aside, the JSON decoding of such a large dataset was using 35% of my CPU cycles. Four times the size, plus a 35% performance penalty?

Most people who write webservices don’t have datasets large enough to really understand the cost of transcoding data. The server has a 4×2.8GHz CPU with gazillions of bytes of RAM, and it doesn’t really impact them, unless they run specific tests.

At this point, I was longingly looking at my old way of running CGI stuff in C when I discovered the pack() function in PHP. Yep. Binary packing. “Normal” sized data.
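
On the app side, undoing that packing is a handful of lines. Here is a sketch, assuming the endpoint returns a flat blob of 32-bit little-endian floats in lat/lng/weight order (the struct and function are mine, not the actual app’s):

import Foundation

struct Point {
    let lat: Float
    let lng: Float
    let weight: Float
}

// Turns the packed binary payload back into points.
// Assumes sender and receiver agree on little-endian 32-bit floats,
// which holds for iOS devices talking to an x86 Linux box.
func decodePoints(from data: Data) -> [Point] {
    let floatCount = data.count / MemoryLayout<Float>.size
    var floats = [Float](repeating: 0, count: floatCount)
    _ = floats.withUnsafeMutableBufferPointer { data.copyBytes(to: $0) }

    let pointCount = floatCount / 3
    var points = [Point]()
    points.reserveCapacity(pointCount)
    for i in 0..<pointCount {
        points.append(Point(lat: floats[3 * i],
                            lng: floats[3 * i + 1],
                            weight: floats[3 * i + 2]))
    }
    return points
}

// 12 bytes per point instead of ~42 of JSON, and no text parsing at all.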

Conclusion

Because of the large datasets and somewhat complex queries I run, I use PostgreSQL rather than MySQL or its infinite variants. It’s not as hip, but it’s rock-solid and predictable. I get my data in binary form. And my app now runs at twice the speed, without any optimization on the mobile app side (yet).

It’s not that JSON and other text formats are bad in themselves, just that they use a ton more resources than I’m ready to spend on “just” getting 30k numbers. For the other endpoints (like authentication and submitting new data), where performance isn’t really critical, they make sense. As much sense as possible given their natural handicap anyways.

But using the right tool for the right job means it goes both ways. I am totally willing to simplify backend development and make it more easily maintainable. But computers work the same way they always have. Having 8 layers of interpretation between your code and the CPU may be acceptable sometimes, but remember that the olden ways of doing computer stuff, in binary, hex, etc., also provide a way to fairly easily improve performance: fewer layers, less transcoding, more CPU cycles for things that actually matter to your users.

  