While working on RemoteTickets, I had to rely heavily on NSURL. For some reason, it didn’t work with Dam’s RT system.

After a couple of hours of debugging, here’s the thing: NSURL doesn’t work the way you’d expect.

Here’s what the documentation for URLWithString:relativeToURL: says:

Creates and returns an NSURL object initialized with a base URL and a relative string.

+ (id)URLWithString:(NSString *)URLString relativeToURL:(NSURL *)baseURL


* URLString
The string with which to initialize the NSURL object. May not be nil. Must conform to RFC 2396. URLString is interpreted relative to baseURL.

* baseURL
The base URL for the NSURL object.

Return Value
An NSURL object initialized with URLString and baseURL. If URLString was malformed, returns nil.

This method expects URLString to contain any necessary percent escape codes.

Available in iOS 2.0 and later.

Seems to me it means I’m building a URL by concatenation. Well, that’s not the case:

    NSURL *baseURL = [NSURL URLWithString:@"http://www.apple.com/macosx"];
    NSURL *compositeURL = [NSURL URLWithString:@"/lion" relativeToURL:baseURL];

should give http://www.apple.com/macosx/lion, right? It gives http://www.apple.com/lion instead. The baseURL parameter is effectively treated as “the URL to take the base from”.

The following code produces the output posted below it. Use with caution.

    NSURL *baseURL = [NSURL URLWithString:@"http://www.apple.com/macosx/"];
    NSURL *relativeURL = [NSURL URLWithString:@"/lion" relativeToURL:baseURL];
    NSURL *relativeURL2 = [NSURL URLWithString:[NSString stringWithFormat:@"%@/lion", [baseURL absoluteString]]];
    NSURL *relativeURLLvl211 = [NSURL URLWithString:@"/new" relativeToURL:relativeURL];
    NSURL *relativeURLLvl221 = [NSURL URLWithString:@"/new" relativeToURL:relativeURL2];
    NSURL *relativeURLLvl212 = [NSURL URLWithString:[NSString stringWithFormat:@"%@/new", [relativeURL absoluteString]]];
    NSURL *relativeURLLvl222 = [NSURL URLWithString:[NSString stringWithFormat:@"%@/new", [relativeURL2 absoluteString]]];
    NSLog(@"%@", [NSString stringWithFormat:@"baseURL:\n%@ (%@)\n\nLevel 1:\nRelative:\n%@ (%@)\nAbsolute:\n%@ (%@)\n\nLevel 2:\nRelative/Relative:\n%@ (%@)\nRelative/Absolute:\n%@ (%@)\nAbsolute/Relative:\n%@ (%@)\nAbsolute/Absolute:\n%@ (%@)\n",
                 baseURL, [baseURL absoluteString],
                 relativeURL, [relativeURL absoluteString],
                 relativeURL2, [relativeURL2 absoluteString],
                 relativeURLLvl211, [relativeURLLvl211 absoluteString],
                 relativeURLLvl221, [relativeURLLvl221 absoluteString],
                 relativeURLLvl212, [relativeURLLvl212 absoluteString],
                 relativeURLLvl222, [relativeURLLvl222 absoluteString]]);

baseURL:
http://www.apple.com/macosx/ (http://www.apple.com/macosx/)

Level 1:
Relative:
/lion -- http://www.apple.com/macosx/ (http://www.apple.com/lion)
Absolute:
http://www.apple.com/macosx//lion (http://www.apple.com/macosx//lion)

Level 2:
Relative/Relative:
/new -- http://www.apple.com/lion (http://www.apple.com/new)
Relative/Absolute:
/new -- http://www.apple.com/macosx//lion (http://www.apple.com/new)
Absolute/Relative:
http://www.apple.com/lion/new (http://www.apple.com/lion/new)
Absolute/Absolute:
http://www.apple.com/macosx//lion/new (http://www.apple.com/macosx//lion/new)
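
In hindsight, the behavior is the one mandated by RFC 2396: a relative string that starts with “/” is an absolute path, so it replaces the base URL’s entire path and keeps only the scheme and host. If what you want is plain concatenation, end the base URL with “/” and drop the leading slash from the relative string. A minimal sketch:

    NSURL *baseURL = [NSURL URLWithString:@"http://www.apple.com/macosx/"];
    // No leading "/": the string is resolved inside the base URL's path.
    NSURL *lionURL = [NSURL URLWithString:@"lion" relativeToURL:baseURL];
    // [lionURL absoluteString] gives http://www.apple.com/macosx/lion

Same API, same documentation, but now the resolution actually nests under /macosx/.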


Regarding the AppStore for iPhone/iPod Touch

I was reading John Gruber’s piece on Opera Mini, and although, as usual, John is pretty thorough with his analysis, the last sentence made me jump.

Again, though, just because an app doesn’t violate the rules doesn’t mean Apple will accept it.

It kind of invalidates everything anybody says or writes about the AppStore. And the sad thing is that it’s true. I wrote an app for Rebel Software (Spin the Bottle, in case anybody’s wondering). We were pretty happy with it: it respected what we thought were the guidelines, and it looked pretty good, if not “useful” (it’s just a game, people). It was also the first app of its kind on the Store.

It took 6 weeks to validate the app, and a couple more after I fixed a bug they had uncovered. So on the one hand, the validation process is actually pretty thorough, or so it seems. On the other hand, while the app was being reviewed, four other spin-the-bottle apps were released. They are simpler, true. But come on!

The whole mechanism looks good on paper. Apple “filters” bad applications, or buggy ones. The software developer doesn’t have to worry about the installation/update system on the device. And the user has access to everything pretty easily.

But the whole process is very opaque. How come it took so long? No one knows. How come some apps are rejected even though they follow the guidelines? No one knows.

I’m not saying that every app should be accepted immediately, or that Apple shouldn’t reserve the right to forbid an app from running on their device. It’s their business model, their choice. I may not agree for philosophical reasons, but from a different point of view, I guess it makes some sort of sense.

What I’m saying is that making the validation process opaque is a mistake. With a clear set of rules, some apps would not even be submitted. Less work for Apple, less work for the developers who won’t even try. When my app was stuck, a contact of mine told me I should have said something, and that he would’ve accelerated the process a bit. Why should I need that? And what happens to a developer who knows nobody within the Forbidden City?

The whole thing looks fishy. People with good ideas don’t pursue them because the app might get rejected (and working on something for nothing costs money). People with bad ideas but some pull might get apps accepted that probably shouldn’t be, if you have Standards. The whole thing takes too much time and brain power from everybody.

I talked with a friend about the possibility of submitting a prototype to Apple before going any further, to see if the app had any chance of being rejected. He replied he wouldn’t do that, because he’s afraid someone else might take the idea and implement it before he has time to. And while some applications get rejected because they do something too close to the built-in apps, you can still find several applications that do the exact same thing, competing against each other. Why doesn’t he trust Apple to respect the secrecy on that?

So the fog, instead of creating a healthy competitive environment, looks like it’s promoting arbitrary, or network-b(i)ased, decisions. Transparency is not optional here. Some rejection rules might be unfair, true. But why be ashamed of them? If the rules are written down in a way that non-lawyers can understand, they will be obeyed. Everybody wins.

OK, I’m naive. OK, I’m optimistic. But either you trust the users not to buy the apps that suck (don’t laugh), or you find a straightforward way to define what an acceptable app is. Spread the responsibility around a bit. We all want the platform to be the best there is. Why not work together?


Please don’t guesstimate me

It so happens that I load Google’s webpage from a clean user account. Now this account has the US locale, and is set to English throughout the system. Then why in hell does Google think I want the page displayed in French?

For some reason, based on the fact that my IP matches a French ISP, a lot of websites assume I am French (which I am, but then again, I could be there for only a few days) and display their contents accordingly. So what I get is French text on the page, French advertising, and (if enabled) a French keyboard layout for any virtual keyboard that may appear on the page (having a virtual keyboard on a webpage is something of a puzzle to me, but that’s another story)…

Now, some of you may think that it’s an easy guess, and a reasonable one at that. Well, I think IP-to-country reverse mapping is incomplete at best, random in most cases. My browser’s identification string, however, is not ambiguous at all (for instance, with Safari):

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_2; en-us) AppleWebKit/525.18 (KHTML, like Gecko) Version/3.1.1 Safari/525.18

Dude, that’s the first thing the browser sends to the web server. It doesn’t require any DNS-mapping tool or clever interpretation… It says right here: “en-us”. That means English, with a US locale. I know it can be forged, but someone forging a user-agent string might as well find a way to tunnel through a proxy in a different country…
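
For that matter, browsers also send a dedicated Accept-Language header: a weighted list of the languages the user actually wants, which is an even more direct signal than the user-agent string. Picking the best entry takes a dozen lines and zero IP geolocation. A quick sketch (the header value here is just an example):

    // Pick the highest-weighted language from an Accept-Language header.
    // Entries look like "en-us" or "fr;q=0.5"; a missing q weight means q=1.
    NSString *acceptLanguage = @"en-us,en;q=0.8,fr;q=0.5";
    NSString *preferred = nil;
    float bestQ = -1.0f;
    for (NSString *entry in [acceptLanguage componentsSeparatedByString:@","]) {
        NSArray *parts = [entry componentsSeparatedByString:@";q="];
        NSString *tag = [[parts objectAtIndex:0] stringByTrimmingCharactersInSet:
                         [NSCharacterSet whitespaceCharacterSet]];
        float q = ([parts count] > 1) ? [[parts objectAtIndex:1] floatValue] : 1.0f;
        if (q > bestQ) { bestQ = q; preferred = tag; }
    }
    // preferred now holds @"en-us"

No reverse DNS, no guessing: the browser already told you.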

I generally find that “clever” programs trying to guess what I want to see, or how I want it presented, have a very strange way of finding things out about me. And most of the time they get it wrong, precisely because they want to guess in a “clever” fashion.

Don’t misunderstand me, I don’t think they make a bad guess 100% of the time… I just think that most of the time, they use a roundabout approach that’s way too clever for the intended purpose. Sometimes they guess right. But most of the time, a more basic approach would yield better results. I don’t have any statistics, but I’m more than half convinced that the clever way yields the same thing as the basic way most of the time, while using more resources. And in “extreme” cases, such as my web browser, the basic way gets you better results.

That’s Occam’s Razor, applied to engineering: if there’s a simple and a complex way to achieve something, it’s probably best to go the simple way. Unless you are a Shadok, of course…


I have a 10Mbits line and the websites are still slow

The Internet really is the new frontier. Everything cutting edge seems to have something to do with it. By and large, it has become my primary workplace. There, you can post, blog, tweet, chat, ping and google. Oh and work, of course.

But the Internet is also the place where rules and common sense are violated several times a day. You can spam, crack, hack, DDoS, flood, infest, and troll.

For some reason, hi-tech often implies meaningless. Just like art. Sometimes, it doesn’t matter. After all, something beautiful or funny doesn’t have to make sense. But some decisions just seem quite dumb to me.

One example, because it’s getting worse these days: advertising.

The Internet is center-less. That’s what’s cool about it: you have mirrors, load balancing, redundancy, etc. The information is, more often than not, right at your fingertips. Every news or information outlet in the “real” world has to make money by selling advertising space. I understand that; it’s the price to pay.

On the Internet, ads generate money as well. The problem is for the (paying) advertiser to know that the investment is worth it: how many people saw the ad? How many followed through and actually bought something? That information is what the advertiser pays for. The goal is to show the ad to as many people as possible.

To get this number, you have two choices: you trust the website (that’s the way it works in print… you know the rough numbers, but they are unverifiable), or you don’t. The way things go these days, I guess the advertisers don’t trust the websites. The ad is hosted on their own servers, which provides an accurate count of the views and clicks.

But it also means that N sites (N being potentially big) hit the same ad servers simultaneously, which slows every one of them down at the same time. So you can be loading a light page (say, mostly text) on a fast line (most DSL providers here offer around 2 Mbit/s of throughput) and still wait one damn minute for the page to actually load.

Would it be so hard to have a delayed mechanism that stores all the relevant data on the website’s server (signed, encrypted, whatever) and uploads it to the advertiser every day or week while fetching the new ad contents? That way, the website would be fast 90% of the time instead of 10…


It’s been a long couple of weeks…

Last week (up until yesterday, that is), Paris held a huge show devoted to… students. How to become one, how to stop being one, what to do with it, where to go, etc. And I was on the crash team for computers (although sometimes lighting, sound, walls, and doors also happened to be my problem).

Just imagine cramming a quarter of a million people into a building. Now imagine these people (of all ages and sizes) playing with the computers on display. I imagine you get the picture. Well, you’re wrong. It went very, very smoothly. That’s because we suffered a lot through setup :)

My blisters have blisters. Leopard 10.5.0 (a requirement from the upper echelons) proved to be as stable as an ice cube on a stream. We spent a lot of time fishing for the keys to the anti-theft devices. But apart from patrolling and the occasional update for crashed computers, it was a stroll in the park.

I say stroll, but since public transportation was on strike, I should say forced march. I lost count after my 50th kilometer. But that was the only painful part.

Keep that in mind. Plan first, and plan well. You might get bored afterwards, but at least you’re safe.

[UPDATE] Pictchoors!



Am I really that crazy?

Sometimes I feel like I’m the only one around ready to pay for good tools…

I have spent the last couple of weeks working on a Mac project under CodeWarrior. For those of you who don’t know what CW is: a few years ago, we had very little choice for developing Mac applications. There were Borland C++, MPW (a kind of programming shell), and CodeWarrior, and probably a few others, but I never saw a project running on anything else.

CodeWarrior allowed Mac programmers to use Pascal, C, or C++, and to target the Mac or Windows platforms. For most developers out there, it was THE developer tool.

And since they wanted to stay ahead of the market, there was a complete API for developing new compilers, linkers, SCM plugins, etc. A dream come true.

Then Mac OS X came along with free dev tools. They were not so great, but they were free. CodeWarrior was still the reference tool for Carbon development, since gcc can’t produce PEF (OS 9 linkage). Applications became multilingual and package-based, which CW had a lot of trouble dealing with, and the debugger wouldn’t work at first.

But Metrowerks tried to stay in the race. They updated their tools, and since they were the only way to develop apps for OS 9 and Carbon plugins, they still had a customer base. With CW9, we had almost all the functionality Project Builder had, but oh so much cleaner and easier to use. We had code completion, class browsers, the full monty. And we still had the SDK to develop our own tools.

When Xcode came, OS 9 was dead. No one would start projects meant to run on both platforms. And most of the SDKs out there evolved toward a “modern” tool chain. CW was a tool of the past, new developers had never heard of it, and most of us were forced into using Xcode anyway, since the Mac OS X APIs were meant to be used with it.

With Xcode 2.4, CodeWarrior’s compatibility with the system headers was broken. It was a sign that we weren’t supposed to use it anymore.

But Xcode doesn’t have the elegance CW had. Call me nostalgic, but I really think CW is vastly superior to Xcode. But it costs money. And it isn’t available anymore, since no one would pay for it (it is still widely used for embedded development, though). And Xcode is closed. Not closed as opposed to open source… closed as opposed to “has an SDK”. Which means we have to make do with the available tools, with no way to extend its features, work around its bugs, or customize the way it presents its data to us.

A couple of weeks ago, I had the opportunity to work with CW again. And let me tell you, it’s been a pleasure ever since. Of course it’s outdated, and you kind of have to push your way through sometimes, but it’s solid, and extensible. You can use CVS, SVN, Perforce, and more with it. You can build PEF or Mach-O binaries, and even Windows programs, right on your Mac. And the editor is just fast and reliable.

What’s the point of all this? It’s not that Xcode is lousy. Xcode is now fairly complete for Cocoa development. It is still lacking in other departments, but hey, it’s getting there. The point is that the general trend of the business is that good and expensive will always be less appealing than restricted and free.

People who are ready to spend a hundred bucks on a game they’ll finish in a week don’t want to pay for tools that would allow them to make more money. And great tools disappear.