WWDC 2016, or close to that

My first WWDC was 15 years ago. I was one of a handful of youngsters selected for the student scholarship, and back in those days there were plenty of empty seats during the sessions. It was in San Jose, and my friend Alex was kind enough to let me crash on his couch for my very first overseas “professional” business trip. Not that I made any money on that trip, but this was the beginning of my career and I was there in that capacity. A month later I would be hired by Apple in Europe, and Alex would be hired by the Californian HQ a few years later, but back then what mattered was being a nerd in a nerd place, not only allowed to nerd out, but actively encouraged to do so.

I was 20, give or take, and every day I would have lunch with incredible people who not only shared my love of the platform and the excitement at what would become so huge (Mac OS X, Cocoa, and Objective-C), but also shared their experiences, and bits and pieces of their code, freely. For the first time in my short professional life, I was treated as a peer. I met the people behind the SETI@Home client, who were looking for a way to port it from Linux to 10.0 (if you’ve never seen 10.0 running, well… lucky you), I exchanged tricks with the guy who did the QT4Java integration, and I met my heroes from Barebones, to name a few.

Of course, the fact that I was totally skipping university didn’t make me forget that, like every science, programming flourishes best when ideas flow easily. No one thought twice about opening a laptop and delving into code to geek out about a specific bug or cool trick. I even saw, and maybe had a few lines of code in, a Lockheed Martin hush-hush project… Just imagine that!

Over the years I went regularly, then less so, and in recent years not at all. It’s not an “it was so much better before” thing so much as a slow misalignment between the conference and what I wanted to get out of it. Let’s get this particular thing out of the way, so that I can move on to more nerding out.

Randomness played a big part for me. I met people who were into the platform but not necessarily living off it: academics, server people, befuddled people sent there by their company to see whether porting their software to the Mac was worth the effort. It was that easy to get into the conference. These days, I dare you to find an attendee who paid for a ticket and isn’t making a living from developing iOS apps (indie, contractor, or in-house). The variety of personalities, histories, and uses of the platform is still there, but there’s zero chance I’ll run into an astronomer who happens to develop as a hobby… As a side note, the chance that a presenter (or Phil Schiller, who totally did) will give me his card and have a free conversation about a nerdy thing, certain that, as members of a small community, neither of us would abuse the other’s time, is very close to zero as well. Then again, who else was interested in using the IrDA port of the Titanium to talk to obscure gadgets?

So, this may have felt a little bit like a rant, but it’s not. I recognize the world has moved on, and Apple went from “that niche platform a handful of enthusiasts keep alive” to the biggest company on Earth, and there is absolutely no reason why they should treat me differently for that past role when there are so many talented people out there who would probably benefit more from the extra attention, and prove a more valuable investment. Reminiscing brings nostalgia, but it doesn’t mean today is any worse than an imagined golden age, when the future of the platform was uncertain and the rest of the profession reminded us every day that we were making a mistake. Today is definitely better, even if that means I don’t feel the need to go to WWDC anymore.

So, back to this year: the almost-live nature of the video posting meant that I coded by day and watched sessions by night, making it feel almost like those days when sleep was few and far between, on the other side of the world. I just wasn’t physically in San Francisco; instead I had the comfort of my couch, the ability to pause the video to try out a few things the presenter was talking about, and the so very important bathroom break.

All in all, while iOS isn’t anything new anymore, this year in particular I was reminded of the old days. It feels like we’re on a somewhat mature platform that doesn’t revolutionize itself every year anymore (sorry users, but it’s actually better this way), the bozos doing fart apps are not that prominent anymore, and we can get to some seriously cool code.

2016 is all about openness. Gone are the weird restrictions of tvOS (most of the frameworks are now on par with the other platforms, and Multipeer Connectivity has finally landed). watchOS is out of beta. We can plug stuff into first-party apps that have been walled off for 8 years. Even the Mac is getting some love, despite the fact that it lost a capital M. And for the first time in forever, we have a server session! OK, it is a Big Blue man on stage, but we may have a successor to WebObjects, folks! What a day to be both a dinosaur and alive.

Not strictly part of the WWDC announcements, the proposed changes to the App Stores prefigure some interesting possibilities for people like me, without an existing following or the capital to fund a six-month indie project. Yes, yes, I know. There are people who launch new apps every day. I’m just not one of those people. I enjoy the variety of topics my customers confront me with, and I have very little confidence in my ability to manage a “community” of paying customers. Experience, again, and maybe I’ll share those stories someday.

Anyway, Swift on Linux, using frameworks like Kitura or Perfect right now, or the future WebObjects 6.0, might allow people like me, with a deep background in languages that have more than one type, to write a decent backend fairly rapidly and consistently, and who knows, maybe even a front end. Yes, I know Haskell has let you do similar things for a while, but for some reason my customers are somewhat daunted by the deployment procedures, and I don’t do hosting.

The frills around iMessage stickers don’t do much for me, but being able to use iMessage to have a shared session in an app is just incredible. So. Many. Possibilities. “Completely underrated” doesn’t even begin to describe how little attention it got in the post-conference chatter. Every single turn-based game out there, playable in an iMessage thread. I’ll leave that idea right here. See? I can be nice…

MacOS (yes, I will keep using the capital M because it makes more sense to me) may not get a flurry of shinies, but it benefits largely from everything done for iOS, and Xcode may finally make me stop pining after CodeWarrior, or AppCode, or any other IDE that doesn’t (or didn’t) need to be prodded into doing what I expect it to do. Every time I have to stop writing or debugging code to fix something that was working fine yesterday, I take a deep breath. Maybe this year those disruptions will grind to a halt, or at least be confined to the critical phases of the project cycle.

I like my watch. Come September, I may like it without having to feel almost ashamed about it. Actually, while I’m not tempted in the least to install iOS 10 on any of my devices just yet, I might have to do it just to get a beta of the non-beta version of watchOS.

In short, for reasons I can’t quite define, I feel a bit like I did 15 years ago during my first WWDC. It looks like Apple is shifting back to listening to those of us developers who aren’t hyper high profile, that the platform is transitioning to Swift at a good pace rather than bulldozing it over our dead bodies, and that whatever idea anyone has, it’s finally possible to wrap your head around all the components, if not code them all yourself, using a coherent approach.

Hope feels good, confidence feels better.

  
Mood : contemplative
Music : Muse - Time is Running Out

UIKit and AppKit unification

The latest fad in tech punditry is to claim that the barrier to having iOS apps on the Mac is that the two graphical frameworks are so different that porting is too complex for developers. This is false.

Let’s start with the elephant in the room: an app generally has to bring more to the table than its UI, and for those underpinnings the coverage is total. Fewer than 5% of the classes and methods available on iOS are missing on the Mac; if anything, the same frameworks offer more methods on the Mac side. So for everything but the UI, “porting” only means recompiling. If your MVC is well implemented, that part at least won’t cause any issues.

The UI part is a mixed bag, but the paradigms are the same nowadays. That wasn’t the case until fairly recently, but table views are now cell- or view-based on both platforms, the delegates behave as expected, and so on. As an experiment, on uncomplicated examples, I did a search and replace only, no tweaking, and it worked.
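To make that point concrete, here is a minimal sketch (mine, not from any Apple sample) of the same view-based list populated on both platforms. It assumes a hypothetical items array of strings, a prototype cell registered under the identifier “ItemCell” on iOS, and a reusable NSTableCellView with the same identifier on the Mac, wired up as the table’s dataSource and delegate:

#if TARGET_OS_IPHONE
#import <UIKit/UIKit.h>

@interface ItemListController : UITableViewController
@property(nonatomic, strong) NSArray *items; // hypothetical data
@end

@implementation ItemListController

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return self.items.count;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"ItemCell" forIndexPath:indexPath];
    cell.textLabel.text = self.items[indexPath.row];
    return cell;
}

@end

#else
#import <Cocoa/Cocoa.h>

@interface ItemListController : NSViewController <NSTableViewDataSource, NSTableViewDelegate>
@property(nonatomic, strong) NSArray *items; // hypothetical data
@end

@implementation ItemListController

- (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView {
    return self.items.count;
}

- (NSView *)tableView:(NSTableView *)tableView viewForTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row {
    NSTableCellView *cell = [tableView makeViewWithIdentifier:@"ItemCell" owner:self];
    cell.textField.stringValue = self.items[row];
    return cell;
}

@end
#endif

Same shape, same responsibilities; the differences are in the names and in what the cell object is.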

Now, of course, the issue isn’t technical: the iOS mono-window, mono-view model is wasted on the Mac. A lot of applications take the “landscape iPad” paradigm to make it less obvious, Apple’s included: you have the sidebar and the main view, and it works just like the master/detail project template that comes with Xcode.

Porting a successful app from iOS to the Mac is indeed a bit of work. The Mac is window-centric and iOS is view-centric. Some things you cannot do on iOS are possible on the Mac, like covering parts of your UI, or dragging and dropping elements. It is definitely a very different way to think about the user experience, and the design choices are certainly less constrained and less obvious. But there is no real technical hurdle, unless the vast majority of your app logic lives in the view controllers rather than in a separate codebase. And then again, the Mac now has NSViewController, which works exactly like, who would have guessed, UIViewController, and apps can run in full-screen mode, so who knows?

The tools (Xcode, IB, etc) are the same. The non UI frameworks are the same. The UI frameworks are similar where it makes sense (putting stuff on screen) and dissimilar where you have to (input methods and window management). That’s it.

Now, you can definitely agree that the Mac app landscape is very different from the iOS one. People are used to giant things that install other things everywhere, demos, shareware, unrestricted access to the filesystem, the ability to copy and paste anything from anywhere, or drag and drop anything from anywhere and put it somewhere else, where it will do something. They have multiple apps that come and go, modal dialog boxes that pop up, and pieces of UI like palettes that they can arrange any damn way they please, thank you very much. For all these reasons, designing a successful Mac app is challenging. Big screens, small screens, people who like lots of little windows, or a few big ones, people who use Spaces, people who use keyboard shortcuts more than the mouse, people who don’t know how menus work, people who have a gazillion menu items, fonts that can be changed system-wide, color schemes: those are all valid reasons to dread an attempt at making an app that will appeal to most people.

But you don’t get to play the technical hurdle card. All these interactions have been studied, refined, and solved over 30 years of graphical interfaces. You have to choose what will work best for your needs, and yes, this is hard. But it’s not about code.

  

[CoreData] Duplicating an object

As any of you know, duplicating an object in Core Data is just a nightmare: you basically have to start afresh every single time for each object, then iterate over attributes and relationships.

It so happens that I have to do this often in one of my projects. I have to duplicate objects except for a couple of attributes and relationships, and there are 20 of each on average (I didn’t come up with the model, OK?).

So, I came up with this code. Feel free to use it, just say hi in the comments, via mail, or any other way if you do!

@implementation NSManagedObject (Duplication)

// Copies every attribute value from source to dest, skipping the keys listed in ignore.
// Both objects must be non-nil and share the same entity.
+ (BOOL) duplicateAttributeValuesFrom:(NSManagedObject*)source To:(NSManagedObject*)dest ignoringKeys:(NSArray*)ignore {
    if(source == nil || dest == nil) return NO;
    if(![[source entity] isEqual:[dest entity]]) return NO;

    for(NSString *attribKey in [[[source entity] attributesByName] allKeys]) {
        if([ignore containsObject:attribKey]) continue;

        [dest setValue:[source valueForKey:attribKey] forKey:attribKey];
    }

    return YES;
}

// Copies every relationship from source to dest, skipping the keys listed in ignore.
// To-many relationships are copied as a new NSSet (unordered relationships assumed);
// to-one relationships simply end up pointing at the same destination object.
+ (BOOL) duplicateRelationshipsFrom:(NSManagedObject*)source To:(NSManagedObject*)dest ignoringKeys:(NSArray*)ignore {
    if(source == nil || dest == nil) return NO;
    if(![[source entity] isEqual:[dest entity]]) return NO;

    NSDictionary *relationships = [[source entity] relationshipsByName];
    for(NSString *attribKey in [relationships allKeys]) {
        if([ignore containsObject:attribKey]) continue;

        if([((NSRelationshipDescription*)[relationships objectForKey:attribKey]) isToMany]) {
            [dest setValue:[NSSet setWithSet:[source valueForKey:attribKey]] forKey:attribKey];
        } else {
            [dest setValue:[source valueForKey:attribKey] forKey:attribKey];
        }
    }

    return YES;
}

@end
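
For completeness, here is a hypothetical usage sketch (the “Recipe” entity and the ignored keys are made up for the example, they are not from my actual model), living in whichever class manages the duplication: insert a fresh object of the same entity, then let the category copy everything except what must stay unique.

- (NSManagedObject *)duplicateRecipe:(NSManagedObject *)original inContext:(NSManagedObjectContext *)moc {
    // A brand new, empty object of the same entity...
    NSManagedObject *copy = [NSEntityDescription insertNewObjectForEntityForName:@"Recipe"
                                                          inManagedObjectContext:moc];

    // ...then copy everything across, minus the keys that must stay unique to the original.
    [NSManagedObject duplicateAttributeValuesFrom:original To:copy
                                     ignoringKeys:[NSArray arrayWithObject:@"uuid"]];
    [NSManagedObject duplicateRelationshipsFrom:original To:copy
                                   ignoringKeys:[NSArray arrayWithObject:@"history"]];

    return copy;
}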
  

[iPhone] Detecting a hit in a transparent area

Problem: let’s say you want a zone that’s partially transparent, and you want to know whether a hit lands on the non-transparent part or not.

Under Mac OS X, you can use several methods to do so, but on the iPhone, you’re on your own.

Believe it or not, the solution came from the past: the QuickDraw-to-Carbon migration guide actually contained a way to detect transparent pixels in a bitmap image. After some tweaking, the code works.

Here is the setup:
– a view containing a score of NZTouchableImageView subviews (each able to detect whether you hit a transparent zone or not);
– on top of it all (not necessary for every purpose, but needed in my case), a transparent NZSensitiveView that intercepts hits and finds out which subview of the “floorView” (the view with all the partially transparent subviews) was hit;
– a delegate conforming to the NZSensitiveDelegate protocol, which reacts to hits and swipes.

The code follows. If you have any use for it, feel free. The only thing I ask in return is a thanks, and if you find any bugs or any way to improve on it, please forward them my way.

Merry Christmas!

[UPDATE] It took me some time to figure out what was wrong, and even more to decide to update this post, but thanks to Peng’s questions, I modified the code to work in a more modern way, even with a gesture recognizer and scaling active. Enjoy again!

[UPDATE] The last trouble was linked to the contentsGravity of the images: when scaled to fit/fill, the transformation matrix is not updated, and there’s no real way to guess what it might be. Changing the approach, you can instead trust the CALayer’s inner workings. Enjoy again again!

NZSensitiveDelegate:

@protocol NZSensitiveDelegate
 
- (void) userSlidedLeft:(CGFloat) s;
- (void) userSlidedRight:(CGFloat) s;
- (void) userSlidedTop:(CGFloat) s;
- (void) userSlidedBottom:(CGFloat) s;
 
- (void) userTappedView:(UIView*) v;
 
@end

NZSensitiveView:

@interface NZSensitiveView : UIView {
  id _sdelegate;
  UIView *_floorView;
}
 
@property(retain,nonatomic) IBOutlet id  _sdelegate;
@property(retain,nonatomic) UIView *_floorView;
 
@end
// Despite the names, kSwipeMaximum is the minimum travel (in points) along the swipe axis,
// and kSwipeMinimum is the maximum drift tolerated on the perpendicular axis.
#define kSwipeMinimum 12
#define kSwipeMaximum 4
 
static UIView *currentlyTouchedView;
static CGPoint lastPosition;
static BOOL moving;
 
@implementation NZSensitiveView
@synthesize _sdelegate;
@synthesize _floorView;
 
- (id)initWithFrame:(CGRect)frame {
  if (self = [super initWithFrame:frame]) {
  // Initialization code
  }
  return self;
}
 
- (void)drawRect:(CGRect)rect {
  // Drawing code
}
 
- (void)dealloc {
  [super dealloc];
}
 
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
  UITouch *cTouch = [touches anyObject];
  CGPoint position = [cTouch locationInView:self];
  UIView *roomView = [self._floorView hitTest:position
    withEvent:nil];
 
  if([roomView isKindOfClass:[NZTouchableImageView class]]) {
    currentlyTouchedView = roomView;
  }
 
  moving = YES;
  lastPosition = position;
}
 
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
  UITouch *cTouch = [touches anyObject];
  CGPoint position = [cTouch locationInView:self];
 
  if(moving) { // as should be
    if( (position.x - lastPosition.x > kSwipeMaximum) && fabs(position.y - lastPosition.y) < kSwipeMinimum ) {
      // swipe towards the left (moving right)
      [self._sdelegate userSlidedLeft:position.x - lastPosition.x];
      [self touchesEnded:touches withEvent:event];
    } else if( (lastPosition.x - position.x > kSwipeMaximum) && fabs(position.y - lastPosition.y) < kSwipeMinimum ) {
      // swipe towards the right
      [self._sdelegate userSlidedRight:lastPosition.x - position.x];
      [self touchesEnded:touches withEvent:event];
    } else if( (position.y - lastPosition.y > kSwipeMaximum) && fabs(position.x - lastPosition.x) < kSwipeMinimum ) {
      // swipe towards the top
      [self._sdelegate userSlidedTop:position.y - lastPosition.y];
      [self touchesEnded:touches withEvent:event];
    } else if( (lastPosition.y - position.y > kSwipeMaximum) && fabs(position.x - lastPosition.x) < kSwipeMinimum ) {
      // swipe towards the bottom
      [self._sdelegate userSlidedBottom:lastPosition.y - position.y];
      [self touchesEnded:touches withEvent:event];
    }
  }
}
 
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
  UITouch *cTouch = [touches anyObject];
  CGPoint position = [cTouch locationInView:self];
  UIView *roomView = [self._floorView hitTest:position withEvent:nil];
  if(roomView == currentlyTouchedView) {
    [self._sdelegate userTappedView:currentlyTouchedView];
  }
 
  currentlyTouchedView = nil;
  moving = NO;
}
 
@end

NZTouchableImageView:

@interface NZTouchableImageView : UIImageView {
}
@end
@implementation NZTouchableImageView
 
- (BOOL) doHitTestForPoint:(CGPoint)point {
    // 1x1 RGBA bitmap: we will render just the touched pixel into it.
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo info = kCGImageAlphaPremultipliedLast;
 
    UInt32 bitmapData[1];
    bitmapData[0] = 0;
 
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 1,   // width
                                                 1,   // height
                                                 8,   // bits per component
                                                 4,   // bytes per row
                                                 colorspace,
                                                 info);
 
    // Shift the context so the touched point lands on that single pixel, then let the
    // layer render itself. This respects contentsGravity scaling, unlike the earlier
    // version of this code that drew self.image.CGImage directly (see the update above).
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.layer renderInContext:context];
 
    CGContextFlush(context);
 
    // With premultiplied alpha, a fully transparent pixel renders as all zeros.
    BOOL res = (bitmapData[0] != 0);
 
    CGContextRelease(context);
    CGColorSpaceRelease(colorspace);
 
    return res;
}
 
#pragma mark -
 
- (BOOL) isUserInteractionEnabled {
  return YES;
}
 
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
  return [self doHitTestForPoint:point];
}
 
@end
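
To give an idea of how the pieces fit together, here is a hypothetical wiring sketch (the view controller, view names, and image list are mine, not from the original project) that builds the floor view and lays the sensitive view on top of it:

- (void)viewDidLoad {
    [super viewDidLoad];

    // The floor: a plain container holding the partially transparent room images.
    UIView *floorView = [[UIView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:floorView];

    // One NZTouchableImageView per room; only their opaque pixels will register hits.
    for (NSString *name in [NSArray arrayWithObjects:@"kitchen.png", @"livingroom.png", nil]) {
        NZTouchableImageView *room = [[NZTouchableImageView alloc] initWithImage:[UIImage imageNamed:name]];
        room.frame = floorView.bounds;
        [floorView addSubview:room];
        [room release];
    }

    // The transparent sensitive layer on top, routing taps and swipes to its delegate.
    NZSensitiveView *sensitiveView = [[NZSensitiveView alloc] initWithFrame:self.view.bounds];
    sensitiveView._floorView = floorView;
    sensitiveView._sdelegate = self; // self conforms to NZSensitiveDelegate
    [self.view addSubview:sensitiveView];

    [sensitiveView release];
    [floorView release];
}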
  

Good ideas but hard to fathom

These days, I play a lot with CoreAudio. For those of you who don’t know what CoreAudio is, here’s a quick summary:

Core Audio is a set of services that developers use to implement audio and music features in Mac OS X applications. Its services handle all aspects of audio, from recording, editing, and playback, compression and decompression, to MIDI (Musical Instrument Digital Interface) processing, signal processing, and audio synthesis. You can use it to write standalone applications or modular plug-ins that work with existing products.

Basically, it works as a collection of AudioUnits, each with an input bus and an output bus, doing some processing in between. The goal is to chain them to process audio.

To do so, you have to use AUNodes and an AUGraph. First quirk: AUNodes and AudioUnits are not interchangeable; an AUNode contains an AudioUnit. Which means that if you tailor your nice AudioUnits first and then want to knit them together, you’ve gone about it the wrong way. You have to create the graph and its nodes, which will create the units, which you’ll then be able to tailor.

To do so, you describe the kind of node you want with the old ComponentDescription structure found in QuickTime. You specify a type (output, mixer, effect…), a subtype (headphones, stereo mixer, reverb…), and the manufacturer (provided you know it), and ask the system to generate the node. Once you have all your nodes, you connect them together.

AUGraph myGraph;
AUNode inputNode, effectNode;

NewAUGraph(&myGraph);

// Describe the node we want: a standard hardware output unit from Apple.
ComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_HALOutput;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = desc.componentFlagsMask = 0;

AUGraphNewNode(myGraph, &desc, 0, NULL, &inputNode);
// etc...

// Wire output bus 0 of inputNode into input bus 0 of effectNode.
AUGraphConnectNodeInput(myGraph,
                        inputNode, 0,
                        effectNode, 0); // input[0] -> effect[0]

Unless you’ve done some wacky stuff here, there’s little chance of an error. At this point you have a graph, but it’s just an empty shell: it will do nothing. So the absence of errors doesn’t mean much; the AudioUnits don’t exist yet.

To activate the graph and create the units, you have to make two calls:

AUGraphOpen(myGraph);       // instantiates the AudioUnits inside each node
AUGraphInitialize(myGraph); // validates connections and formats, gets everything ready to render

That’s where you potentially hit your first issues. Since the AudioUnits are created here and there, there might be compatibility issues, audio format problems, etc., with close to no explanation beyond a generic “format error”. But where? You’ll have to disconnect your units one by one to find out.
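
One habit that helps (mine, not something prescribed by the docs) is to wrap every call and print the returned OSStatus as a four-character code, since that is how most Core Audio errors are encoded:

#include <AudioToolbox/AudioToolbox.h>
#include <ctype.h>
#include <stdio.h>

static void CheckStatus(OSStatus status, const char *operation) {
    if (status == noErr) return;

    // Most Core Audio errors are FourCCs; print them as text when all four bytes are printable.
    char code[7] = {0};
    *(UInt32 *)(code + 1) = CFSwapInt32HostToBig((UInt32)status);
    if (isprint(code[1]) && isprint(code[2]) && isprint(code[3]) && isprint(code[4])) {
        code[0] = code[5] = '\'';
        fprintf(stderr, "%s failed: %s\n", operation, code);
    } else {
        fprintf(stderr, "%s failed: %d\n", operation, (int)status);
    }
}

// e.g. CheckStatus(AUGraphInitialize(myGraph), "AUGraphInitialize");

At least that tells you which call choked, even when the message itself is just a cryptic four-letter code.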

Once the graph works, you will want to change the parameters of the units. So first, you extract the AudioUnit from the AUNode, and then you play with the parameters.

AudioUnit mixerUnit;
AUGraphGetNodeInfo(myGraph, mixerNode, 0, 0, 0, &mixerUnit);

AudioUnitSetParameter(mixerUnit,
                      kLimiterParam_PreGain,
                      kAudioUnitScope_Global,
                      0,
                      dGain,
                      0);

Now you will get a lot of errors. AudioUnits come pre-configured, so changing something might be illegal. There is close to no documentation on which parameters you can set, on which bus, and with which values. Trial and error it is.

If you’re through with configuring the units, all you have to do is start the graph to begin audio processing.

AUGraphStart(myGraph);
// and its counterpart AUGraphStop(myGraph);

So far, most of the coders out there must be thinking, “Well, that wasn’t so bad.” Well, try it: you’ll see that figuring out the stream format to use between nodes is far from trivial. And of course there is the question of where the sound comes from, and where it goes.
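
For reference, here is the kind of incantation involved, as a sketch assuming a 44.1 kHz stereo non-interleaved float format and an effectUnit pulled out of its node the same way as the mixer above: you fill an AudioStreamBasicDescription by hand and push it onto the relevant bus.

AudioStreamBasicDescription streamFormat = {0};
streamFormat.mSampleRate       = 44100.0;
streamFormat.mFormatID         = kAudioFormatLinearPCM;
streamFormat.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mChannelsPerFrame = 2;
streamFormat.mBitsPerChannel   = 32;
streamFormat.mFramesPerPacket  = 1;
streamFormat.mBytesPerFrame    = 4;   // one 32-bit float per channel per frame, non-interleaved
streamFormat.mBytesPerPacket   = 4;

// Apply to input bus 0 of the effect unit; the same call with kAudioUnitScope_Output
// constrains what a unit produces.
AudioUnitSetProperty(effectUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     0,
                     &streamFormat,
                     sizeof(streamFormat));

Get one field wrong and you are back to the generic format error described above.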

While inputting from the mike and outputting to the standard output isn’t so bad, reading from a file is far less easy (it requires hooking into QuickTime to grab the sound slices), and writing to a file is kind of weird, because even if the format is wrong, the file will get written without any error. You’ll get there eventually, but it’s hard.

That’s all for today, I’ll go back to my formats.