[iPhone] detecting a hit in a transparent area

Problem: let’s say you have a view that’s partially transparent and you want to know whether a touch landed on the non-transparent part or not.

Under Mac OS X, you can use several methods to do so, but on the iPhone, you’re on your own.

Believe it or not, the solution came from the past: the QuickDraw-to-Carbon migration guide actually contained a way to detect transparent pixels in a bitmap image. After some tweaking, the code works.

Here is the setup:
– a view (the “floorView”) containing a score of NZTouchableImageView subviews, each able to detect whether a touch falls on a transparent zone or not
– on top of it all, a transparent NZSensitiveView that intercepts the touches and figures out which subview of the floorView was hit (not necessary for every purpose, but needed in my case)
– a delegate conforming to the NZSensitiveDelegate protocol, which reacts to taps and swipes (a minimal wiring sketch follows the list).
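
To make the setup concrete, here is a minimal wiring sketch. Everything outside the NZ classes (buildSensitiveHierarchy, the image names, the fact that it lives in a view controller conforming to NZSensitiveDelegate) is mine, and it assumes the sensitive view and the floor view share the same frame, as they do in my case:

// Hypothetical wiring, inside a view controller that conforms to NZSensitiveDelegate.
- (void)buildSensitiveHierarchy {
  UIView *floorView = [[UIView alloc] initWithFrame:self.view.bounds];
 
  // One partially transparent image per zone; transparent pixels won't register hits.
  NSArray *names = [NSArray arrayWithObjects:@"room1.png", @"room2.png", nil];
  for (NSString *name in names) {
    NZTouchableImageView *room =
      [[NZTouchableImageView alloc] initWithImage:[UIImage imageNamed:name]];
    room.frame = floorView.bounds; // stacked zones covering the whole floor
    [floorView addSubview:room];
    [room release];
  }
 
  // The sensitive view sits on top with the same frame and routes touches to the floor view.
  NZSensitiveView *sensitiveView =
    [[NZSensitiveView alloc] initWithFrame:self.view.bounds];
  sensitiveView._floorView = floorView;
  sensitiveView._sdelegate = self; // self implements NZSensitiveDelegate
 
  [self.view addSubview:floorView];
  [self.view addSubview:sensitiveView];
  [floorView release];
  [sensitiveView release];
}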

The code follows. If you have any use for it, feel free to take it. The only thing I ask in return is a thanks, and if you find any bugs or any way to improve on it, please forward them my way.

Merry Christmas!

[UPDATE] It took me some time to figure out what was wrong and even more to decide to update this post, but thanks to Peng’s questions, I modified the code to work in a more modern way, even with gesture recognizers and scaling active. Enjoy again!

[UPDATE] The last trouble was linked to the contentsGravity of the images: when they are scaled to fit or fill, the transformation matrix is not updated, and there’s no reliable way to guess what it might be. So the approach changed: instead of drawing the CGImage yourself, you can trust the CALayer’s own rendering. Enjoy again again!

NZSensitiveDelegate:

@protocol NZSensitiveDelegate
 
- (void) userSlidedLeft:(CGFloat) s;
- (void) userSlidedRight:(CGFloat) s;
- (void) userSlidedTop:(CGFloat) s;
- (void) userSlidedBottom:(CGFloat) s;
 
- (void) userTappedView:(UIView*) v;
 
@end
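
For illustration only, a conforming delegate can be as simple as a controller that logs what it receives (the class name and the NSLog bodies are mine, not part of the original code):

// Hypothetical delegate: a controller that just logs what NZSensitiveView reports.
@interface MyFloorController : UIViewController <NZSensitiveDelegate>
@end
 
@implementation MyFloorController
 
- (void) userSlidedLeft:(CGFloat) s   { NSLog(@"slid left by %f", s); }
- (void) userSlidedRight:(CGFloat) s  { NSLog(@"slid right by %f", s); }
- (void) userSlidedTop:(CGFloat) s    { NSLog(@"slid up by %f", s); }
- (void) userSlidedBottom:(CGFloat) s { NSLog(@"slid down by %f", s); }
 
- (void) userTappedView:(UIView*) v   { NSLog(@"tapped %@", v); }
 
@end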

NZSensitiveView:

@interface NZSensitiveView : UIView {
  id<NZSensitiveDelegate> _sdelegate;
  UIView *_floorView;
}
 
// The delegate is assigned, not retained, to avoid a retain cycle with its owner.
@property(assign,nonatomic) IBOutlet id<NZSensitiveDelegate> _sdelegate;
// The view holding all the partially transparent NZTouchableImageView subviews.
@property(retain,nonatomic) UIView *_floorView;
 
@end
// Thresholds used in touchesMoved: a swipe must travel more than kSwipeMaximum points
// along its axis while drifting less than kSwipeMinimum points on the other axis.
#define kSwipeMinimum 12
#define kSwipeMaximum 4
 
// Touch-tracking state, shared because only one touch is handled at a time.
static UIView *currentlyTouchedView;
static CGPoint lastPosition;
static BOOL moving;
 
@implementation NZSensitiveView
@synthesize _sdelegate;
@synthesize _floorView;
 
- (id)initWithFrame:(CGRect)frame {
  if ((self = [super initWithFrame:frame])) {
  // Initialization code
  }
  return self;
}
 
- (void)drawRect:(CGRect)rect {
  // Drawing code
}
 
- (void)dealloc {
  [_floorView release]; // the property retains it
  [super dealloc];
}
 
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
  UITouch *cTouch = [touches anyObject];
  CGPoint position = [cTouch locationInView:self];
  // Note: this assumes the floor view covers the same area as the sensitive view,
  // so the point can be passed along without conversion.
  UIView *roomView = [self._floorView hitTest:position withEvent:nil];
 
  if([roomView isKindOfClass:[NZTouchableImageView class]]) {
    currentlyTouchedView = roomView;
  }
 
  moving = YES;
  lastPosition = position;
}
 
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
  UITouch *cTouch = [touches anyObject];
  CGPoint position = [cTouch locationInView:self];
 
  if(moving) { // should always be YES here, since touchesBegan has already run
    if( (position.x - lastPosition.x > kSwipeMaximum) && fabs(position.y - lastPosition.y) < kSwipeMinimum ) {
      // swipe towards the left (moving right)
      [self._sdelegate userSlidedLeft:position.x - lastPosition.x];
      [self touchesEnded:touches withEvent:event];
    } else if( (lastPosition.x - position.x > kSwipeMaximum) && fabs(position.y - lastPosition.y) < kSwipeMinimum ) {
      // swipe towards the right
      [self._sdelegate userSlidedRight:lastPosition.x - position.x];
      [self touchesEnded:touches withEvent:event];
    } else if( (position.y - lastPosition.y > kSwipeMaximum) && fabs(position.x - lastPosition.x) < kSwipeMinimum ) {
      // swipe towards the top
      [self._sdelegate userSlidedTop:position.y - lastPosition.y];
      [self touchesEnded:touches withEvent:event];
    } else if( (lastPosition.y - position.y > kSwipeMaximum) && fabs(position.x - lastPosition.x) < kSwipeMinimum ) {
      // swipe towards the bottom
      [self._sdelegate userSlidedBottom:lastPosition.y - position.y];
      [self touchesEnded:touches withEvent:event];
    }
  }
}
 
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
  UITouch *cTouch = [touches anyObject];
  CGPoint position = [cTouch locationInView:self];
  UIView *roomView = [self._floorView hitTest:position withEvent:nil];
  if(roomView == currentlyTouchedView) {
    [self._sdelegate userTappedView:currentlyTouchedView];
  }
 
  currentlyTouchedView = nil;
  moving = NO;
}
 
@end

NZTouchableImageView:

@interface NZTouchableImageView : UIImageView {
}
@end
@implementation NZTouchableImageView
 
- (BOOL) doHitTestForPoint:(CGPoint)point {
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo info = kCGImageAlphaPremultipliedLast;
 
    // A single RGBA pixel, zeroed out; it stays zero unless something opaque is drawn onto it.
    UInt32 bitmapData[1];
    bitmapData[0] = 0;
 
    CGContextRef context =
    CGBitmapContextCreate(bitmapData,
                          1,
                          1,
                          8,
                          4,
                          colorspace,
                          info);
 
    // Old approach, kept for reference: draw the CGImage directly into the 1x1 context.
    // It breaks as soon as contentsGravity scales the image.
    // CGRect rect = CGRectMake(-point.x,
    //                          point.y - CGImageGetHeight(self.image.CGImage),
    //                          CGImageGetWidth(self.image.CGImage),
    //                          CGImageGetHeight(self.image.CGImage));
    // CGContextDrawImage(context, rect, self.image.CGImage);
 
    // Current approach: shift the context so that `point` falls on the single pixel,
    // then let the layer render itself, scaling included.
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.layer renderInContext:context];
 
    CGContextFlush(context);
 
    BOOL res = (bitmapData[0] != 0);
 
    CGContextRelease(context);
    CGColorSpaceRelease(colorspace);
 
    return res;
}
 
#pragma mark -
 
// UIImageView turns user interaction off by default; force it on so hit-testing reaches this view.
- (BOOL) isUserInteractionEnabled {
  return YES;
}
 
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
  return [self doHitTestForPoint:point];
}
 
@end
  

There and back again

I’ve always thought that to prove your worth, you have to be out of your depth. One way to get there is to leave your familiar surroundings, preferably for a situation where nothing is close to hand.

In Philadelphia, the problem was threefold: getting up to speed on a project I was part of 10 years ago, finding out how and where my help would be relevant, and prototyping what could be prototyped.

All of that had to be done without shutting the place down or disrupting their workflow, and while rebooting the servers as seldom as possible.

I can’t really say that I didn’t disrupt their work, but in the end, when you have little to work with, you end up building things better. That’s something most of the people I meet in this business don’t get at first: I know we have 2 GHz machines and whatnot, but you should always target something slower than your dev machine. The users will always find a way to overload their systems with resource-hungry things anyway…

That’s really where I get my kicks, that’s what I like to do: debugging and optimization. It’s also where the money isn’t these days. And boy am I glad that there are still a few customers who actually give a damn about it!

Anyway, since the thing went well (rather well, let’s say), I had the opportunity to spend part of the weekend in the City That Never Sleeps, the Big Apple, New York City. Well, I guess this town had a bad influence on me: I didn’t sleep :D

And now I’m back, and out of my depth on another project. While this is not NYC, there are still quite a number of interesting things to do!

  

Good ideas but hard to fathom

These days, I play a lot with CoreAudio. For those of you who don’t know what CoreAudio is, here’s a quick summary:

Core Audio is a set of services that developers use to implement audio and music features in Mac OS X applications. Its services handle all aspects of audio, from recording, editing, and playback, compression and decompression, to MIDI (Musical Instrument Digital Interface) processing, signal processing, and audio synthesis. You can use it to write standalone applications or modular plug-ins that work with existing products.

Basically, it works as a collection of AudioUnits, each with input and output busses and some processing in between. The goal is to chain them together to process audio.

To do so, you have to use AUNodes and an AUGraph. First quirk: AUNodes and AudioUnits are not interchangeable; an AUNode contains an AudioUnit. This means that if you set up your nice AudioUnits first and then want to knit them together, you’ve gone about it the wrong way. You have to create the graph and its nodes, which will create the units, which you’ll then be able to tailor.

To create a node, you describe the kind you want with the old ComponentDescription structure inherited from QuickTime. You specify a type (output, mixer, effect,…), a subtype (headphones, stereo mixer, reverb,…), and the manufacturer (provided you know it), and ask the system to generate the node. Once you have all your nodes, you connect them together.

AUGraph myGraph;
AUNode inputNode, effectNode; // effectNode would be created the same way

NewAUGraph(&myGraph);

ComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_HALOutput;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = desc.componentFlagsMask = 0;

AUGraphNewNode(myGraph, &desc, 0, NULL, &inputNode);
// etc...

AUGraphConnectNodeInput(myGraph,
                        inputNode, 0,   // source node, output bus
                        effectNode, 0); // destination node, input bus

Unless you’ve done some wacky stuff here, there’s little chance of an error. At this point you have a graph, but it’s just an empty shell that will do nothing. So the absence of errors doesn’t mean much: the AudioUnits don’t exist yet.

To activate the graph and create the units, you have to make two calls:

AUGraphOpen(myGraph);
AUGraphInitialize(myGraph);

That’s where your first issues can show up. Since the AudioUnits are only created at this point, there might be compatibility issues, audio format problems, etc… and close to no explanation beyond a general “format error”. But where? You’ll have to disconnect your units one by one to find out.
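
Every one of those calls returns an OSStatus, so it pays to check them one by one. A rough sketch (the CheckError helper is mine, not part of any API) that prints the error as a four-char code:

// Hypothetical helper: print a failing OSStatus as both a number and a four-char code.
static void CheckError(OSStatus err, const char *operation) {
  if (err == noErr) return;
  char code[5] = {0};
  UInt32 bigEndian = CFSwapInt32HostToBig((UInt32)err);
  memcpy(code, &bigEndian, 4);
  fprintf(stderr, "%s failed: %d ('%4.4s')\n", operation, (int)err, code);
}
 
CheckError(AUGraphOpen(myGraph), "AUGraphOpen");
CheckError(AUGraphInitialize(myGraph), "AUGraphInitialize");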

Once the graph works, you will want to change the parameters of the units. So first, you extract the AudioUnit from the AUNode, and then you play with the parameters.

AudioUnit mixerUnit;
AUGraphGetNodeInfo(myGraph, mixerNode, 0, 0, 0, &mixerUnit);

AudioUnitSetParameter(mixerUnit,
                      kLimiterParam_PreGain,
                      kAudioUnitScope_Global,
                      0,      // element
                      dGain,  // new value
                      0);     // buffer offset

Now you will get a lot of errors. AudioUnits come pre-configured, so changing something might be illegal. There is close to no documentation on which parameters you can set, on which bus, and with which values. Trial and debugging it is.
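
One way I found to narrow things down is to ask a unit for the list of parameters it exposes and their legal ranges. A sketch along these lines, reusing the mixerUnit from above:

// Ask a unit which parameters it exposes in the global scope, then print their ranges.
UInt32 size = 0;
AudioUnitGetPropertyInfo(mixerUnit, kAudioUnitProperty_ParameterList,
                         kAudioUnitScope_Global, 0, &size, NULL);

UInt32 count = size / sizeof(AudioUnitParameterID);
AudioUnitParameterID *ids = malloc(size);
AudioUnitGetProperty(mixerUnit, kAudioUnitProperty_ParameterList,
                     kAudioUnitScope_Global, 0, ids, &size);

for (UInt32 i = 0; i < count; i++) {
  AudioUnitParameterInfo info;
  UInt32 infoSize = sizeof(info);
  AudioUnitGetProperty(mixerUnit, kAudioUnitProperty_ParameterInfo,
                       kAudioUnitScope_Global, ids[i], &info, &infoSize);
  printf("param %u: min %f, max %f, default %f\n",
         (unsigned)ids[i], info.minValue, info.maxValue, info.defaultValue);
}
free(ids);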

If you’re through with configuring the units, all you have to do is start the graph to begin audio processing.

AUGraphStart(myGraph);
// and its counterpart AUGraphStop(myGraph);

So far, most coders out there must be thinking “Well, that wasn’t so bad”. Well, try it: you’ll see that figuring out the stream format to use between nodes is far from trivial. And of course there is the question of where the sound comes from, and where it goes.
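
For what it’s worth, this is the kind of stream description you end up fiddling with. The values below (44.1 kHz, stereo, 32-bit float, non-interleaved) are only one plausible choice, and effectUnit stands for whichever unit you extracted from its node:

// One possible stream format: 44.1 kHz, stereo, 32-bit float, non-interleaved.
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 44100.0;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
fmt.mChannelsPerFrame = 2;
fmt.mBitsPerChannel   = 32;
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerFrame    = sizeof(Float32); // per channel, since it's non-interleaved
fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;

// Tell the effect unit what to expect on its input bus 0.
AudioUnitSetProperty(effectUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));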

While taking input from the mic and sending output to the default output isn’t so bad, reading from a file is far less easy (it requires hooking into QuickTime to grab the sound slices), and writing to a file is kind of weird, because even if the format is wrong the file gets written without any error. You’ll get there eventually, but it’s hard.

That’s all for today, I’ll go back to my formats.