Good ideas but hard to fathom

These days, I play a lot with CoreAudio. For those of you who don’t know what CoreAudio is, here’s a quick summary:

Core Audio is a set of services that developers use to implement audio and music features in Mac OS X applications. Its services handle all aspects of audio, from recording, editing, and playback, compression and decompression, to MIDI (Musical Instrument Digital Interface) processing, signal processing, and audio synthesis. You can use it to write standalone applications or modular plug-ins that work with existing products.

Basically, it works as a collection of AudioUnits, each with an input bus and an output bus, that do some processing in between. The goal is to chain them together to process audio.

To do so, you have to use AUNodes and an AUGraph. First quirk: AUNodes and AudioUnits are not interchangeable; an AUNode contains an AudioUnit. Which means that if you set up your nice AudioUnits first and then want to knit them together, you've gone about it the wrong way. You have to create the graph and its nodes, which will create the units, which you'll then be able to tailor.

To do so, you have to describe the kind of node you want to use, with the old ComponentDescription structure borrowed from QuickTime. You specify a type (output, mixer, effect,…), a subtype (headphones, stereo mixer, reverb,…), and the manufacturer (provided you know it), and ask the system to generate the node. Once you have all your nodes, you connect them together.

ComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_HALOutput;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = desc.componentFlagsMask = 0;

AUGraphNewNode (myGraph, &desc, 0, NULL, &inputNode);
// etc...

AUGraphConnectNodeInput (myGraph,
                         inputNode, 0,
                         effectNode, 0); // input's output[0] -> effect's input[0]

Unless you’ve done some wacky stuff here, there’s little chance of an error. At this point you have a graph, but it’s just an empty shell: it will do nothing. So the absence of errors doesn’t mean anything, because the AudioUnits don’t exist yet.

To activate the graph and create the units, you have to make two calls:

AUGraphOpen (myGraph);       // instantiates the AudioUnits inside the nodes
AUGraphInitialize (myGraph); // initializes them and validates the connections
That’s where you potentially hit your first issues. Since the AudioUnits are only created here, there might be compatibility issues, audio format problems, etc., with close to no explanation except a generic “format error”. But where? You’ll have to disconnect your units one by one to find out.

Once the graph works, you will want to change the parameters of the units. So first, you extract the AudioUnit from the AUNode, and then you play with the parameters.

AUGraphGetNodeInfo (myGraph, mixerNode, 0, 0, 0, &mixerUnit);
AudioUnitSetParameter (mixerUnit,                // for example, setting the
                       kStereoMixerParam_Volume, // stereo mixer's volume
                       kAudioUnitScope_Input,
                       0,                        // on input bus 0
                       0.5, 0);

Now you will get a lot of errors. AudioUnits come pre-configured, so changing something might be illegal. There is close to no documentation on which parameters you can set, on which bus, and with which values. Trial and error it is.

If you’re through with configuring the units, all you have to do is start the graph to begin audio processing.

AUGraphStart (myGraph);
// and its counterpart: AUGraphStop (myGraph);

So far, most of the coders out there must be thinking, “Well, that wasn’t so bad.” Well, try it: you’ll see that figuring out the stream format to use between nodes is far from trivial. And of course there is the question of where the sound comes from, and where it goes.

While taking input from the mic and sending output to the default output isn’t so bad, reading from a file is far less easy (it requires hooking into QuickTime to grab the sound slices), and writing to a file is kind of weird, because even if the format is wrong, the file will get written without any error. You’ll get there eventually, but it’s hard.

That’s all for today, I’ll go back to my formats.
