I work in C++, using the Apple CoreAudio API for audio input and output, developing in Xcode. This page will hold short tutorials demonstrating how to make CoreAudio work, as well as pointers to complete applications for use with the compositions found on the Scores page.
EZOSXOsc
The EZOSXOsc Xcode project shows how to use CoreAudio and my Personal FX library to send a sine wave to the audio output of the Mac.
The project requires five source code files (and their associated headers) from the PfxLib:
- InputMgr.cpp (handles audio input, unused in this project but still required)
- Oscillator.cpp (table-lookup oscillator, generates a sine wave by default)
- Pfx.cpp (handles interaction with CoreAudio, including interrupt routines for samples coming in and going out)
- Score.cpp (base class for governing unit activation and routing)
- Unit.cpp (base class for audio generation and processing units)
Source code can be found on GitHub here: https://github.com/rowe0002/EZOSXOSc
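Score.cpp and Unit.cpp listed above supply the two base classes that the rest of this tutorial relies on. The sketch below is my reconstruction of the small part of their interfaces that the EZOSXOsc code actually uses; the real declarations are in the headers on GitHub, and the array capacity, inline bodies, and exact virtual signatures here are guesses.

class Unit {
public:
    virtual ~Unit() {}
    virtual void Update() = 0;                                        // advance the unit by one audio refresh cycle
    virtual void GetOutputSamples(double** out, int numChannels) = 0; // write output into the channel buffers
};
// Oscillator (Oscillator.h/.cpp above) derives from Unit and overrides both methods.

class Score {
public:
    virtual ~Score() {}
    void AddUG(Unit* ug) { ugs[ugIndex++] = ug; }       // register a unit generator in the update queue
    virtual void RouteAudio(double** mixChannels) = 0;  // mandatory override in every derived Score
protected:
    Unit* ugs[16];                                      // queue of registered units (capacity is a guess)
    int   ugIndex = 0;
};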
All applications made with the Personal FX library (pfx) require a Score object to govern their behavior and organize the audio flow. Score is an abstract base class; each application derives from it to supply its particular specification. In the case of EZOSXOsc, the derived Score class implementation looks like this:
EZOSXOsc::EZOSXOsc(void)
{
    osc = new Oscillator;   // allocate the unit generators this Score needs
    AddUG(osc);             // add the oscillator to the update queue
}

EZOSXOsc::~EZOSXOsc(void)
{
    delete osc;
}

void EZOSXOsc::RouteAudio(double** mixChannels)
{
    for (int i = 0; i < ugIndex; i++)
        ugs[i]->Update();                    // refresh every registered unit
    osc->GetOutputSamples(mixChannels, 2);   // write the oscillator's samples to both mix channels
}
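The matching class declaration is not reproduced on this page; a rough sketch of what it might look like, inferred from the implementation above, follows (the actual header is in the GitHub repository, and the member access chosen here is my guess):

class EZOSXOsc : public Score {
public:
    EZOSXOsc(void);
    ~EZOSXOsc(void);
    void RouteAudio(double** mixChannels);   // the mandatory Score override
private:
    Oscillator* osc;                         // the only unit generator this project needs
};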
In the constructor we allocate whatever unit generators are needed (in this case, only an Oscillator). Each one is then added to the queue of units that will be called on every audio refresh cycle. The mandatory override RouteAudio() updates all the units that have been added to the queue, and fetches samples from them as the application designates. Here, all we need to do is get the samples from the oscillator and write them into the mix output buffer (mixChannels).
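To make that flow concrete, here is a hypothetical stand-in for one render cycle. This is not the Pfx or CoreAudio code (the real interrupt routines live in Pfx.cpp), and RenderOneCycle and framesPerCycle are invented names for illustration; the sketch only shows when RouteAudio() is called and what mixChannels contains.

#include <vector>

// Hypothetical driver, not the actual Pfx/CoreAudio interrupt routine:
// once per audio refresh cycle, two channel buffers are handed to the Score,
// which fills them with the oscillator's samples.
void RenderOneCycle(Score& score, int framesPerCycle)
{
    std::vector<double> left(framesPerCycle, 0.0);
    std::vector<double> right(framesPerCycle, 0.0);
    double* mixChannels[2] = { left.data(), right.data() };

    score.RouteAudio(mixChannels);   // EZOSXOsc writes its sine wave into both channels

    // In the real application, Pfx.cpp would now hand these samples to the
    // CoreAudio output.
}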