17 April 2011
While it’s pretty easy to get started generating sound with the Web Audio API, using the Audio Data API takes a bit more work. Unlike the Web Audio API, which is callback-driven, the Audio Data API doesn’t help you manage its buffers, so you are responsible for keeping them full.
Adapting the example from the Mozilla Wiki, we will write a function to be run every 100 milliseconds to keep the buffer full.
The API uses the Audio object for output. We create one, and configure it for output at a sample rate of 44,100.
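Sketched in code, that setup might look like this; the Audio Data API’s mozSetup(channels, sampleRate) call is real, but the single output channel is an assumption:

```javascript
// Sketch: create an Audio element and configure it for raw output.
// mozSetup() is the Firefox-only Audio Data API entry point.
var sampleRate = 44100;

function setupAudio() {
  var audio = new Audio();
  audio.mozSetup(1, sampleRate); // assumed mono; 44,100 Hz
  return audio;
}
```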
We need to keep track of how many samples we’ve written so far in order to know how many more we need to write.
We will try to keep ½ second of data in the output buffer, so we’ll never need to generate more than 44100 / 2 samples of data at a time. We allocate buffer to hold these samples. Samples that have been generated but not yet written will be kept in currentBuffer.
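That state can be sketched as a few module-level variables (the names here are guesses, chosen to match the prose):

```javascript
var sampleRate = 44100;
var bufferSize = sampleRate / 2;            // half a second of samples
var buffer = new Float32Array(bufferSize);  // scratch space for generated samples
var currentBuffer = null;                   // generated but not yet written
var samplesWritten = 0;                     // total samples written so far
```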
Each time the write() function is called, it loops until it has filled the output buffer with up to bufferSize samples of data. At any given time, the number of samples we want to have written to the output buffer is equal to playPosition + bufferSize. On the first iteration through the loop, there may already be samples in currentBuffer. If there are, we can just write them to the output buffer. Otherwise, and on the second iteration of the loop, we need to generate some samples.
Once we have some samples, we write them to the output buffer, and keep track of the total number of samples written so far. The audio data remaining after the output buffer is filled will be kept in currentBuffer to be used on the next invocation. If we weren’t able to write as many samples as we wanted, we’re finished for this invocation.
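Under those assumptions, the whole write() function might be sketched like this (the state variables are restated for completeness, and generateAudio is a placeholder that produces silence for now):

```javascript
// A sketch of the write() function described above, assuming the
// Firefox-only Audio Data API (mozCurrentSampleOffset, mozWriteAudio).
// Variable names follow the prose; the originals may differ.
var bufferSize = 44100 / 2;  // keep half a second of audio buffered
var currentBuffer = null;    // samples generated but not yet written
var samplesWritten = 0;      // total samples written so far

function generateAudio(samples) {
  // Placeholder generator: silence, for now.
  return new Float32Array(samples);
}

function write(audio) {
  var playPosition = audio.mozCurrentSampleOffset();
  while (samplesWritten < playPosition + bufferSize) {
    // Use leftover samples if we have them; otherwise generate more.
    if (currentBuffer === null) {
      currentBuffer = generateAudio(bufferSize);
    }
    var written = audio.mozWriteAudio(currentBuffer);
    samplesWritten += written;
    if (written < currentBuffer.length) {
      // The output buffer is full; keep the rest for the next call.
      currentBuffer = currentBuffer.subarray(written);
      return;
    }
    currentBuffer = null;
  }
}
```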
Finally, we can actually generate some audio data.
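As one possible example, here is a generator that fills a buffer with a 440 Hz sine tone; the tone, and the generateAudio name, are illustrative choices rather than the article’s actual code:

```javascript
var sampleRate = 44100;
var frequency = 440;  // illustrative: a 440 Hz tone
var phase = 0;        // running sample counter, so the wave stays continuous

function generateAudio(samples) {
  var out = new Float32Array(samples);
  for (var i = 0; i < samples; i++) {
    out[i] = Math.sin(2 * Math.PI * frequency * (phase / sampleRate));
    phase++;
  }
  return out;
}
```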
To control playback, we simply set or clear a timer that calls the write() function every 100 milliseconds.
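A minimal sketch of that control, assuming a write(audio) function as described and the standard setInterval/clearInterval timer API (the play/stop helpers are hypothetical names):

```javascript
var timer = null;

function play(audio) {
  if (timer === null) {
    // Top up the output buffer every 100 milliseconds while playing.
    timer = setInterval(function () { write(audio); }, 100);
  }
}

function stop() {
  if (timer !== null) {
    clearInterval(timer);
    timer = null;
  }
}
```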