Recently I upgraded a little piece of code that does real-time synthesis of chimes to use the Web Audio API. It was originally written with the simple Mozilla Audio Data API, which has since been deprecated in favor of the far more capable, but also more complex, Web Audio API.
This is a quick post on real-time PCM output with the Web Audio API.
PCM sample output with Audio Data API (deprecated)
Back in 2010, Mozilla came out with the first native PCM output in the browser that didn’t require Flash.
The method was beautifully simple and straightforward; all you needed, given a source of PCM audio samples, was a generation function:
function myPCMGenerationFunction(soundData) {
  // Overwrite the soundData array with your PCM samples.
}
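For example, a generation function that fills the buffer with a 440 Hz sine tone might have looked like this. This is a sketch: the `sampleRate` variable and the running `phase` counter are my own scaffolding, not part of the original API.

```javascript
// Sketch of a PCM generation function for the deprecated Audio Data API.
// Assumes a sampleRate matching the one passed to AudioDataDestination.
var sampleRate = 44100;
var phase = 0;

function mySinePCMGenerationFunction(soundData) {
  var frequency = 440; // A4
  for (var i = 0; i < soundData.length; i++) {
    soundData[i] = Math.sin(phase) * 0.5; // Half amplitude, to be gentle.
    phase += 2 * Math.PI * frequency / sampleRate;
  }
}
```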
Then you needed to set up the destination.
var pcm_audio_destination = new AudioDataDestination(sampleRate, myPCMGenerationFunction);
Finally, you needed to start the sound.
// Start consuming PCM samples and sending them to the speaker.
pcm_audio_destination.start();

// Eventually, stop playing PCM samples.
pcm_audio_destination.stop();
Easy peasy.
PCM sample output with Web Audio API
Later, Google contributed an API called the Web Audio API. It is much more capable and much more complex; it has the kitchen sink thrown in, but all we need here is raw PCM output.
For a long time I could not find documentation on how to accomplish this simple thing with the new API. Recently, however, I came across this terrific post that demonstrates the method. It turns out it is not too bad.
Here is an example of a noise generator:
function myPCMSource() {
  return Math.random() * 2 - 1; // For example, generate noise samples.
}
The rest is boilerplate.
var audioContext;
try {
  window.AudioContext = window.AudioContext || window.webkitAudioContext;
  audioContext = new AudioContext();
} catch (e) {
  alert('Web Audio API is not supported in this browser');
}

var bufferSize = 4096;
var myPCMProcessingNode = audioContext.createScriptProcessor(bufferSize, 1, 1);
myPCMProcessingNode.onaudioprocess = function(e) {
  var output = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < bufferSize; i++) {
    // Generate and copy over PCM samples.
    output[i] = myPCMSource();
  }
};

// Note: a ScriptProcessorNode has no start() method; it begins
// processing as soon as it is connected to the destination.
myPCMProcessingNode.connect(audioContext.destination);
Here is a jsfiddle to show this in action. Warning, turn down your audio!
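To play a tone instead of noise, myPCMSource can be swapped out for a stateful generator. Here is a sketch; the makeSineSource helper is my own invention, and the hard-coded 44100 Hz rate is an assumption — a real implementation should read audioContext.sampleRate.

```javascript
// Hypothetical helper: returns a function producing one sine sample per
// call, a drop-in replacement for myPCMSource() in the code above.
function makeSineSource(frequency, sampleRate) {
  var phase = 0;
  return function() {
    var sample = Math.sin(phase) * 0.5;
    phase += 2 * Math.PI * frequency / sampleRate;
    if (phase > 2 * Math.PI) phase -= 2 * Math.PI; // Keep phase bounded.
    return sample;
  };
}

var myPCMSource = makeSineSource(440, 44100); // Assumes a 44.1 kHz context.
```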
Real time processing of microphone input
If you need to do real time processing of microphone input, followed by playback, then you need to hook up the microphone audio source.
function myPCMFilterFunction(inputSample) {
  var noiseSample = Math.random() * 2 - 1;
  return inputSample + noiseSample * 0.1; // For example, add noise samples to input.
}
The rest is boilerplate to set up the microphone and the processing graph.
var audioContext;
try {
  window.AudioContext = window.AudioContext || window.webkitAudioContext;
  audioContext = new AudioContext();
} catch (e) {
  alert('Web Audio API is not supported in this browser');
}

// Check if there is microphone input.
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia ||
                         navigator.msGetUserMedia;
if (!navigator.getUserMedia) {
  alert("getUserMedia() is not supported in your browser");
}

// Create a PCM processing "node" for the filter graph.
var bufferSize = 4096;
var myPCMProcessingNode = audioContext.createScriptProcessor(bufferSize, 1, 1);
myPCMProcessingNode.onaudioprocess = function(e) {
  var input = e.inputBuffer.getChannelData(0);
  var output = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < bufferSize; i++) {
    // Modify the input and send it to the output.
    output[i] = myPCMFilterFunction(input[i]);
  }
};

var errorCallback = function(e) {
  alert("Error in getUserMedia: " + e);
};

// Get access to the microphone and start pumping data through the graph.
// (Like the ScriptProcessorNode, the media stream source has no start()
// method; data flows as soon as the nodes are connected.)
navigator.getUserMedia({audio: true}, function(stream) {
  // microphone -> myPCMProcessingNode -> destination.
  var microphone = audioContext.createMediaStreamSource(stream);
  microphone.connect(myPCMProcessingNode);
  myPCMProcessingNode.connect(audioContext.destination);
}, errorCallback);
You can try this out in a jsfiddle here.
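The filter function can of course do something more interesting than adding noise. As an illustration, here is a one-pole low-pass smoother; the makeLowPassFilter name and the smoothing constant are my own choices, not anything from the post above.

```javascript
// Hypothetical alternative filter: a one-pole low-pass smoother.
// alpha near 0 smooths heavily; alpha near 1 passes the input through.
function makeLowPassFilter(alpha) {
  var previous = 0;
  return function(inputSample) {
    previous = previous + alpha * (inputSample - previous);
    return previous;
  };
}

// Drop-in replacement for the noise-mixing filter above.
var myPCMFilterFunction = makeLowPassFilter(0.1);
```

Because the filter closes over its state, each call sees the previous output, which is all a one-pole filter needs.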
Screeching audio feedback and ripples on a lake
Audio feedback is a big issue when using the above real time microphone processing code on a laptop. Audio feedback is also a big issue in PA systems and hearing aids. Echo cancellation can be used to take care of this.
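Modern browsers can also apply echo cancellation at the source: the constraints object passed to getUserMedia accepts an echoCancellation flag. A sketch, using the newer promise-based navigator.mediaDevices.getUserMedia; whether the browser honors the constraint varies by platform.

```javascript
// Request a microphone stream with browser-side echo cancellation.
// The constraint may be ignored on platforms that do not support it.
var constraints = {
  audio: {
    echoCancellation: true
  }
};

// Usage (in a browser):
// navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
//   var microphone = audioContext.createMediaStreamSource(stream);
//   // ...connect the rest of the graph as above.
// });
```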
However, mixing noise with the microphone audio reminded me of a phenomenon that I noticed long ago.
Have you ever observed the build up of small waves on a pond in a steady breeze? Have you noticed that if it starts raining the waves will disappear?
In this analogy, the audio feedback loop plays the role of the wind exciting waves on the lake, and the injected noise plays the role of the rain. I wondered if the same thing would happen to the screeching audio feedback.
Here is another jsfiddle to try this out.
It’s a subtle effect, but if you turn down the noise, any sound will trigger feedback that quickly builds up. If you turn up the noise, the system becomes less sensitive to the buildup of feedback, which can even die out entirely!
One caveat: JavaScript does not run in a real-time processing thread and thus can be pre-empted by many other threads running on the system, so the occasional audio glitch with this approach is to be expected.