Tutorial #36
getUserMedia #2 - Microphone Access Advanced   2014-01-02


This is the second in a series of tutorials on Navigator.getUserMedia - the function that allows you to access the Microphone and Camera on your device from your Web Browser.

Tutorial 35: getUserMedia #1 - Camera Access showed how to access your webcam and display the video in a canvas. This one explains how you access the microphone and feed the audio into a Web Audio network.

The simplest demo would be to pass the input audio straight through to your speakers - but unless you are using headphones, this will produce a feedback loop! Instead I am going to adapt the code in Tutorial 32: Visualizing Audio #3 Time Domain Summary to visualize the input audio in a canvas.

getUserMedia is not yet implemented in all browsers but this will change over time and I will update the following status message accordingly...

As of December 2013, getUserMedia is only supported in Google Chrome and Mozilla Firefox browsers.
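Because support is limited, the usual pattern at the time of writing is to look for the vendor-prefixed versions of the function before using it. The helper below is a common shim, not part of this demo's original code; the function name is my own:

```javascript
// Return whichever implementation of getUserMedia the browser provides,
// checking the unprefixed name first, then the Chrome and Firefox prefixes.
function shimGetUserMedia(nav) {
    return nav.getUserMedia ||
           nav.webkitGetUserMedia ||
           nav.mozGetUserMedia ||
           null;
}

// In the browser we would patch navigator itself, then test for support:
if (typeof navigator !== 'undefined') {
    navigator.getUserMedia = shimGetUserMedia(navigator);
    if (!navigator.getUserMedia) {
        alert('getUserMedia is not supported in this browser');
    }
}
```

With this in place the rest of the demo can call navigator.getUserMedia regardless of which browser it is running in.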

Here is example output for the demo with me speaking into the microphone:

Demo 1 screenshot for this tutorial

Understanding the Code

The Visualization code here is based on that described in Tutorial 32 so I strongly recommend that you look at that before going further.

Likewise, I recommend that you look at Tutorial 35 to see how getUserMedia is used to access the video feed from the camera.

Here is the Web Audio node network used in Tutorial 32.

Image 1 for this tutorial

Audio is loaded from a file into the Source Node and passed through to the Destination Node where it is played through the speakers. It is also passed to the Analyser and Javascript Nodes to produce the Time Domain data used in the visualization.

Here is the Web Audio node network used in this Demo

Image 2 for this tutorial

Here we pass the audio stream from the microphone into the Source Node. To avoid ear splitting feedback, we do not link that directly to the Destination, but we keep the network path that handles the data processing.
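The wiring described above can be sketched as follows. The node names follow Tutorial 32's conventions; the buffer size and the helper function itself are assumptions for illustration, not the demo's exact code:

```javascript
// Build the microphone network: Source -> Analyser -> ScriptProcessor -> Destination.
// Crucially, sourceNode is NOT connected to the destination, so the
// microphone input is analysed but never played back (no feedback loop).
function buildMicNetwork(audioContext, stream) {
    var sourceNode     = audioContext.createMediaStreamSource(stream);
    var analyserNode   = audioContext.createAnalyser();
    var javascriptNode = audioContext.createScriptProcessor(1024, 1, 1);

    sourceNode.connect(analyserNode);
    analyserNode.connect(javascriptNode);
    javascriptNode.connect(audioContext.destination);

    return { source: sourceNode, analyser: analyserNode, processor: javascriptNode };
}
```

The ScriptProcessor is still connected to the destination - it produces no audible output, but some implementations only fire its processing events when it is part of a path that reaches the destination.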

This is reflected in the code first with a change to the Start Button callback function. Instead of calling setupAudioNodes directly, we pass it as the callback to the navigator.getUserMedia call. We also specify that getUserMedia should handle audio, but not video, input. Because not all browsers support getUserMedia, we wrap all this in a try / catch construct with a user alert if this fails.

        $("#start_button").click(function(e) {
            try {
                navigator.getUserMedia(
                    { video: false,
                      audio: true },
                    setupAudioNodes,
                    function(err) {
                        alert('getUserMedia error: ' + err);
                    });
            } catch (e) {
                alert('webkitGetUserMedia threw exception :' + e);
            }
        });

navigator.getUserMedia calls setupAudioNodes with the audio stream. This is passed to audioContext.createMediaStreamSource to create a sourceNode that can access your device's audio input.

    function setupAudioNodes(stream) {
        sourceNode = audioContext.createMediaStreamSource(stream);
        audioStream = stream;

All of the analysis and visualization code is identical to Tutorial 32. The Stop Button callback ends the audio input with audioStream.stop() and halts the processing / graphics loop.
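That teardown step can be sketched as a small helper. The function name and parameters here are my own for illustration; in the demo, the stream and the animation frame id are globals set when the network is started:

```javascript
// Stop the demo: end microphone capture and cancel the drawing loop.
// stream.stop() is the legacy MediaStream API used at the time of writing.
function stopDemo(stream, cancelFrame, frameId) {
    if (stream && typeof stream.stop === 'function') {
        stream.stop();          // releases the microphone
    }
    cancelFrame(frameId);       // halts the processing / graphics loop
}

// Wired to the button it would look something like:
// $("#stop_button").click(function () {
//     stopDemo(audioStream, cancelAnimationFrame, animationId);
// });
```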

Here is the code for the complete demo:

In summary, although there is quite a bit of code, relatively few of the lines are involved in accessing the audio input. I hope this example is clear enough to help you write your own code to access and manipulate audio input. I think this capability is very exciting, and as it is implemented across all the browsers we should expect to see some remarkable new applications.

Code for this Tutorial



Related Tutorials

32 : Visualizing Audio #3 Time Domain Summary   (Advanced)

35 : getUserMedia #1 - Camera Access   (Advanced)
