Tutorial #31
Visualizing Audio #2 Frequency Domain Advanced   2013-12-20


This is the second in a series of four tutorials on visualizing audio using the Web Audio API. Take a look at the first one, Visualizing Audio #1 Time Domain, for more background and a detailed walkthrough of the basic code used here.

That tutorial showed how to access Time Domain data from an audio stream and draw a real-time, rapidly changing waveform of that data.

The partner of the Time Domain is the Frequency Domain. This contains the spectrum of audio frequencies that make up the batch of audio samples that have been analysed. To get this you have to perform a Fourier Transform on the audio data. This is a complex mathematical transformation that, fortunately, we don't need to know anything about - the inner workings of the Web Audio API handle it all for us.

In this tutorial we are going to create a real-time, rapidly changing display of the frequencies contained in each batch of samples taken from the audio stream. Frequency is plotted on the X-axis, increasing from left to right, and the Y-coordinate is the amplitude of each frequency.

Here is a screenshot of the display that we are going to produce - take a look at the demo to see what it looks like in real-time.

Demo 1 screenshot for this tutorial

As of December 2013, the Web Audio features used here have only been implemented in Mozilla Firefox and Google Chrome browsers.


Here is a quick recap of the network of nodes that I showed in Tutorial 30 which forms the basis of the code for this tutorial.

Each component in the Web Audio API is called a Node and we connect these together to create a Network that implements a complex function. You can think of the Web Audio AudioContext as a container for the Nodes that make up our network.

Image 1 for this tutorial

The SourceNode is what holds the audio clip that we will play and analyse. The code loads this from an encoded file that it fetches via an XMLHttpRequest (Ajax call). The DestinationNode is a Web Audio built-in node and is what connects the AudioContext to the audio output subsystem on your computer or device.
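The loading step can be sketched as a small helper - a hypothetical, minimal version (the function and variable names here are illustrative, not necessarily those used in the GIST):

```javascript
// Hypothetical sketch: fetch an encoded audio file with XMLHttpRequest and
// decode it into an AudioBuffer for the SourceNode.
function loadSound(audioContext, url, onDecoded) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';   // we want raw bytes, not text
    request.onload = function () {
        // decodeAudioData turns the encoded bytes (e.g. OGG/MP3) into PCM samples
        audioContext.decodeAudioData(request.response, function (buffer) {
            onDecoded(buffer);              // hand the AudioBuffer to the caller
        });
    };
    request.send();
}
```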

The AnalyserNode is what collects a number of audio samples from the decoded audio stream and, in the case of this demo, performs a Fast Fourier Transform (FFT) on it to generate the Frequency Domain data. It passes that data to the JavaScriptNode, which is the interface to our custom JavaScript code.
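Wiring the network together might look like this sketch (names assumed; late-2013 browsers create the script node with createScriptProcessor, which replaced the older createJavaScriptNode):

```javascript
// Sketch of the node network described above: source -> analyser -> script
// node, plus source -> destination so the clip is audible. Names assumed.
function setupNetwork(audioContext, buffer) {
    var sourceNode     = audioContext.createBufferSource();
    var analyserNode   = audioContext.createAnalyser();
    var javascriptNode = audioContext.createScriptProcessor(2048, 1, 1);

    sourceNode.buffer = buffer;             // the decoded AudioBuffer

    sourceNode.connect(analyserNode);       // feed samples to the analyser
    analyserNode.connect(javascriptNode);   // analysis results reach our code
    javascriptNode.connect(audioContext.destination);
    sourceNode.connect(audioContext.destination); // so we can hear the clip

    return { source: sourceNode, analyser: analyserNode, script: javascriptNode };
}
```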

Understanding the Code

The audio sample used in the demo is a clip of the Doctor Who theme by Delia Derbyshire and the BBC Radiophonic Workshop and was downloaded from Wikipedia.

The starting point for this series of tutorials was the excellent tutorial Exploring the HTML5 Web Audio: visualizing sound by Jos Dirksen - he updated his tutorial in November 2013 to reflect some of the changes in browser implementations.

Most of the code for this demo is the same as Tutorial 30 but there are important differences in the setup of the AnalyserNode and, of course, the graphics.

Lines in the GIST that are different are marked with // **.

When we create the AnalyserNode in setupAudioNodes() we specify two attributes that were not necessary for TimeDomain data.

        analyserNode   = audioContext.createAnalyser();
        analyserNode.smoothingTimeConstant = 0.0;
        analyserNode.fftSize = fftSize;

fftSize specifies the resolution of the frequency spectrum and MUST be a power of two - 1024 is a good value for web audio data. smoothingTimeConstant is set to 0.0 here but can be raised to smooth out the resulting spectrum data. As before, we define a Uint8Array typed array to hold the analysis results.
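The array that receives the results is sized from the analyser's frequencyBinCount, which is always half of fftSize - a sketch with illustrative variable names:

```javascript
// A power-of-two FFT size yields fftSize/2 frequency bins, which is the
// value that analyserNode.frequencyBinCount reports.
var fftSize = 1024;                            // must be a power of two
var binCount = fftSize / 2;                    // same value as frequencyBinCount
var frequencyArray = new Uint8Array(binCount); // one amplitude byte per bin
```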

When the JavascriptNode has a new batch of analysis data it calls the onaudioprocess callback function. In this demo, the call analyserNode.getByteFrequencyData(frequencyArray) fetches the Frequency data, rather than the time domain data.
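A minimal sketch of that step, with the drawing call factored out (all names here are assumed, not necessarily those in the GIST):

```javascript
// Hypothetical helper: attach the per-batch callback to the script node.
// getByteFrequencyData fills the array with one amplitude (0-255) per bin.
function setupCallback(javascriptNode, analyserNode, frequencyArray, drawSpectrum) {
    javascriptNode.onaudioprocess = function () {
        analyserNode.getByteFrequencyData(frequencyArray); // current spectrum
        drawSpectrum(frequencyArray);                      // redraw the canvas
    };
}
```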

The graphic display is quite different. Here we want to draw a histogram of Frequency on the X-axis and the Amplitude of each frequency as the Y-coordinate.

To make the display visually interesting the code draws a background rectangle in the canvas with a linear color gradient that goes from Red through Yellow to White, such that low frequencies are Red, etc.

For each value in frequencyArray, a 1 pixel wide rectangle is drawn from the top of the canvas down to the height of the canvas minus that value. This blocks out part of the background, and what remains visible at that X-coordinate is the amplitude for that frequency.
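Putting the gradient and the masking together, the drawing step might look like this sketch (a hypothetical drawSpectrum; the real GIST differs in detail):

```javascript
// Sketch: paint the red-to-white gradient across the full canvas, then mask
// it from the top so the visible columns show the per-bin amplitudes.
function drawSpectrum(ctx, frequencyArray, width, height) {
    var gradient = ctx.createLinearGradient(0, 0, width, 0);
    gradient.addColorStop(0.0, 'red');      // low frequencies on the left
    gradient.addColorStop(0.5, 'yellow');
    gradient.addColorStop(1.0, 'white');    // high frequencies on the right
    ctx.fillStyle = gradient;
    ctx.fillRect(0, 0, width, height);

    // mask: a 1px column from the top down to (height - amplitude)
    ctx.fillStyle = 'black';
    for (var i = 0; i < frequencyArray.length; i++) {
        ctx.fillRect(i, 0, 1, height - frequencyArray[i]);
    }
}
```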

The display is best seen live in the demo but here is a static screenshot to show what you should see.

Image 2 for this tutorial

The visualization is a very dynamic display, with peaks of frequency rising and falling as the audio changes. It is an impressive way to accompany music or speech that you are playing but, like the previous tutorial, it is fleeting and can be distracting. The next two tutorials use the same techniques but show summary data from the analyses in more slowly changing visualizations, which can give a better idea of how the audio changes over time.

More information

Here are some other useful guides, references, etc. on Web Audio:

Code for this Tutorial


Related Tutorials

6 : Play an Audio file using JavaScript   (Intermediate)

30 : Visualizing Audio #1 Time Domain   (Advanced)

32 : Visualizing Audio #3 Time Domain Summary   (Advanced)

33 : Visualizing Audio #4 Frequency Spectrogram   (Advanced)

Comment on this Tutorial
