This is the third in a series of four tutorials on visualizing audio using the Web Audio API. Take a look at the first one, Visualizing Audio #1 Time Domain, for more background and a detailed walkthrough of the basic code used here.
The first two tutorials visualized Time and Frequency Domain data in 'real-time' - rapidly changing displays that convey the 'energy' of the audio. These are impressive and can enhance any web page that outputs audio. But these displays are frenetic and do not really tell you much about the 'structure' of the audio. Indeed, they can be quite tiring to look at for any length of time.
If you want to show how the audio stream varies over time, then a better solution is to condense each batch of audio data into some summary measure and display that in a 1 pixel wide column. Each batch of audio samples produces another column in the display and so the display builds up, left to right, as the audio is played.
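The summary measure can be as simple as the minimum and maximum sample values in the batch, which together give the vertical extent of the 1 pixel wide column. Here is a minimal sketch of that idea; the helper name summarizeBatch is my own, and it assumes byte samples in the 0-255 range as returned by an AnalyserNode's getByteTimeDomainData():

```javascript
// Hypothetical helper: reduce one batch of byte samples (0-255) to the
// min/max pair that a single 1-pixel-wide column will span.
function summarizeBatch(samples) {
  let min = 255;
  let max = 0;
  for (let i = 0; i < samples.length; i++) {
    if (samples[i] < min) min = samples[i];
    if (samples[i] > max) max = samples[i];
  }
  return { min: min, max: max };
}
```

Drawing a vertical line from min to max for each batch is what builds up the waveform outline, column by column, as the audio plays.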
This tutorial shows how to display Time Domain data in this manner to create what we might call a Summary Waveform, like this:
This type of display is used quite often to give an overview of an audio track. SoundCloud, for example, uses them with a colored overlay to highlight progress through the track.
As of December 2013, the Web Audio features used here have only been implemented in Mozilla Firefox and Google Chrome browsers.
Here is a quick recap of the network of nodes that I showed in Tutorial 30, which forms the basis of the code for this tutorial.
Each component in the Web Audio API is called a Node, and we connect these together to create a Network that implements a complex function. You can think of the Web Audio AudioContext as a container for the Nodes that make up our network.
The SourceNode is what holds the audio clip that we will play and analyse. The code loads this from an encoded file that it fetches via an XMLHttpRequest (Ajax call). The DestinationNode is a Web Audio built-in node and is what connects the AudioContext to the audio output subsystem on your computer or device.
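As a reminder, the wiring looks something like the sketch below. The function name buildGraph and the fftSize value are my own choices for illustration; the node-creation and connect() calls are the standard Web Audio API:

```javascript
// Sketch of the node network: source -> analyser -> destination.
// The AudioContext is the container; each create* call returns a Node
// that we chain together with connect().
function buildGraph(audioCtx, decodedBuffer) {
  const source = audioCtx.createBufferSource(); // holds the audio clip
  source.buffer = decodedBuffer;                // decoded from the fetched file
  source.loop = true;                           // loop continually until stopped

  const analyser = audioCtx.createAnalyser();   // taps the stream for sample data
  analyser.fftSize = 1024;                      // batch size per analysis frame

  source.connect(analyser);
  analyser.connect(audioCtx.destination);       // built-in output node
  return { source: source, analyser: analyser };
}
```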
Understanding the Code
The starting point for this series of tutorials was the excellent tutorial Exploring the HTML5 Web Audio: visualizing sound by Jos Dirksen; he updated his tutorial in November 2013 to reflect some of the changes in browser implementations.
Most of the code for this demo is the same as in Tutorial 30, and the audio nodes are identical, but the function drawTimeDomain() is substantially different.
The global variable column keeps track of the X-coordinate at which the next column should be drawn. The audio stream is set to loop continually until stopped, and when column reaches the right-hand edge of the canvas, drawTimeDomain() clears the canvas and resets column to 1.
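A minimal sketch of that draw loop is shown below. The variable and function names follow the description above, but the details (the 0-255 byte scaling and the fillRect call) are my own illustration, not the tutorial's exact code:

```javascript
let column = 1; // X-coordinate of the next 1-pixel-wide column

function drawTimeDomain(analyser, canvasCtx, width, height) {
  // Fetch the current batch of time-domain byte samples (0-255)
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteTimeDomainData(data);

  // Condense the batch into its min/max extremes
  let min = 255;
  let max = 0;
  for (let i = 0; i < data.length; i++) {
    if (data[i] < min) min = data[i];
    if (data[i] > max) max = data[i];
  }

  // Scale byte values into canvas Y coordinates (Y grows downward)
  const yTop = height - (max / 255) * height;
  const yBottom = height - (min / 255) * height;

  // Draw one 1-pixel-wide column spanning min..max
  canvasCtx.fillRect(column, yTop, 1, yBottom - yTop);

  column += 1;
  if (column >= width) {
    // Reached the right edge: clear and start again at the left
    canvasCtx.clearRect(0, 0, width, height);
    column = 1;
  }
}
```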
The canvas will look like this:
For the tutorial I have kept the graphics very simple, but you can imagine all sorts of ways to make it more attractive using color maps, scaling the range of Y values, etc.
Here are some other useful guides, references, etc. on Web Audio:
- Web Audio API specification
- Web Audio API - book from O'Reilly by Boris Smus
- Exploring the HTML5 Web Audio: visualizing sound - a tutorial by Jos Dirksen that I used as the starting point for my series of four tutorials.
Code for this Tutorial
- 1: tutorial_32_example_1.html (Live Demo 1)
30 : Visualizing Audio #1 Time Domain (Advanced)
31 : Visualizing Audio #2 Frequency Domain (Advanced)
33 : Visualizing Audio #4 Frequency Spectrogram (Advanced)
36 : getUserMedia #2 - Microphone Access (Advanced)