Real-Time Audio Spectrograms in the Browser Using Web Audio API and Canvas
Need to visualize audio in the browser — without external libraries, servers, or big frameworks? This post shows how to build a real-time spectrogram from microphone input using the Web Audio API and a raw HTML5 canvas. No dependencies, just powerful signal analysis right in your UI.
Why Spectrograms?
- Great for audio tools, education, or music visualizers
- Lightweight, performant, and fully in-browser
- Foundation for voice activity detection or pitch tracking
Step 1: Request Microphone Access
Use navigator.mediaDevices.getUserMedia to tap into the user's mic:
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const source = audioCtx.createMediaStreamSource(stream);
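Note that getUserMedia only works in a secure context (https or localhost), the user can deny the prompt, and browsers may keep a new AudioContext suspended until a user gesture. A minimal sketch of that wiring, assuming a hypothetical start button with id "start":
// Minimal sketch, assuming a <button id="start"> exists in the page (hypothetical id).
document.getElementById('start').addEventListener('click', async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    await audioCtx.resume(); // autoplay policies may start the context suspended
    const source = audioCtx.createMediaStreamSource(stream);
    // ...continue with the analyser setup from Step 2...
  } catch (err) {
    console.error('Microphone access failed:', err); // denied permission, no device, etc.
  }
});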
Step 2: Connect an Analyser Node
This node provides frequency-domain data you can draw from:
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
source.connect(analyser);
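With fftSize = 2048 the analyser exposes 1024 bins, each covering sampleRate / fftSize Hz (roughly 23 Hz at a 48 kHz sample rate). A small sketch, not from the original post, that logs that resolution and sets the analyser's built-in smoothing:
// Quick sanity check of the analysis resolution.
const binHz = audioCtx.sampleRate / analyser.fftSize;      // width of one bin in Hz
console.log(`${bufferLength} bins, ~${binHz.toFixed(1)} Hz per bin`);
// Optional: how strongly successive frames are averaged (0 = jumpy, 1 = very smooth)
analyser.smoothingTimeConstant = 0.8;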
Step 3: Set Up the Canvas
We’ll scroll horizontally, plotting frequency data vertically:
const canvas = document.getElementById('spectrogram');
const ctx = canvas.getContext('2d');
function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(dataArray);
  // Shift the existing image one pixel to the left to make room for the new column
  const imageData = ctx.getImageData(1, 0, canvas.width - 1, canvas.height);
  ctx.putImageData(imageData, 0, 0);
  // Draw the newest column on the right edge, one pixel per frequency bin
  for (let i = 0; i < bufferLength; i++) {
    const value = dataArray[i];
    const percent = value / 255;
    const hue = Math.floor(255 - percent * 255); // loud = red, quiet = blue-violet
    ctx.fillStyle = `hsl(${hue}, 100%, 50%)`;
    ctx.fillRect(canvas.width - 1, canvas.height - 1 - i, 1, 1);
  }
}
draw();
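One assumption worth making explicit: the canvas's width and height attributes (not its CSS size) set the drawing-buffer resolution, so they control how much history and how many bins are visible. A minimal sizing sketch, reusing the canvas and analyser variables from above and run before calling draw():
// Sizing sketch: one pixel column per frame, one pixel row per frequency bin.
canvas.width = 800;                         // about 13 seconds of history at 60 fps
canvas.height = analyser.frequencyBinCount; // 1024 rows for fftSize = 2048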
Step 4: Tuning and Customization
- Use a smaller fftSize for faster updates but lower frequency resolution
- Change the color mapping for different effects (e.g., grayscale, rainbow)
- Log-scale the Y-axis if you want pitch-based scaling (see the sketch after this list)
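As a sketch of the last item, here is one assumed way to log-scale the vertical placement; it reuses bufferLength and canvas from the earlier steps and skips bin 0 to avoid log(0):
// Log-scaled Y mapping: low frequencies get more vertical space, closer to how pitch is heard.
function binToY(i) {
  const t = Math.log(Math.max(i, 1)) / Math.log(bufferLength); // 0..1, log-spaced
  return canvas.height - 1 - Math.floor(t * (canvas.height - 1));
}
// In the draw loop from Step 3, replace the linear placement with:
//   ctx.fillRect(canvas.width - 1, binToY(i), 1, 1);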
Pros and Cons
✅ Pros
- No libraries, just browser-native tech
- Responsive and low-latency
- Easy to embed or extend into audio apps
⚠️ Cons
- Raw canvas drawing requires manual optimization (see the throttling sketch after this list)
- Not ideal for mobile or very low-end devices
- May need permission handling logic for UX
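For the first con above, one assumed (and fairly common) optimization is simply to redraw less often on weak hardware:
// Minimal sketch: cap the redraw rate at roughly 30 fps so low-end and mobile
// devices do less per-frame work.
let lastFrame = 0;
function drawThrottled(now) {
  requestAnimationFrame(drawThrottled);
  if (now - lastFrame < 33) return; // skip frames until ~33 ms have passed
  lastFrame = now;
  // ...same analyser read and column drawing as in Step 3...
}
requestAnimationFrame(drawThrottled);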