
Apr 24, 2025 - 02:49
How to Capture Microphone Input and Visualize Audio in the Browser

Want to access a user's microphone and show a live visualization of their voice in real time? In this guide, you'll build a simple waveform visualizer using the Web Audio API and the Canvas API — all running entirely in the browser.

Step 1: Request Microphone Access


We begin by asking the user for permission to use their microphone and creating an audio context from the incoming stream.

const audioContext = new (window.AudioContext || window.webkitAudioContext)();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const source = audioContext.createMediaStreamSource(stream);
  // Some browsers keep a new AudioContext suspended until a user gesture,
  // so resume it before visualizing.
  if (audioContext.state === "suspended") {
    audioContext.resume();
  }
  visualize(source);
}).catch((err) => {
  console.error("Microphone access denied:", err);
});
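If you prefer async/await, the same flow can be sketched like this (the function name startMicrophone is our own, and the code assumes it runs in a browser page where navigator.mediaDevices is available):

```javascript
// Sketch: same request-and-connect flow as above, written with async/await.
async function startMicrophone(audioContext) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    // Turn the live stream into a node we can wire into the audio graph.
    return audioContext.createMediaStreamSource(stream);
  } catch (err) {
    console.error("Microphone access denied:", err);
    return null;
  }
}
```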

Step 2: Set Up the Canvas


Create a <canvas> element in your HTML and get its 2D drawing context. We'll use this to draw the waveform.

const canvas = document.getElementById("oscilloscope");
const canvasCtx = canvas.getContext("2d");
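For completeness, the matching markup might look like this (the width and height here are just example values; only the id is assumed by the script above):

```html
<canvas id="oscilloscope" width="640" height="200"></canvas>
```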

Step 3: Create an Analyser Node


An AnalyserNode exposes real-time frequency and time-domain data from the audio passing through it. Here we read the time-domain samples and draw them as a waveform.

function visualize(source) {
  const analyser = audioContext.createAnalyser();
  analyser.fftSize = 2048;

  // For time-domain data we read one byte per sample across the FFT window.
  const bufferLength = analyser.fftSize;
  const dataArray = new Uint8Array(bufferLength);

  source.connect(analyser);

  function draw() {
    requestAnimationFrame(draw);

    // Fill dataArray with the current waveform (bytes 0–255, 128 = silence).
    analyser.getByteTimeDomainData(dataArray);

    // Clear the previous frame.
    canvasCtx.fillStyle = "black";
    canvasCtx.fillRect(0, 0, canvas.width, canvas.height);

    canvasCtx.lineWidth = 2;
    canvasCtx.strokeStyle = "lime";
    canvasCtx.beginPath();

    const sliceWidth = canvas.width / bufferLength;
    let x = 0;

    for (let i = 0; i < bufferLength; i++) {
      const v = dataArray[i] / 128.0;    // normalize so silence = 1.0
      const y = (v * canvas.height) / 2; // silence maps to mid-height

      if (i === 0) {
        canvasCtx.moveTo(x, y);
      } else {
        canvasCtx.lineTo(x, y);
      }

      x += sliceWidth;
    }

    canvasCtx.lineTo(canvas.width, canvas.height / 2);
    canvasCtx.stroke();
  }

  draw();
}
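To see why silence draws a flat line through the vertical center, the per-sample mapping inside draw() can be pulled out as a pure function (the name sampleToY is ours, for illustration only):

```javascript
// Pure version of the per-sample mapping used in the draw loop.
function sampleToY(sample, canvasHeight) {
  const v = sample / 128.0;      // byte samples run 0–255; 128 is silence
  return (v * canvasHeight) / 2; // silence lands on the vertical center
}
```

A silent input (every sample at 128) therefore draws a flat line at exactly half the canvas height.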

✅ Pros


  • Works in modern browsers with no plugins or installs.
  • Great for building audio tools, games, or creative projects.
  • Visual feedback makes voice apps feel more responsive.

⚠️ Cons


  • Microphone access requires HTTPS (except localhost).
  • Performance may vary across devices.
  • No built-in controls or filters—manual implementation required.
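Because getUserMedia is only exposed in secure contexts (HTTPS pages or localhost), a small pre-flight check can give users a clearer error than a rejected promise. This is an illustrative sketch; the helper name canRequestMic is our own:

```javascript
// Returns true only when the page can realistically ask for the microphone:
// a secure context with mediaDevices.getUserMedia available.
function canRequestMic(win) {
  return Boolean(
    win.isSecureContext &&
    win.navigator &&
    win.navigator.mediaDevices &&
    win.navigator.mediaDevices.getUserMedia
  );
}
```

In a real page you would call it as canRequestMic(window) before wiring up the visualizer.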

Wrap-Up


Using just a few browser APIs, you can build responsive, real-time audio visualizations. This is perfect for voice apps, online instruments, or any creative browser-based audio tool.

If this helped, consider supporting more dev-friendly content here: Buy Me a Coffee