Web Audio API
Wire AudioUI controls to the native Web Audio API. Covers AudioContext lifecycle, user-gesture unlock, AudioParam automation, and a minimal polysynth example.
This guide shows how to bind AudioUI controls directly to the Web Audio API, the native browser audio stack that every higher-level audio library sits on top of. You'll build a small polysynth with a filter, master volume, waveform selector, and playable keys.
If you prefer a higher-level library, see the Tone.js integration. The patterns here transfer.
For a more complete implementation (per-voice filtering, full ADSR, panning, sustain pedal, hardware MIDI input), read the playground app source at apps/playground-react/app/examples/webaudio/ in the AudioUI repository.
What you'll build
A synth with:
- A waveform selector (CycleButton to oscillator type)
- A lowpass filter controlled by two knobs (cutoff + resonance)
- A master volume slider
- A piano keyboard (Keys to note-on / note-off)
AudioContext lifecycle
Browsers suspend AudioContext until a user gesture. Create the context early but resume it on a click:
import { useEffect, useRef, useState } from "react";

export function useAudioContext() {
  const ctxRef = useRef<AudioContext | null>(null);
  const [ready, setReady] = useState(false);

  useEffect(() => {
    const ctx = new (window.AudioContext || (window as any).webkitAudioContext)();
    ctxRef.current = ctx;
    return () => {
      ctx.close();
    };
  }, []);

  const start = async () => {
    if (ctxRef.current?.state === "suspended") {
      // resume() is async; wait for it so "ready" means audio is actually running
      await ctxRef.current.resume();
    }
    setReady(true);
  };

  return { ctx: ctxRef.current, ready, start };
}

The synth engine
Keep DSP concerns outside React. A plain class that owns the node graph is easier to reason about than state in hooks. Parameters are passed in their real-world units (Hz, Q-factor, linear gain 0-1), matching what the AudioUI controls emit directly.
export class SynthEngine {
  private ctx: AudioContext;
  private filter: BiquadFilterNode;
  private master: GainNode;
  private voices = new Map<number, { osc: OscillatorNode; amp: GainNode }>();
  private waveform: OscillatorType = "sawtooth";

  constructor(ctx: AudioContext) {
    this.ctx = ctx;
    this.filter = ctx.createBiquadFilter();
    this.filter.type = "lowpass";
    this.filter.frequency.value = 1000;
    this.filter.Q.value = 1;
    this.master = ctx.createGain();
    this.master.gain.value = 0.3;
    this.filter.connect(this.master);
    this.master.connect(ctx.destination);
  }

  private midiToHz(midi: number) {
    return 440 * Math.pow(2, (midi - 69) / 12);
  }

  noteOn(midi: number) {
    if (this.voices.has(midi)) return;
    const osc = this.ctx.createOscillator();
    const amp = this.ctx.createGain();
    osc.type = this.waveform;
    osc.frequency.value = this.midiToHz(midi);
    amp.gain.value = 0;
    osc.connect(amp);
    amp.connect(this.filter);
    osc.start();
    amp.gain.setTargetAtTime(0.5, this.ctx.currentTime, 0.01);
    this.voices.set(midi, { osc, amp });
  }

  noteOff(midi: number) {
    const voice = this.voices.get(midi);
    if (!voice) return;
    voice.amp.gain.setTargetAtTime(0, this.ctx.currentTime, 0.05);
    voice.osc.stop(this.ctx.currentTime + 0.5);
    this.voices.delete(midi);
  }

  // Accepts Hz directly. The Knob handles log-scale mapping via scale="log".
  setCutoff(hz: number) {
    this.filter.frequency.setTargetAtTime(hz, this.ctx.currentTime, 0.02);
  }

  setResonance(q: number) {
    this.filter.Q.setTargetAtTime(q, this.ctx.currentTime, 0.02);
  }

  setVolume(gain: number) {
    this.master.gain.setTargetAtTime(gain, this.ctx.currentTime, 0.02);
  }

  setWaveform(wave: OscillatorType) {
    this.waveform = wave;
  }
}

Bridging AudioUI to the engine
onChange on AudioUI controls receives an AudioControlEvent. Extract e.value and pass it to the engine. Note how the controls work directly in real units (Hz for cutoff, linear for gain) and use valueFormatter / valueAsLabel to show the current value while the user is interacting:
import { useEffect, useRef, useState } from "react";
import { Knob, Slider, CycleButton, Keys, frequencyFormatter } from "@cutoff/audio-ui-react";
import "@cutoff/audio-ui-react/style.css";
import { SynthEngine } from "./SynthEngine";
import { useAudioContext } from "./useAudioContext";

export default function Synth() {
  const { ctx, ready, start } = useAudioContext();
  const engineRef = useRef<SynthEngine | null>(null);
  const [cutoff, setCutoff] = useState(1000);
  const [resonance, setResonance] = useState(1);
  const [volume, setVolume] = useState(0.3);
  const [waveform, setWaveform] = useState<OscillatorType>("sawtooth");

  useEffect(() => {
    if (!ctx) return;
    engineRef.current = new SynthEngine(ctx);
    engineRef.current.setCutoff(cutoff);
    engineRef.current.setResonance(resonance);
    engineRef.current.setVolume(volume);
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [ctx]);

  return (
    <div className="dark">
      {!ready && <button onClick={start}>Start Audio</button>}
      <CycleButton
        value={waveform}
        onChange={(e) => {
          setWaveform(e.value);
          engineRef.current?.setWaveform(e.value);
        }}
        options={[
          { value: "sawtooth", label: "Saw" },
          { value: "square", label: "Square" },
          { value: "sine", label: "Sine" },
          { value: "triangle", label: "Tri" },
        ]}
        label="Wave"
      />
      <Knob
        label="Cutoff"
        value={cutoff}
        defaultValue={1000}
        min={20}
        max={10000}
        scale="log"
        valueFormatter={(v) => frequencyFormatter(v)}
        valueAsLabel="interactive"
        onChange={(e) => {
          setCutoff(e.value);
          engineRef.current?.setCutoff(e.value);
        }}
      />
      <Knob
        label="Q"
        value={resonance}
        defaultValue={1}
        min={0.1}
        max={20}
        step={0.1}
        valueFormatter={(v) => v.toFixed(1)}
        valueAsLabel="interactive"
        onChange={(e) => {
          setResonance(e.value);
          engineRef.current?.setResonance(e.value);
        }}
      />
      <Slider
        label="Vol"
        value={volume}
        defaultValue={0.3}
        min={0}
        max={1}
        step={0.01}
        orientation="vertical"
        valueFormatter={(v) => `${Math.round(v * 100)}%`}
        valueAsLabel="interactive"
        onChange={(e) => {
          setVolume(e.value);
          engineRef.current?.setVolume(e.value);
        }}
      />
      <Keys
        nbKeys={25}
        startKey="C"
        octaveShift={0}
        onChange={(e) => {
          const { note, active } = e.value as { note: number; active: boolean };
          if (active) engineRef.current?.noteOn(note);
          else engineRef.current?.noteOff(note);
        }}
      />
    </div>
  );
}

Key patterns
onChange receives an event, not a raw value
For continuous and discrete controls (Knob, Slider, Button, CycleButton), e.value is the typed value.
For Keys, e.value is { note: number; active: boolean }. Destructure it and branch on active for note-on / note-off.
MIDI to frequency
Standard formula: 440 * 2^((midi - 69) / 12). MIDI note 69 is A4 = 440 Hz.
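As a quick sanity check, the conversion can be evaluated standalone (the function matches the engine's private midiToHz helper above):

```typescript
// Convert a MIDI note number to frequency in Hz (equal temperament, A4 = 440 Hz).
function midiToHz(midi: number): number {
  return 440 * Math.pow(2, (midi - 69) / 12);
}

console.log(midiToHz(69)); // A4 -> 440 Hz
console.log(midiToHz(60)); // middle C -> ~261.63 Hz
console.log(midiToHz(81)); // A5, one octave up -> 880 Hz
```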
Smooth parameter changes
Write to AudioParam via setTargetAtTime(target, startTime, timeConstant) rather than assigning .value directly. The time constant (in seconds) controls how fast the param reaches the target. Values of 0.01-0.05 avoid zippering on knob drags.
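Per the Web Audio spec, setTargetAtTime approaches the target exponentially: after t seconds the parameter has covered 1 - e^(-t/timeConstant) of the distance. A small sketch of that curve (plain math, no AudioContext needed) shows why time constants in the 0.01-0.05 s range settle within a fraction of a second:

```typescript
// Value of an AudioParam t seconds after setTargetAtTime(target, t0, timeConstant):
// v(t) = target + (start - target) * e^(-t / timeConstant)
function paramValueAt(start: number, target: number, timeConstant: number, t: number): number {
  return target + (start - target) * Math.exp(-t / timeConstant);
}

// Ramping a gain from 0 toward 0.5 with a 0.01 s time constant:
console.log(paramValueAt(0, 0.5, 0.01, 0.01)); // after 1 time constant: ~63% of the way
console.log(paramValueAt(0, 0.5, 0.01, 0.05)); // after 5 time constants: ~99.3% of the way
```

This is why a knob drag sounds smooth: each incoming value restarts a short exponential glide instead of producing an audible step.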
Let AudioUI handle non-linear knob mapping
Filter cutoff, frequency, and anything else humans perceive logarithmically should use a log-scale Knob. Pass scale="log" and real-world min / max values (e.g. min={20} max={10000} for Hz), and the Knob handles the perceptual mapping internally. Your engine receives values in real units; no manual Math.pow conversion is needed.
Pair this with valueFormatter={(v) => frequencyFormatter(v)} to display the value in context (Hz, kHz) and valueAsLabel="interactive" to swap the label to the live value while the user is dragging.
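Conceptually, a log-scale control maps the normalized knob position p in [0, 1] to min * (max/min)^p, so equal drag distances correspond to equal frequency ratios. A hypothetical sketch of that mapping (AudioUI's internal implementation may differ in detail):

```typescript
// Map a normalized position [0, 1] onto a log-scaled range (both bounds must be > 0).
function logScale(p: number, min: number, max: number): number {
  return min * Math.pow(max / min, p);
}

console.log(logScale(0, 20, 10000));   // bottom of the range: 20 Hz
console.log(logScale(0.5, 20, 10000)); // geometric midpoint: ~447 Hz
console.log(logScale(1, 20, 10000));   // top of the range: 10000 Hz
```

Note that the knob's halfway point lands near 447 Hz rather than 5010 Hz, which matches how pitch is perceived.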
defaultValue enables reset
When defaultValue is set, double-clicking the control resets it to that value. This is the standard DAW/plugin reset gesture. Always set it to a sensible starting value.
Keep DSP outside React
The SynthEngine class owns the AudioContext and nodes. React only holds state for UI rendering and forwards changes to the engine via useRef. That keeps per-frame re-renders from interfering with audio.
Going further
The example above is deliberately minimal. The playground app's webaudio example shows a full implementation with:
- Per-voice filtering (each note gets its own BiquadFilter for independent resonance)
- Full ADSR envelope (linearRampToValueAtTime for attack/decay, sustain level, exponential release)
- Stereo panning via StereoPannerNode
- Sustain pedal (latch Button that defers note-offs until released)
- Live parameter updates across active voices (knob drags affect currently-held notes)
- Hardware MIDI input via the Web MIDI API
It's the canonical reference if you're building a real synth interface with AudioUI.
Related
- Tone.js integration, a higher-level synth/effects library
- Audio Parameters, AudioUI's parameter model for strict value validation
- Interaction System, keyboard, wheel, touch input