Voice dictation · for humans and AI agents

Just talk to your computer.

Hold a key anywhere on your machine, speak, release — your words land in the focused text field. A free, open-source, entirely local alternative to WisprFlow. And because Alexander AI Voice clones voices too, any AI agent can speak back in a voice you own.

macOS, Windows, Linux

Hold the shortcut on macOS, or Ctrl+Alt on Windows — from anywhere on your machine.
The Captures tab

Every capture, paired with audio and transcript.

Hold the shortcut, speak, release — a capture lands in the Captures tab. Replay the original audio, re-transcribe with a different model, refine with a local LLM, copy to clipboard, or send it straight to any MCP-aware agent. Nothing leaves your machine.

Captures

Beta
Search transcripts…
Apr 22, 3:47 PM · EN · Dictation
0:38
Refined with Qwen3 · 1.7B
Okay, so the pitch for Alexander AI Voice is basically this: it's a local-first voice studio. Everything runs on your machine. You clone voices from a few seconds of audio, generate speech across seven TTS engines, and now with the Captures tab, you can dictate into any app. No cloud, no API keys, no per-character fees. Your voice data never leaves your device. Privacy isn't a feature here — it's the architecture.
Play as Morgan
Copy
Re-refine
Export

Whisper, sized for every machine

Base, Small, Medium, Large, and Turbo. Pick per-capture — 99 languages at every tier, all local, all downloadable from inside the app.

LLM refinement that respects your words

A local Qwen model cleans ums, self-corrections, and punctuation — without rephrasing. Keep raw and refined side-by-side; the original audio is always kept.

Archived by default

Every dictation keeps both the audio and the transcript. From the Captures tab, search, re-run, or turn any capture into a voice sample for cloning.

MCP

Every agent gets a voice.

One tool call — voicebox.speak — and any MCP-aware agent can talk to you in a voice you’ve cloned. Claude Code, Cursor, Cline, or anything that speaks MCP.

01 · Add Alexander AI Voice to your MCP config
{
  "mcpServers": {
    "voicebox": {
      "url": "http://127.0.0.1:17493/mcp"
    }
  }
}
02 · The tool is now available
// In any MCP-aware agent:
await voicebox.speak({
  text: "Deploy complete.",
  profile: "Morgan",
})
Also exposed as POST /speak for anything that doesn’t speak MCP — ACP, A2A, shell scripts, or custom harnesses.
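For a script that targets the HTTP fallback directly, a minimal TypeScript sketch might look like this. The request body mirrors the MCP tool's parameters; any field beyond text and profile is an assumption, not a documented schema.

```typescript
// Sketch: build a POST /speak request for Alexander AI Voice's
// local HTTP endpoint. Body fields mirror the MCP tool's
// parameters (`text`, `profile`); nothing else is assumed.
function speakRequest(text: string, profile: string): Request {
  return new Request("http://127.0.0.1:17493/speak", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, profile }),
  });
}

// In a script, with the app running locally:
// await fetch(speakRequest("Deploy complete.", "Morgan"));
```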
Claude Code
$claude run
Tests passing (42 files)
Build succeeded in 12.4s
voicebox.speak({ profile: "Morgan" })
$
On your desktop
Speaking · Morgan
Tests passing. Ready to merge.

Per-agent voice

Bind each MCP client to a voice profile. Claude Code in Morgan, Cursor in Scarlett — you know which agent is talking without looking.
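A binding like that could be expressed in a config file along these lines. This is a sketch only: the voiceBindings key and the client names are hypothetical, not the app's documented schema.

```json
{
  "voiceBindings": {
    "claude-code": "Morgan",
    "cursor": "Scarlett"
  }
}
```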

Always visible

Every agent-initiated speech surfaces the pill. No silent background TTS — you always see what’s coming out of your machine.

Open protocols

MCP ships day one. ACP, A2A, and anything else built on a tool-call primitive slots into the same endpoint.

Install Alexander AI Voice, start dictating.

Free, open-source, local. No account, no API keys, no per-character fees.