Guide · Mar 22, 2026

How to Use Voice Dictation with Claude Code, Cursor, and AI Coding Tools

Claude Code ships with a built-in /voice command. It works for short prompts. For anything longer, most developers switch to a dedicated dictation tool. Here's why, and how.

TL;DR

  • Claude Code /voice: Hold spacebar, speak, release. Quick but limited — cloud audio, one model, hold-to-talk only.
  • Standalone dictation (Resonant): Hold fn, speak anywhere, release. Local processing, 10+ models, works in every app on your Mac.
  • Why it matters: Voice prompts carry more context than typed ones. More context means better model output.

Method 1: Claude Code's built-in /voice command

Claude Code has a native voice mode. Type /voice in the prompt to enable it. Then hold the spacebar, speak your prompt, and release to submit.

The basics

1. Enable voice mode. Type /voice in the Claude Code prompt.
2. Hold spacebar and speak. Your audio is recorded while the key is held.
3. Release to submit. The audio is transcribed and sent as your prompt.

That's the whole workflow. For a quick “run the tests” or “what does this function do”, it gets the job done.

But once your prompts get longer or more technical, the limitations start to show.

Where /voice falls short

  • Cloud audio. Your spoken prompts are sent to a remote server for transcription. If you're describing internal architecture, auth flows, or proprietary APIs, that audio leaves your machine.
  • Hold-to-talk. You have to hold the spacebar for the entire duration of your prompt. For a 30-second explanation of a complex bug, that's awkward. For a two-minute architecture dump, it's impractical.
  • One transcription model. You get whatever model the /voice command uses. No choice of model, no tuning for your language or vocabulary, no hotword support for project-specific terms.
  • Terminal only. The /voice command lives inside Claude Code's CLI. If you also use Cursor, VS Code, Windsurf, or any other tool, you need a separate solution for each one.
  • Accuracy on technical terms. Try dictating "the useEffect cleanup in the AuthProvider context" and check what comes back. General-purpose cloud transcription stumbles on React hooks, API names, and framework-specific vocabulary.

These aren't edge cases. If you dictate prompts regularly — especially prompts about code, system design, or debugging — you hit every one of them within the first week.

Method 2: A standalone dictation tool

A standalone dictation app runs at the OS level. It's not tied to Claude Code or any specific editor. Hold a hotkey, speak, release — the transcribed text appears wherever your cursor is. Terminal, editor, browser, Slack. One tool for everything.

With Resonant

1. Hold fn (or your chosen hotkey). Works anywhere on your Mac: terminal, editor, browser.
2. Speak naturally. No spacebar hold; release when you're done, or use toggle mode.
3. Text appears at your cursor. Already in the Claude Code prompt, Cursor chat, or wherever you were typing.

The result is the same — you speak, text appears, Claude Code processes it. The difference is everything underneath.

Side-by-side comparison

|                       | Claude Code /voice        | Resonant                                      |
|-----------------------|---------------------------|-----------------------------------------------|
| Processing            | Cloud                     | On-device (Apple Neural Engine)               |
| Audio leaves your Mac | Yes                       | Never                                         |
| Works in              | Claude Code only          | Every app on your Mac                         |
| Transcription models  | 1 (cloud)                 | 10+ local (Parakeet, Whisper, Moonshine, etc.) |
| Hotword support       | No                        | Yes: custom vocabulary for project terms      |
| Activation            | Hold spacebar             | fn key, toggle, or custom hotkey              |
| Languages             | English-focused           | 99+ languages with dedicated models           |
| Price                 | Included with Claude Code | Free                                          |

What better dictation actually does for your prompts

The point of voice dictation for Claude Code isn't saving a few keystrokes. It's context.

When you type, you abbreviate. You skip the backstory. You leave out the edge case you thought of but didn't want to type out. The model gets a compressed version of what you know.

When you speak, you don't compress. You naturally include the why, the constraints, the thing that went wrong last time. That produces better prompts, which produces better code.

Typed

“fix the race condition in the webhook handler”

Dictated

“the Polar webhook handler in convex/http.ts has a race condition — when two subscription events fire within milliseconds of each other, the second one reads stale data from the database because the first mutation hasn't committed yet. I need either optimistic locking or a serialization strategy. The subscriptions table uses the polarSubscriptionId as the key.”

Same developer. Same problem. The dictated version takes about twelve seconds to say. Typing it takes over a minute — so most people just don't. They send the short version and wonder why the model produces a generic answer.
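Incidentally, the fix the dictated prompt asks for is a standard pattern. Here is a minimal sketch of optimistic locking with a version counter and a retry loop, using an in-memory Map in place of the real database; the names (`SubscriptionRow`, `casUpdate`, `applyWebhookEvent`) are illustrative, not Convex or Polar APIs:

```typescript
// A row carries a version number; writes only succeed if the version
// still matches what the writer read (compare-and-swap).
type SubscriptionRow = { status: string; version: number };

// Stand-in for the subscriptions table, keyed by polarSubscriptionId.
const table = new Map<string, SubscriptionRow>();

// CAS write: fails (returns false) if another event committed first.
function casUpdate(id: string, expectedVersion: number, status: string): boolean {
  const row = table.get(id);
  if (!row || row.version !== expectedVersion) return false; // stale read
  table.set(id, { status, version: row.version + 1 });
  return true;
}

// Retry loop: re-read and re-apply until the CAS succeeds, so two
// near-simultaneous webhook events serialize instead of clobbering
// each other's writes.
function applyWebhookEvent(id: string, status: string): void {
  for (;;) {
    const row = table.get(id);
    if (!row) {
      table.set(id, { status, version: 1 });
      return;
    }
    if (casUpdate(id, row.version, status)) return;
  }
}

table.set("sub_123", { status: "trialing", version: 1 });
applyWebhookEvent("sub_123", "active");
applyWebhookEvent("sub_123", "canceled");
console.log(table.get("sub_123")); // final row: status "canceled", version 3
```

The point stands either way: the model can only propose a fix like this if the prompt contains the details that make it applicable, and those details are far cheaper to speak than to type.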

Where this fits across AI coding tools

A standalone dictation tool isn't just for Claude Code. If you use multiple AI tools — and most developers do — you get one voice input layer for all of them.

  • Claude Code (terminal). Dictate prompts directly into the CLI. The text appears at your cursor just like typing. No /voice command needed — Resonant works in any terminal.
  • Cursor / Windsurf. Dictate into the chat panel or inline prompt. Describe the refactor, the bug, the feature — with full context — in seconds.
  • VS Code + Copilot Chat. Speak your question into the chat input. Works the same as typing, but you include more detail because speaking is faster.
  • Claude.ai / ChatGPT (browser). Dictate into the web prompt field. Architecture discussions, code reviews, debugging sessions — all voice-driven.
  • Slack / Linear / Notion. The same dictation works for writing tickets, PR descriptions, messages, and docs. One muscle memory for everything.

The privacy angle for developers

When you dictate a Claude Code prompt, you're often describing internal systems. Auth flows. Database schemas. API contracts. Bugs in production. These aren't casual conversations.

With Claude Code's /voice, that audio goes to a cloud transcription service before the text reaches Claude. That's two cloud hops for your spoken architectural details.

With a local dictation tool, the audio stays on your Mac. The Apple Neural Engine transcribes it on-device. The only thing that leaves your machine is the finished text — and only when you choose to submit it to Claude Code.

For indie developers, this might not matter. For anyone working on proprietary code, enterprise systems, or anything under NDA, the architecture of the dictation tool matters as much as the architecture of the code.

Getting started

If you want to try Claude Code's built-in voice, type /voice in the CLI and hold the spacebar. It's already installed.

If you want local, system-wide dictation that works in Claude Code, Cursor, and everything else on your Mac: download Resonant. It's free. Hold fn, speak, release. The text appears wherever your cursor is.

No account. No cloud. No subscription. Just hold a key and talk.

Frequently asked questions

How do I use voice in Claude Code?

Type /voice in the Claude Code prompt to enable voice mode. Hold the spacebar to record, speak, and release to submit. For a more capable option, use a standalone tool like Resonant — hold fn, speak, release, and the text appears at your cursor in any app.

Does Claude Code voice send my audio to the cloud?

Yes. The /voice command sends your audio to a cloud transcription service. If your prompts reference internal systems, proprietary code, or architecture details, that audio leaves your machine. Local tools like Resonant process everything on-device.

What's the best voice dictation tool for Claude Code?

For developers, Resonant. It runs locally, supports 10+ transcription models (including Parakeet and Whisper), works system-wide across all your tools, and is free. It's built for the way developers actually work — in terminals, editors, and browsers.

Can I use voice dictation with Cursor and other AI coding tools?

A standalone dictation tool works everywhere — Cursor's prompt field, VS Code chat, Claude Code's terminal, Warp, iTerm, your browser, Slack, Linear. One hotkey for voice input across every app.

Is Claude Code's /voice good enough for daily use?

For quick, short prompts it works fine. For longer prompts with technical vocabulary, or if you care about keeping audio local, most developers outgrow it. The hold-to-talk constraint alone makes it impractical for prompts longer than about 15 seconds.


Try Resonant free

Private voice dictation for Mac and Windows. 100% on-device, no account required. Download and start speaking in under a minute.