Your AI Assistant Knows What You Said This Morning
You had a standup at 9:30. You committed to finishing the JWT migration by Thursday. You asked Sarah to review the webhook retry PR. You mentioned the rate limiter needs load testing.
By 2pm, when you open Claude to plan the afternoon, all of that context is gone. You retype it. You paraphrase it. You forget half of it. The AI gives you a reasonable but incomplete answer because it doesn't know what you already discussed.
That changes today.
MCP: the memory layer
Resonant now exposes an MCP server with 11 tools that let any MCP-compatible agent, from Claude Code to Codex, query your entire voice workspace.
Model Context Protocol (MCP) is an open standard that connects AI assistants to external data. Instead of copy-pasting meeting notes into your prompt, your AI tool calls Resonant directly and gets structured data back: transcripts with timestamps, speaker labels, app context, and ambient activity.
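To make "structured data" concrete, here is a sketch of what a transcript search result might look like and how an agent flattens it into prompt text. The field names (`matches`, `speaker`, `app`, and so on) are illustrative assumptions, not Resonant's actual schema:

```python
# Hypothetical shape of a transcript search result. Field names are
# illustrative assumptions, not Resonant's documented schema.
result = {
    "matches": [
        {
            "timestamp": "2024-05-14T09:31:07",
            "speaker": "You",
            "text": "I'll finish the JWT migration by Thursday.",
            "app": "Zoom",
            "source": "meeting",
        }
    ]
}

def to_prompt_context(result: dict) -> str:
    """Flatten matches into the plain text an agent would put in its prompt."""
    lines = []
    for m in result["matches"]:
        lines.append(f'[{m["timestamp"]}] {m["speaker"]} ({m["app"]}): {m["text"]}')
    return "\n".join(lines)

print(to_prompt_context(result))
```

The point is that the agent receives fields it can filter and cite, rather than a blob of pasted notes.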
What this looks like
Here are real queries you can make:
You → Claude Code
“What did I commit to in this morning's standup?”
→ Claude calls search("standup", type: "meeting") and returns your exact words with timestamps.
You → Cursor
“I described an API design earlier — find it and use it as the spec.”
→ Cursor calls search("API design", type: "dictation") and turns your spoken architecture into code.
You → Claude Code
“What was I working on yesterday afternoon?”
→ Claude calls ambient_timeline(date: "yesterday", start: "12:00", end: "18:00") and shows your app-by-app work timeline.
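Under the hood, each of the calls above is an MCP `tools/call` request; MCP uses JSON-RPC 2.0 on the wire. A minimal sketch of how a client would frame the first query (the `search` tool name and arguments are taken from the examples above):

```python
import json

def make_tool_call(call_id: int, name: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The standup query, as a client would send it over the local socket:
request = make_tool_call(1, "search", {"query": "standup", "type": "meeting"})
print(request)
```

Your AI tool builds and sends these messages for you; you never write them by hand.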
Everything stays local
The MCP server runs inside Resonant on your Mac. When Claude queries your meeting transcript, that query travels over a local socket — not the internet. The voice data that feeds these tools never leaves your device.
The only data that reaches Claude or Cursor is the text your AI tool decides to include in its prompt. The same text you'd paste manually. MCP automates the copy-paste — it doesn't create a new privacy surface.
Setup takes 30 seconds
Claude Code and Cursor auto-discover Resonant's MCP server. No config files. No API keys. Install Resonant, use it for a day, and your AI tools can query everything you said.
For VS Code, Windsurf, and other MCP clients, add Resonant's server config to your MCP settings — one JSON block.
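For reference, MCP settings entries conventionally live under an `mcpServers` key; the sketch below shows the shape. The server name and command path here are assumptions for illustration, not Resonant's documented values — check Resonant's setup docs for the real ones:

```json
{
  "mcpServers": {
    "resonant": {
      "command": "/Applications/Resonant.app/Contents/MacOS/resonant-mcp",
      "args": ["--stdio"]
    }
  }
}
```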
Why this matters
AI assistants are stateless. They start every conversation from zero. The quality of their output is bounded by the context you give them. Most people give bad context — not because they're lazy, but because retrieving and formatting context is work.
Resonant already captures everything you say, where you say it, and what you were looking at when you said it. MCP turns that captured context into something your AI tools can retrieve automatically. You stop being the bottleneck between your knowledge and your tools.
No other dictation tool does this. Your voice becomes the memory your AI assistant was missing.