Resonant — AI Memory Layer
The quality of AI output is bounded by the context you provide. Resonant turns your voice into persistent, searchable context that any AI agent — Claude, Codex, and more — accesses automatically via MCP.
11 MCP tools. Auto-discovers in Claude Code and Cursor. Everything runs locally on your Mac.
The problem
Your AI tools don't know what you said this morning. They don't know what you decided yesterday. They don't know what you've been working on for the past three hours.
Every conversation starts cold. You retype context. You paste meeting notes. You re-explain decisions you already made. The model gets a fraction of what you know, and its output reflects that.
The bottleneck isn't the model. It's the context gap between what you know and what your AI tool knows.
The solution
Resonant captures everything you say — dictations, meetings, voice memos — plus where you say it: which app, which file, which window, how long. This is your ambient context.
All of it is exposed to your AI tools via MCP. 11 tools your agents can call to search, recall, and reference your voice data. No copy-pasting. No “let me find my notes.”
Auto-discovers in Claude Code and Cursor. Your AI tools gain memory the moment you install Resonant.
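Under the hood, each tool invocation is a standard MCP `tools/call` JSON-RPC request from the client to Resonant's server. A minimal sketch in Python; the `search` tool name comes from this page, but the argument names are illustrative assumptions, not Resonant's documented schema:

```python
import json

def make_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 request an MCP client (e.g. Claude Code) sends
    to call a server tool. "tools/call" is the standard MCP method name."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical parameters, mirroring the examples later on this page.
payload = make_tool_call("search", {"query": "rate limiter", "type": "meeting"})
```

The client frames and transports this payload; the agent only decides which tool to call and with what arguments.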
What Resonant captures
Dictations: Every voice prompt, tagged with app context.
Meetings: Full transcripts with speaker labels.
Memos: Long-form recordings, auto-summarized.
Ambient context: Apps, windows, URLs, dwell time.
Journal: Auto-generated daily work summary.
What your AI tools see
Structured text with timestamps, speaker labels, app context, and summaries. They query what they need, when they need it. No raw audio ever leaves your Mac.
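What a returned record might look like, sketched as a Python dict. The field names are illustrative assumptions, not Resonant's actual schema; the point is that agents receive structured, filterable text, never audio:

```python
# Hypothetical shape of a meeting record an AI tool might get back.
# All field names are assumptions for illustration.
record = {
    "type": "meeting",
    "source": "Zoom",
    "started_at": "2025-03-20T14:00:00-07:00",
    "duration_min": 38,
    "segments": [
        {"t": "00:12:04", "speaker": "You", "text": "Fixed windows will break under burst traffic."},
        {"t": "00:12:31", "speaker": "Alex", "text": "Sliding window at 100 req/min per org, then."},
    ],
    "summary": "Decided on a sliding window rate limiter; Alex owns the k6 load tests.",
}

# Because the data is structured, an agent can filter by speaker, time, or
# topic without ever touching raw audio.
your_lines = [s["text"] for s in record["segments"] if s["speaker"] == "You"]
```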
In practice
These are real interaction patterns. You ask a question. Your AI tool calls Resonant's MCP server. It gets structured data back and keeps working.
What did we decide about the rate limiter in the infra meeting?
Claude Code calls search("rate limiter infra meeting", type: "meeting")
From your Thursday infra meeting (Zoom, 38 min): The team decided on a sliding window rate limiter at 100 req/min per org. You pushed back on the fixed window approach because of burst traffic. Alex owns the k6 load tests, due by Tuesday.
What was I working on yesterday afternoon? I need to pick up where I left off.
Cursor calls ambient_timeline(date: "yesterday", start: "12:00")
Yesterday afternoon: VS Code (auth-service/middleware.ts) 12:00-14:30, Slack (#eng-team) 14:30-14:45, Chrome (Grafana api-latency dashboard) 14:45-15:20, VS Code (webhook-retry/handler.ts) 15:20-17:00. You had 3 dictations during that window, the last one was about retry backoff logic.
I recorded a voice memo about the notification architecture last week. Use that as the spec.
Claude Code calls search("notification architecture", type: "memo")
Found your memo from March 22 (4 min 12s). You specified three event types: user actions, system events, scheduled digests. Separate tables over polymorphic design. Async fan-out via job queue. 30-day retention for user actions, 7 days for system events. I'll use this as the architecture spec.
Search everything I've said this week about the JWT migration.
Cursor calls search("JWT migration", limit: 20)
Found 6 results this week:
Monday 2:14pm (dictation, VS Code): Detailed the fallback validator approach for legacy tokens
Tuesday 9:35am (meeting, Zoom standup): Committed to finishing the PR by Thursday
Tuesday 4:10pm (memo): Walked through the parallel validation strategy while on a walk
Wednesday 11:20am (dictation, Slack): Told Sarah the PR was up for review
Thursday 10:00am (meeting, Zoom): Demo'd the migration path to the team
Thursday 3:45pm (dictation, VS Code): Noted edge case with pre-migration tokens
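Given timeline data like the `ambient_timeline` example above, an agent can answer "where did I leave off" with plain data manipulation. A sketch assuming a hypothetical result shape (the field names are assumptions):

```python
from datetime import datetime

# Hypothetical ambient_timeline result, mirroring the example above.
timeline = [
    {"app": "VS Code", "context": "auth-service/middleware.ts", "start": "12:00", "end": "14:30"},
    {"app": "Slack", "context": "#eng-team", "start": "14:30", "end": "14:45"},
    {"app": "Chrome", "context": "Grafana api-latency dashboard", "start": "14:45", "end": "15:20"},
    {"app": "VS Code", "context": "webhook-retry/handler.ts", "start": "15:20", "end": "17:00"},
]

def minutes(entry: dict) -> int:
    """Dwell time of one timeline entry, in minutes."""
    fmt = "%H:%M"
    delta = datetime.strptime(entry["end"], fmt) - datetime.strptime(entry["start"], fmt)
    return delta.seconds // 60

last = timeline[-1]                    # where you left off
longest = max(timeline, key=minutes)   # where most of the afternoon went
```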
What's queryable
Meetings: Full transcripts from Zoom, Google Meet, and Teams. Speaker-attributed, timestamped, searchable by participant or topic.
Dictations: Every voice-to-text input you make, tagged with the app and window title where you dictated. Your prompts become reusable context.
Memos: Long-form voice recordings, transcribed and auto-summarized. Capture architecture decisions on a walk, query them from your desk.
Ambient context: A passive record of which apps you used, which files you opened, which URLs you visited, and how long you spent. All local.
Journal: An auto-generated summary of your workday: meetings attended, dictations made, apps used, memos recorded. Queryable by date.
Usage stats: Aggregated data on your voice usage: total dictations, meeting hours, words spoken, most-used apps. Your AI tools can reference trends.
Works with your tools
Resonant registers as an MCP server automatically. Open Claude Code and your voice workspace is already connected. No config files.
Cursor detects Resonant's MCP server on launch. All 11 tools are available in Composer and chat without any manual setup.
Add Resonant's MCP endpoint to your VS Code settings. One-time setup, then your Copilot Chat sessions can query your voice workspace.
Point Windsurf's MCP settings to Resonant's local server. Once configured, Cascade can search your meetings, dictations, and context.
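For clients without auto-discovery, manual setup generally means adding an entry to the client's MCP configuration file. The fragment below follows the common `mcpServers` convention used by several MCP clients; the server name, key names, and command path are assumptions (and the exact schema varies by client), so check Resonant's documentation for the actual values:

```json
{
  "mcpServers": {
    "resonant": {
      "command": "/Applications/Resonant.app/Contents/MacOS/resonant-mcp"
    }
  }
}
```

Once the client restarts, Resonant's tools appear alongside the client's built-in capabilities.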
Architecture
Resonant's MCP server runs over a local socket. When Claude Code or Cursor queries your voice workspace, the request and response never leave your machine. No cloud relay. No API proxy.
Speech recognition runs on Apple Neural Engine. Audio is processed on-device and discarded. Only the transcribed text is stored — in a local database on your filesystem.
Your AI tools see structured text: transcripts, timestamps, summaries. They never see raw audio. They never access anything you haven't spoken into Resonant.
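The local-socket claim is easy to picture as plain JSON-RPC over an OS-level socket. The sketch below uses `socketpair()` as a stand-in for Resonant's actual transport (which is not specified here) and a dummy server side, to show that nothing in the exchange touches the network:

```python
import json
import socket

# socketpair() creates two connected in-process sockets: a stand-in for the
# local transport between an MCP client and Resonant's server.
client, server = socket.socketpair()

# Client side: a standard MCP request to list available tools.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
client.sendall(json.dumps(request).encode() + b"\n")

# Dummy server side: answer with a one-tool listing (illustrative only).
received = json.loads(server.recv(4096).decode())
response = {"jsonrpc": "2.0", "id": received["id"], "result": {"tools": [{"name": "search"}]}}
server.sendall(json.dumps(response).encode())

# The full round trip happened over a local socket: no network interface involved.
reply = json.loads(client.recv(4096).decode())
```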
Free. Local. Always.
Install Resonant. Your meetings, dictations, and work context become searchable memory your AI tools access via MCP. No subscription. No cloud.
Requires macOS 14+ · Apple Silicon