Guide · Mar 3, 2026

How Doctors Can Use AI Safely

AI tools are arriving in clinical practice faster than most institutions can evaluate them. Ambient scribes that record patient encounters. Dictation tools that transcribe spoken notes. AI assistants that summarize charts, draft referral letters, or suggest diagnoses based on symptoms. Each one promises to reduce documentation burden. Each one handles patient data in some way.

The question physicians rarely have time to fully investigate is: what actually happens to that data? Where does it go, who can access it, how long is it retained, and what are the consequences if something goes wrong?

This isn't a reason to avoid AI. It's a reason to develop a clear framework for evaluating it. Some AI tools in clinical workflows are genuinely safe. Others carry risks that aren't obvious from the product page.

The core question: where does the data go?

Every AI tool that processes patient information creates a data pathway, and that pathway ends either on your device or somewhere else. Knowing which of those is true is the single most important fact about any clinical AI tool.

Cloud-based AI tools — the majority of them — send your input to a remote server for processing. That server runs the model, generates the output, and returns it to you. The audio, text, or data you submit travels outside your machine to make that happen.

When that input contains Protected Health Information — patient names, dates of birth, diagnoses, medications, clinical details — that transmission is a disclosure governed by HIPAA. It requires a Business Associate Agreement (BAA) with the vendor. It creates a breach surface. It means your patients' data is held, even temporarily, by a third party whose security posture you don't control.

Local AI tools run entirely on your device. The model executes on your hardware, the output is generated locally, and nothing leaves your machine. There is no transmission event, no breach surface, and no third party.
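
To make that concrete, here is a minimal sketch of what fully local transcription looks like using Apple's Speech framework, which ships with macOS. The function name and file URL are placeholders, and a real app would also need a speech-recognition usage description in its Info.plist. The key line is requiresOnDeviceRecognition, which makes the request fail outright rather than quietly fall back to Apple's servers:

```swift
import Speech

// A minimal sketch of fully local transcription with Apple's Speech
// framework. `audioFileURL` is a placeholder; a real app also needs
// the NSSpeechRecognitionUsageDescription key in its Info.plist.
func transcribeLocally(audioFileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.supportsOnDeviceRecognition else {
            print("On-device recognition is not available here.")
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
        // The critical line: the request fails rather than silently
        // routing audio to Apple's servers.
        request.requiresOnDeviceRecognition = true

        _ = recognizer.recognitionTask(with: request) { result, error in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            } else if let error = error {
                print("Transcription failed: \(error.localizedDescription)")
            }
        }
    }
}
```

A tool built this way has no server to misconfigure: the guarantee lives in the architecture, not in a vendor's policy document.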

A spectrum of risk

Not all AI tools carry the same risk profile. It's useful to think of clinical AI on a spectrum from highest to lowest data exposure:

Ambient AI scribes

Ambient scribes record audio continuously during the patient encounter. The entire visit — including what the patient says about symptoms, family history, mental health, and sensitive diagnoses — is transmitted to a remote server. Highest data exposure. Requires explicit patient consent, a signed BAA, and careful review of the vendor's data retention and deletion policies.

Cloud dictation tools

Physician-controlled audio is captured and sent to cloud servers for transcription. The physician decides what to say and when, which limits exposure compared to ambient recording — but PHI still travels to a third party. A BAA is required. Dragon Medical One, many AI dictation apps, and most general-purpose voice tools fall into this category.

Cloud LLM assistants (ChatGPT, Gemini, etc.)

Consumer AI assistants were not designed for clinical use. Pasting patient information into these tools transmits it to the provider's servers with no BAA, no HIPAA coverage, and no predictable data retention policy. Even with opt-outs for training, the data leaves your machine. This is the most common unsafe AI behavior in clinical practice today.

Local AI tools

Models run entirely on your device. No transmission, no third-party access, no BAA required. Lowest data exposure. Local speech recognition and on-device language models fall into this category.

Five questions to ask before adopting any AI tool

Before adding an AI tool to your clinical workflow, these questions will tell you most of what you need to know:

  1. Does the tool process data locally or in the cloud? This is the fundamental question. Ask the vendor directly. If the answer is unclear, treat it as cloud-based.
  2. Does the vendor offer a HIPAA-compliant plan and a BAA? If PHI will be transmitted, a BAA is not optional. If the vendor doesn't offer one, the tool is not appropriate for clinical use.
  3. How long does the vendor retain my data? Deletion policies vary widely. Some vendors retain audio for 30 days by default. Some retain data used for model improvement indefinitely. Know the retention window and confirm it contractually.
  4. What does the vendor's breach notification process look like? Under HIPAA, a business associate must notify you of a breach no later than 60 days after discovering it. Your BAA should specify this. Understand the process before you need it.
  5. What happens to my data if the company is acquired or shuts down? Startup AI vendors are frequently acquired. Data handling policies can change under new ownership. Review what happens to your data in a change-of-control scenario.

The case for local-first AI in clinical practice

The safest AI tools for clinical use are the ones that never create a transmission event in the first place. When a model runs on your device, there is no data to breach, no vendor to audit, and no policy to monitor. The compliance question answers itself.

This is increasingly practical. Modern Macs with Apple Silicon contain a dedicated Neural Engine capable of running state-of-the-art speech recognition and language models locally at speeds that match cloud tools. The hardware constraint that once made local AI impractical has been resolved. The safety advantage remains.
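
You can verify this on your own Mac. The Speech framework reports directly whether a recognizer can run entirely on-device; this short check (the locale is an assumption, substitute your own) is one way to see it:

```swift
import Speech

// Capability check: can this machine transcribe the given locale
// entirely on-device? On Apple Silicon Macs this is typically true
// for supported languages.
let locale = Locale(identifier: "en-US")  // assumption: adjust to your language
if let recognizer = SFSpeechRecognizer(locale: locale),
   recognizer.supportsOnDeviceRecognition {
    print("On-device recognition supported; audio need not leave this Mac.")
} else {
    print("On-device recognition unavailable; a cloud fallback would be required.")
}
```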

For voice dictation specifically, local processing is now the right architectural choice for clinical use — not because cloud tools are unacceptable, but because local tools offer equivalent accuracy with meaningfully less risk.

How Resonant fits into safe AI use

Resonant is a local-first voice dictation tool for Mac. When you speak, your audio is processed by an on-device speech recognition model running on Apple's Neural Engine. The transcribed text appears in whatever application has focus. The audio is discarded immediately. Nothing is transmitted.

This means that when you dictate a clinical note, a referral letter, or a prior authorization, the content of that dictation never leaves your machine. No server receives it. No vendor holds it. No BAA is required because no third party is involved in the processing.

For physicians who want to use AI to reduce documentation time — which is a reasonable and achievable goal — local dictation is the lowest-risk starting point. It gives you the core benefit (speaking is faster than typing) without the data exposure that cloud tools introduce.

What “safe AI” means in practice

There is no AI tool that is safe for every use. Ambient scribes can be used safely if the vendor is vetted, the BAA is in place, the patient has consented, and the retention policy is acceptable. Cloud dictation can be used safely with the right agreements. The question is always whether the safeguards match the risk.

What makes local AI categorically different is that it removes the risk rather than managing it. You don't need to audit a vendor, monitor a policy, or train your staff on data handling requirements — because the data never goes anywhere that requires auditing.

For the growing list of AI tools that run locally — on-device speech recognition, on-device language models for note drafting, on-device summarization — the framework is simpler: run the model on your hardware, keep the data on your machine, and the compliance question is answered by architecture rather than contract.
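
As a sketch of what that looks like for note drafting, assuming Apple's Foundation Models framework (macOS 26 and later) and its LanguageModelSession API: the function and prompt below are illustrative, not how any particular product works:

```swift
import FoundationModels

// A sketch of on-device note drafting, assuming Apple's Foundation
// Models framework (macOS 26+). The model runs on-device; the prompt
// never leaves the machine. Function name and prompt are illustrative.
func draftSummary(from dictatedNote: String) async throws -> String {
    // Confirm the on-device model is actually available on this machine.
    guard case .available = SystemLanguageModel.default.availability else {
        throw CocoaError(.featureUnsupported)
    }
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize this dictated clinical note in three sentences:\n\(dictatedNote)")
    return response.content
}
```

Nothing in that function opens a network connection; the drafting happens on the same silicon that handles the dictation.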

That's the version of AI use in clinical practice that doesn't require a policy committee to approve. Start there.

Download Resonant and try local AI dictation in your next clinical note. No cloud, no account, no data leaving your machine.
