Whisper Distil Large v3.5: Faster English Dictation on Mac
If you use Resonant heavily for English dictation throughout the day — long documents, continuous notes, hours of voice input — the speed of transcription matters. Whisper Distil Large v3.5 is a version of Whisper specifically trained to be faster on English while keeping accuracy within about 1% of the full Turbo model.
What knowledge distillation means in practice
Knowledge distillation is a technique where a smaller “student” model is trained to mimic a larger “teacher” model. The student learns from the teacher's outputs rather than from raw data alone, which means it can achieve comparable accuracy with fewer parameters.
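The idea can be sketched in a few lines. This is a toy illustration of the distillation objective, not Resonant's or Distil-Whisper's actual training code (the real recipe also uses a pseudo-label term and real audio batches); the function names here are hypothetical:

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature spreads probability mass, exposing the
    # teacher's "soft" knowledge about near-miss outputs.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the teacher's distribution to the student's.
    # Training minimizes this, so the student mimics the teacher's
    # full output distribution, not just the single correct label.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits track the teacher's incurs a lower loss:
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.2, 0.4]
far_student = [0.5, 4.0, 1.0]
assert distillation_loss(close_student, teacher) < distillation_loss(far_student, teacher)
```

Because the student matches distributions rather than hard labels, it can be much smaller than the teacher while preserving most of its behavior on the target domain.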
Distil-Whisper Large v3.5 was trained this way on English data, distilling from Whisper Large V3. The result: it runs 1.5x faster than Whisper Turbo on the same hardware, with English word error rates within roughly 1% on short-form content. The download size is comparable at around 1 GB; the speedup comes from the architecture, which uses a much shallower decoder, not from quantization.
On Apple Silicon, that 1.5x speedup is noticeable when you process longer recordings or fire off dictations back-to-back.
The tradeoff: English only
Distil-Whisper is trained on English. It doesn't support other languages. If there's any chance you'll need to dictate in a second language, use Whisper Turbo instead. Distil-Whisper's speed advantage comes precisely from focusing the model on a single language, so multilingual use isn't a design goal here.
For people who dictate exclusively in English and want the fastest Whisper-architecture model available offline, this is the right pick. If you already use Whisper Turbo and find yourself wishing it were faster, Distil-Whisper is the upgrade.
Distil-Whisper vs. Parakeet for English
Worth comparing directly: Parakeet TDT 0.6B v3 is also an excellent English model and is smaller at 640 MB. On most English content, Parakeet's accuracy is competitive with or better than Distil-Whisper, and it adds hotword support and multilingual capability.
Distil-Whisper makes sense if you specifically want Whisper-architecture transcription — for consistency with other Whisper-based systems, for long-form audio, or because your content happens to be served better by Whisper's training distribution. Both are strong models. Try them on your own voice and content to find which fits.
How to enable it
Open Resonant Settings → Transcription and select “Whisper Distil Large v3.5”. The ~1 GB download runs in the background. Once it completes, your subsequent dictations will be processed with the distilled model.
All processing happens on your Mac. No audio is transmitted anywhere. The INT8 quantized model files live locally in your home directory.
Download Resonant and try Distil-Whisper on your Mac.