AC Voice
Local, offline voice-to-text for macOS. Hold Option+Space, speak, release — text appears at your cursor. Powered by OpenAI Whisper running on the Apple Neural Engine. Nothing leaves your Mac.
Why AC Voice
100% Offline
After the initial model download, AC Voice never touches the network. Your voice and transcriptions stay on your Mac. No accounts, no telemetry, no cloud.
Sub-second Transcription
After warmup, dictations transcribe in 200–500 ms on Apple Silicon. Faster than typing, with no waiting room.
Apple Neural Engine
Runs OpenAI Whisper large-v3 directly on the ANE via WhisperKit. State-of-the-art accuracy in 100+ languages, with negligible CPU usage.
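For the curious, the core of such a pipeline can be sketched with WhisperKit's Swift API. This is a minimal sketch based on WhisperKit's published README, not AC Voice's actual source; exact signatures vary between WhisperKit releases.

```swift
import WhisperKit

// Sketch: load large-v3 and transcribe a file (API per WhisperKit's README;
// signatures are version-dependent — not AC Voice's actual code).
let pipe = try await WhisperKit(model: "large-v3")   // compiles for the ANE on first run
let results = try await pipe.transcribe(audioPath: "dictation.wav")
print(results.map(\.text).joined(separator: " "))
```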
Press, Speak, Done
Hold Option+Space anywhere on macOS. Speak. Release. Text is pasted at your cursor. Works in any app — Slack, Notion, Xcode, Terminal.
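A press-and-hold global hotkey like this is typically built on a system-wide event monitor. A rough sketch, assuming hypothetical `startRecording()` and `stopAndPaste()` helpers (not AC Voice's actual code; global monitors require the Accessibility permission covered in the install steps):

```swift
import AppKit

// Rough sketch of a press-and-hold global hotkey (needs Accessibility permission).
// startRecording()/stopAndPaste() are hypothetical placeholders.
NSEvent.addGlobalMonitorForEvents(matching: [.keyDown, .keyUp]) { event in
    guard event.keyCode == 49,                       // 49 = Space
          event.modifierFlags.contains(.option) else { return }
    if event.type == .keyDown {
        startRecording()                             // key went down: start capturing audio
    } else {
        stopAndPaste()                               // key released: transcribe and paste
    }
}
```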
Install in 4 Steps
Download the .dmg
Click the Download button above. The file is ~3 MB, signed with a Developer ID and notarized by Apple, so Gatekeeper won't complain.
Drag to Applications
Open the .dmg, drag AC Voice onto the Applications folder, then eject the disk image.
Grant Microphone & Accessibility
Launch AC Voice from Applications. macOS will prompt for microphone access (to record) and Accessibility (to register the global hotkey and paste text). Both are required.
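For reference, an app can check both permissions at launch using standard system APIs. A small sketch (illustrative only, not AC Voice's actual code):

```swift
import AVFoundation
import ApplicationServices

// Sketch: check microphone and Accessibility status at launch.
let micGranted = AVCaptureDevice.authorizationStatus(for: .audio) == .authorized
let axGranted  = AXIsProcessTrusted()   // true once Accessibility is granted
print("microphone:", micGranted, "accessibility:", axGranted)
```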
Hold Option+Space
AC Voice lives in your menu bar. Hold Option+Space anywhere, speak, release. Heads up: the first dictation after each launch takes ~3 seconds while the model loads into the Neural Engine. Subsequent ones are instant.
Requirements
Minimum
- macOS 14 (Sonoma)
- Apple Silicon (M1 or later)
- 8 GB RAM
- 2 GB free disk space
- Internet on first launch (one-time model download)
Recommended
- macOS 15 or later
- M2 Pro / M3 / M4
- 16 GB RAM
FAQ
Is my voice sent to a server?
No. After the initial one-time model download (~1.5 GB from HuggingFace), AC Voice runs entirely on-device. There are no servers, no accounts, no analytics. You can verify this with a network monitor of your choice, or by reading the source code on GitHub.
Why is the first dictation slow?
WhisperKit lazy-loads 1.27 GB of model weights into the Apple Neural Engine on the first transcription, not at app startup. This adds ~2–3 seconds the very first time you press Option+Space after launch. Every dictation after that runs in 200–500 ms until you quit the app.
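This is the standard load-on-first-use pattern: pay the expensive load inside the first call rather than at launch, so the app starts instantly. A simplified illustration (not AC Voice's actual code):

```swift
import Foundation

// Load-on-first-use: the expensive model load happens inside the first
// transcribe() call, not at init, so app launch stays fast.
final class TranscriptionEngine {
    private var modelLoaded = false

    func transcribe(_ audio: Data) -> String {
        if !modelLoaded {
            loadWeights()            // ~2-3 s, first call only
            modelLoaded = true
        }
        return runInference(audio)   // 200-500 ms once warm
    }

    private func loadWeights() { /* stand-in for loading 1.27 GB into the ANE */ }
    private func runInference(_ audio: Data) -> String { "…" } // stand-in
}
```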
What languages does it support?
All 100+ languages that OpenAI Whisper large-v3 supports, with auto-detection. There's also a 'Translate to English' toggle in the menu bar that translates from any language into English on the fly.
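Translation is a built-in Whisper decoding task rather than a separate model. In WhisperKit terms it looks roughly like the following (hedged sketch: the `DecodingOptions`/`decodeOptions` names follow WhisperKit's README and may differ between versions — not AC Voice's actual code):

```swift
import WhisperKit

// Sketch: Whisper's built-in translate task (WhisperKit API; version-dependent).
let pipe = try await WhisperKit(model: "large-v3")
let options = DecodingOptions(task: .translate)   // any language → English
let results = try await pipe.transcribe(audioPath: "dictation.wav",
                                        decodeOptions: options)
```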
How do I get updates?
Right now, updates are released as new .dmg files on this page and on the GitHub Releases page. There's no auto-updater yet. Watch or star the GitHub repo to be notified of new versions.
Try it now
Free, open source, runs on your Mac.