Kokoro TTS inference on Apple Silicon via MLX.
Pure MLX implementation of the full Kokoro-82M text-to-speech pipeline. No PyTorch, no transformers, no third-party ML frameworks. Three lines to speak.
This package provides inference code only. Model weights are developed by hexgrad under the Apache 2.0 license and downloaded separately from HuggingFace Hub on first use.
Apple Silicon required. Python 3.10–3.12, MLX 0.22+.
```bash
pip install kokoro-mlx
```

```python
from kokoro_mlx import KokoroTTS

tts = KokoroTTS.from_pretrained()
tts.speak("Hello, world.")
```

Model weights download automatically from HuggingFace Hub on first use.
- Fully on-device via MLX, no server, no cloud, no network during inference
- Pure implementation with no PyTorch or transformers dependency
- 48 kHz output from native 24 kHz via FFT upsampling, matching the sample rate modern audio hardware expects
- Mixed-precision vocoder, bf16 through the network, float32 for the final waveform reconstruction
- Gapless streaming over a single persistent audio stream with no inter-chunk silence
- 54 voices across American English, British English, and additional languages
- WAV export with a single method call
- Thread-safe with internal lock for concurrent callers
- Context manager for automatic resource cleanup
- Adjustable speaking rate via a `speed` multiplier
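The thread-safety bullet above boils down to serializing synthesis behind a single lock. A minimal sketch of the pattern (a hypothetical wrapper for illustration, not the package's internals):

```python
import threading

class LockedSynth:
    """Serialize access to a synthesis function that is not re-entrant."""

    def __init__(self, synthesize):
        self._synthesize = synthesize
        self._lock = threading.Lock()

    def __call__(self, text):
        # Only one caller synthesizes at a time; others block here.
        with self._lock:
            return self._synthesize(text)

synth = LockedSynth(str.upper)
results = []
threads = [threading.Thread(target=lambda: results.append(synth("hi")))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results now holds four "HI" entries
```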
Load a model from a local directory or the HuggingFace Hub.

```python
tts = KokoroTTS.from_pretrained()
# or a specific repo
tts = KokoroTTS.from_pretrained("mlx-community/Kokoro-82M-bf16")
# or a local directory
tts = KokoroTTS.from_pretrained("/path/to/model")
```

Synthesize text and return a TTSResult.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `text` | `str` | required | Input text to synthesize |
| `voice` | `str` | `"af_heart"` | Voice name (see Available Voices) |
| `speed` | `float` | `1.0` | Speaking rate multiplier (>1 faster, <1 slower) |
| `sample_rate` | `int` | `24000` | Output sample rate: 24000 (native) or 48000 (2x upsampled) |
Synthesize text and yield audio chunks sentence by sentence. Lower latency than `generate` for longer inputs.
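Sentence-by-sentence chunking can be sketched with a simple regex splitter; this is an illustration of the idea, not the package's actual segmentation logic:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Split after sentence-final punctuation followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

split_sentences("Hello there. How are you? Fine!")
# → ['Hello there.', 'How are you?', 'Fine!']
```

Each sentence can then be synthesized and yielded as soon as it is ready, which is why streaming starts playing before the full input has been processed.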
Synthesize and immediately play text through the speakers.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `text` | `str` | required | Input text to synthesize |
| `voice` | `str` | `"af_heart"` | Voice name |
| `speed` | `float` | `1.0` | Speaking rate multiplier |
| `stream` | `bool` | `False` | Play chunk-by-chunk for lower latency |
| `stop_event` | `threading.Event` or `None` | `None` | Set to interrupt playback |
| `sample_rate` | `int` | `24000` | Output sample rate: 24000 or 48000 |
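The `stop_event` parameter implies a cooperative cancellation check between chunks. A minimal sketch of that pattern under that assumption (the `play_chunks` helper is hypothetical):

```python
import threading

def play_chunks(chunks, play, stop_event=None):
    """Play chunks in order, stopping early if stop_event is set."""
    played = []
    for chunk in chunks:
        # Check between chunks so a caller on another thread can interrupt.
        if stop_event is not None and stop_event.is_set():
            break
        play(chunk)
        played.append(chunk)
    return played

stop = threading.Event()
stop.set()  # request interruption before playback starts
play_chunks([b"chunk1", b"chunk2"], play=lambda c: None, stop_event=stop)
# → [] (nothing played)
```

Setting the event from another thread lets playback stop at the next chunk boundary rather than mid-buffer.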
Synthesize text and write audio to a WAV file.
```python
result = tts.save("Hello, world.", "output.wav", sample_rate=48000)
```

Return a sorted list of all available voice names.
```python
voices = tts.list_voices()
# ['af_alloy', 'af_aoede', 'af_bella', ...]
```

Release held resources. Called automatically when using the context manager.
```python
with KokoroTTS.from_pretrained() as tts:
    tts.save("Hello, world.", "output.wav")
```

```python
@dataclass
class TTSResult:
    audio: np.ndarray   # float32
    sample_rate: int    # 24000 or 48000
    duration: float     # seconds
    voice: str          # voice name used
```

Voice names follow a prefix convention: the first two characters identify the accent and gender.
| Prefix | Description |
|---|---|
| `af_` | American English, Female |
| `am_` | American English, Male |
| `bf_` | British English, Female |
| `bm_` | British English, Male |
| `ef_` | Other English, Female |
| `em_` | Other English, Male |
| `ff_` | French, Female |
| `hf_` | Hindi, Female |
| `hm_` | Hindi, Male |
| `if_` | Italian, Female |
| `im_` | Italian, Male |
| `jf_` | Japanese, Female |
| `jm_` | Japanese, Male |
| `pf_` | Portuguese, Female |
| `pm_` | Portuguese, Male |
| `zf_` | Chinese Mandarin, Female |
| `zm_` | Chinese Mandarin, Male |
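The prefix convention can be decoded mechanically. A small helper built from the table above (hypothetical, not part of the package API):

```python
# First letter → language/accent, second letter → gender, per the table above.
LANG = {"a": "American English", "b": "British English", "e": "Other English",
        "f": "French", "h": "Hindi", "i": "Italian", "j": "Japanese",
        "p": "Portuguese", "z": "Chinese Mandarin"}
GENDER = {"f": "Female", "m": "Male"}

def describe_voice(name: str) -> str:
    """Describe a voice name like 'af_heart' from its two-character prefix."""
    return f"{LANG[name[0]]}, {GENDER[name[1]]}"

describe_voice("af_heart")  # → 'American English, Female'
```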
American English (Female): af_alloy, af_aoede, af_bella, af_heart (default), af_jessica, af_kore, af_nicole, af_nova, af_river, af_sarah, af_sky
American English (Male): am_adam, am_echo, am_eric, am_fenrir, am_liam, am_michael, am_onyx, am_puck, am_santa
British English (Female): bf_alice, bf_emma, bf_isabella, bf_lily
British English (Male): bm_daniel, bm_fable, bm_george, bm_lewis
```
Text Input
  │
  ▼
G2P / Phonemizer (misaki)
  │
  ▼
Phoneme Sequence
  │
  ▼
TextEncoder (PL-BERT / ALBERT, 12 layers, 768 hidden)
  │
  ▼
ProsodyPredictor (duration + pitch)
  │
  ├── Voice Style Vector (per-voice, 256-dim)
  │
  ▼
Decoder (StyleTTS2-style, AdaIN + residual blocks) [bf16]
  │
  ▼
ISTFTNet Vocoder (80-bin mel → waveform) [float32]
  │
  ▼
Optional 2x FFT upsample (24 kHz → 48 kHz)
  │
  ▼
TTSResult { audio float32, duration, voice }
```
The network runs in bf16 for throughput. At the vocoder output, the signal is promoted to float32 for waveform reconstruction: magnitude recovery, phase extraction, inverse DFT, and overlap-add synthesis. This keeps inference fast while preserving the precision the iSTFT path needs.
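The optional 2x upsample step can be sketched in NumPy as spectrum zero-padding, i.e. band-limited interpolation. This is a simplified version that ignores Nyquist-bin handling, not the package's exact code:

```python
import numpy as np

def fft_upsample_2x(audio: np.ndarray) -> np.ndarray:
    """Double the sample rate by zero-padding the rFFT spectrum.

    irfft fills the missing high-frequency bins with zeros (band-limited
    interpolation); the *2 compensates for the longer inverse-transform
    normalization.
    """
    n = len(audio)
    spectrum = np.fft.rfft(audio.astype(np.float32))
    return (np.fft.irfft(spectrum, n=2 * n) * 2.0).astype(np.float32)

# One second of a 440 Hz tone at 24 kHz becomes one second at 48 kHz;
# every second output sample matches the original signal.
t = np.arange(24000) / 24000.0
x = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
y = fft_upsample_2x(x)
```

Because the interpolation is exact for band-limited content, no new frequency content is invented; the 48 kHz output simply matches what modern audio hardware expects.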
- Apple Silicon Mac (M1 or later)
- macOS 13+
- Python 3.10–3.12
- MLX 0.22+
```bash
git clone https://github.com/gabrimatic/kokoro-mlx.git
cd kokoro-mlx
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
python -m pytest tests/ -v
```

Skip model-loading tests with `-m "not slow"`.
Kokoro-82M by hexgrad · MLX by Apple · misaki G2P by hexgrad · MLX weights from mlx-community
Legal notices
This package provides inference code only. It does not include model weights.
The Kokoro-82M model weights are developed by hexgrad and released under the Apache License 2.0. The MLX conversion is hosted by mlx-community under the same license. By downloading and using the model weights, you agree to the terms of the Apache 2.0 license.
"MLX" is a trademark of Apple Inc. "HuggingFace" is a trademark of Hugging Face, Inc.
This project is not affiliated with, endorsed by, or sponsored by Apple, Hugging Face, or any other trademark holder. All trademark names are used solely to describe compatibility with their respective technologies.
This project depends on:
| Package | License |
|---|---|
| mlx | MIT |
| numpy | BSD-3-Clause |
| huggingface-hub | Apache-2.0 |
| soundfile | BSD-3-Clause |
| misaki | Apache-2.0 |
| sounddevice (optional) | MIT |
This inference code is released under the MIT License. See LICENSE for details.
The model weights have their own license (Apache 2.0). See Model License above.
Created by Soroush Yousefpour
