wrapped · Apr 28, 2026
your run: /r/r_h0-use1ypnb
Llama-3.1-8B-Instruct-4bit
on M3 Pro
29.2 tok/s decode
M3 Pro (18-core GPU) + 36GB unified
rank in tier
4/11 M3 Pro runs · top 36%
best workload
chat-short
where the rig flew
slowest workload
—
single-workload run
backend
mlx
MLX is the fastest backend on Apple Silicon for dense models.
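A number like the 29.2 tok/s above is straightforward to sanity-check with a single decode run in mlx_lm. A minimal sketch, assuming `pip install mlx-lm`; the checkpoint id below is an assumption, since the card only names "Llama-3.1-8B-Instruct-4bit":

```python
from mlx_lm import load, generate

# Assumed mlx-community repo id for the quantized checkpoint.
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")

# A chat-short style prompt, rendered through the model's chat template.
messages = [{"role": "user", "content": "Give me three tips for faster unit tests."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# verbose=True makes mlx_lm print prompt and generation speed in tok/s.
generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```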
faster than
- llama-3.3-70b-Instruct-4bit on M3 Ultra: 16.8 tok/s
- qwen2.5-72b-Instruct-4bit on M3 Ultra: 16.3 tok/s
- Qwen3-32B-Instruct.Q4_K_M on smoke-host: 10.0 tok/s
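Back-of-envelope, those gaps work out to roughly 1.74x, 1.79x, and 2.92x, using only the numbers on this card:

```python
# Speedups implied by the card: this run's 29.2 tok/s vs. the three runs above.
baseline = 29.2
others = {
    "llama-3.3-70b-Instruct-4bit on M3 Ultra": 16.8,
    "qwen2.5-72b-Instruct-4bit on M3 Ultra": 16.3,
    "Qwen3-32B-Instruct.Q4_K_M on smoke-host": 10.0,
}
for name, tps in others.items():
    print(f"{name}: {baseline / tps:.2f}x faster")  # 1.74x, 1.79x, 2.92x
```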