llm-speed

Llama-3.1-8B-Instruct-4bit on M3 Ultra (60-core GPU) + 96GB unified

suite: suite-v1
cli: 0.0.1-dev
signed: G8xb3zMu3+…
submitted: Apr 28, 2026

Workload results

Workload:       chat-short
Backend:        mlx@0.31.3
Model:          mlx-community/Llama-3.1-8B-Instruct-4bit
decode tok/s:   130.2
prefill tok/s:  400.3
TTFT:           340 ms
p50:            7.7 ms
p95:            7.7 ms
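As a quick sanity check (assuming p50/p95 report per-token decode latency, which the page does not state explicitly), the decode throughput and median latency should be roughly reciprocal:

```python
# Sanity check: per-token decode latency vs. reported decode throughput.
# Assumption (not stated on the page): p50 is median per-token decode latency.
p50_ms = 7.7            # median per-token latency from the table
decode_tok_s = 130.2    # reported decode throughput

implied_tok_s = 1000.0 / p50_ms   # tokens/second implied by the latency
print(round(implied_tok_s, 1))    # ≈ 129.9, close to the reported 130.2
```

The ~0.3 tok/s gap is within rounding of the displayed figures.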

Reproduce on your machine

Same workload, same model, signed at your rig. The exact command that produced this run:

$ pipx install llm-speed && llm-speed bench --model llama-3-1-8b-instruct --workload 'chat-short'

Runs in about a minute; your number lands on the leaderboard, signed and linkable. See "How it's measured" for the methodology.
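For intuition, TTFT and decode tok/s are typically measured around a streaming token generator: the first token marks the end of prefill, and everything after it is decode. This is a minimal generic sketch with a dummy generator standing in for the model, not llm-speed's actual harness:

```python
import time

def measure(stream):
    """Measure TTFT and decode throughput over a token stream.

    `stream` is any iterator yielding tokens; the first yield marks the
    end of prefill. Generic sketch, not llm-speed's implementation.
    """
    start = time.perf_counter()
    it = iter(stream)
    next(it)                              # first token -> TTFT
    ttft = time.perf_counter() - start
    n = 1
    for _ in it:
        n += 1
    decode_s = time.perf_counter() - start - ttft
    tok_per_s = (n - 1) / decode_s if decode_s > 0 else float("inf")
    return ttft, tok_per_s

# Dummy model: ~10 ms "prefill", then ~2 ms per decoded token.
def dummy_tokens(n=50):
    time.sleep(0.010)
    for i in range(n):
        if i:
            time.sleep(0.002)
        yield i

ttft, tps = measure(dummy_tokens())
print(f"TTFT {ttft * 1000:.0f} ms, decode {tps:.0f} tok/s")
```

Real harnesses average over several runs and pin prompt length, sampling settings, and batch size, since all of these move the numbers.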

Embed this run

Drop the badge into a README, blog post, or signature. Each render is a backlink to the signed result.

[![llm-speed: 130 tok/s on M3 Ultra (60-core GPU) (Llama-3.1-8B-Instruct-4bit)](https://llm-speed.com/badge/r_v2pbc0rq2l4.svg)](https://llm-speed.com/r/r_v2pbc0rq2l4)


Provenance

Run ID
r_v2pbc0rq2l4
Fingerprint hash
bbf15132ccbbe7d7
Public key
G8xb3zMu3+pznEici/TiW0gPk5qSNIYIikGCwm1rMdQ=
Received
2026-04-28 14:13:47
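The public key is 44 characters of base64, which decodes to 32 bytes, the size of an Ed25519 public key (an assumption; the page does not name the signature scheme). A quick check, plus a sketch of how a signed result could be verified with the third-party `cryptography` package:

```python
import base64

# Public key exactly as published on the result page.
pub_b64 = "G8xb3zMu3+pznEici/TiW0gPk5qSNIYIikGCwm1rMdQ="
raw = base64.b64decode(pub_b64)
print(len(raw))  # 32 bytes -> consistent with an Ed25519 public key

# Hypothetical verification sketch (requires `pip install cryptography`;
# the payload/signature layout is an assumption, not documented here):
# from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
# key = Ed25519PublicKey.from_public_bytes(raw)
# key.verify(signature, payload)  # raises InvalidSignature on mismatch
```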