llm-speed

Llama-3.1-8B-Instruct-4bit on M3 Pro (18-core GPU) + 36GB unified

Suite: suite-v1
CLI: 0.0.1-dev
Signed: S8711zXnps…
Submitted: Apr 28, 2026

Workload results

| Workload | Backend | Model | Decode tok/s | Prefill tok/s | TTFT | p50 | p95 |
|---|---|---|---|---|---|---|---|
| chat-short | mlx@0.31.3 | mlx-community/Llama-3.1-8B-Instruct-4bit | 29.20 | 203.3 | 669 ms | 34.1 ms | 36.0 ms |
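The headline metrics above can all be derived from per-token emission timestamps. The sketch below is illustrative only, not the suite's actual implementation: the function name, nearest-rank percentile choice, and the convention of measuring inter-token latency over the decode phase (tokens after the first) are all assumptions.

```python
# Illustrative sketch: deriving decode tok/s, TTFT, and latency percentiles
# from per-token emission timestamps. NOT llm-speed's actual code; the
# methodology here (nearest-rank percentiles, decode phase = tokens after
# the first) is an assumption for illustration.

def summarize(request_start: float, token_times: list[float]) -> dict:
    """token_times: wall-clock times (s) at which each output token arrived."""
    ttft_ms = (token_times[0] - request_start) * 1000.0
    # Inter-token gaps over the decode phase (everything after the first token).
    gaps_ms = sorted(
        (b - a) * 1000.0 for a, b in zip(token_times, token_times[1:])
    )

    def pct(p: float) -> float:
        # Nearest-rank percentile over the sorted gaps.
        idx = min(len(gaps_ms) - 1, round(p / 100 * (len(gaps_ms) - 1)))
        return gaps_ms[idx]

    decode_s = token_times[-1] - token_times[0]
    return {
        "decode_tok_s": (len(token_times) - 1) / decode_s,
        "ttft_ms": ttft_ms,
        "p50_ms": pct(50),
        "p95_ms": pct(95),
    }

# Toy example: 11 tokens, first arriving 0.669 s after the request,
# then one every 34 ms.
stats = summarize(0.0, [0.669 + 0.034 * i for i in range(11)])
```

With perfectly even 34 ms gaps as in the toy data, p50 and p95 coincide; on a real run the tail (p95) sits above the median, as in the table above.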

Reproduce on your machine

Same workload, same model, signed on your own rig. This is the exact command that produced this run:

$ pipx install llm-speed && llm-speed bench --model llama-3-1-8b-instruct --workload 'chat-short'

It runs in about a minute, and your number lands on the leaderboard, signed and linkable. See "How it's measured" for the methodology.

Embed this run

Drop the badge into a README, blog post, or signature. Each render is a backlink to the signed result.

llm-speed: 29.2 tok/s on M3 Pro (18-core GPU) (Llama-3.1-8B-Instruct-4bit)
[![llm-speed: 29.2 tok/s on M3 Pro (18-core GPU) (Llama-3.1-8B-Instruct-4bit)](https://llm-speed.com/badge/r_h0-use1ypnb.svg)](https://llm-speed.com/r/r_h0-use1ypnb)


Provenance

Run ID: r_h0-use1ypnb
Fingerprint hash: a52e5dd258afe436
Public key: S8711zXnpsbOS9F8EZCne0DE3jWiyeYAqEDECBzTVWk=
Received: 2026-04-28 14:39:18
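A short fingerprint like the one above is typically a truncated digest over canonicalized run metadata. The sketch below is a hedged guess at such a scheme, not llm-speed's actual one: the choice of SHA-256, JSON canonicalization with sorted keys, the 16-hex-character truncation, and the metadata fields are all assumptions.

```python
# Illustrative sketch of how a short run fingerprint could be produced:
# a truncated SHA-256 over canonicalized run metadata. The actual llm-speed
# scheme is unknown; algorithm, canonicalization, and fields are assumptions.
import hashlib
import json

def fingerprint(meta: dict) -> str:
    # Canonical JSON: sorted keys, no whitespace, so the same metadata
    # always hashes to the same digest.
    canonical = json.dumps(meta, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

fp = fingerprint({
    "model": "mlx-community/Llama-3.1-8B-Instruct-4bit",
    "workload": "chat-short",
    "backend": "mlx@0.31.3",
})
```

The appeal of this kind of scheme is that anyone holding the same metadata can recompute the fingerprint and check it against the published one, independent of the signature.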