llm-speed Leaderboard: models / stable-code-instruct-3b

stable-code-instruct-3b-4bit

2 workload results across 2 hardware configurations.

Fastest local config

192.5 decode tok/s

on an M3 Ultra (60-core GPU) with 96GB unified memory, via mlx (run r_y2_5y8oo97d)

Local runs (2 runs)

Runs from contributors' own machines via MLX, llama.cpp, vLLM, exllamav2, or ollama. Results are signed on the submitter's hardware.

M3 Ultra (60-core GPU) + 96GB unified

| Workload | Backend | Quant | decode tok/s | prefill tok/s | TTFT | Run |
| --- | --- | --- | --- | --- | --- | --- |
| chat-short | mlx@0.31.3 | 4bit | 192.5 | 560.7 | 226 ms | r_y2_5y8oo97d |

M3 Pro (18-core GPU) + 36GB unified

| Workload | Backend | Quant | decode tok/s | prefill tok/s | TTFT | Run |
| --- | --- | --- | --- | --- | --- | --- |
| chat-short | mlx@0.31.3 | 4bit | 19.37 | 131.3 | 967 ms | r_pqjsvd-cub4 |
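The three metrics relate in a simple way: TTFT is roughly the prompt length divided by prefill throughput, and end-to-end latency adds the output length divided by decode throughput. A minimal sketch using the M3 Ultra row above; the token counts are hypothetical, and the model ignores sampling and tokenizer overhead, so real TTFT will be slightly higher:

```python
def estimate_latency_s(prompt_tokens: int, output_tokens: int,
                       prefill_tps: float, decode_tps: float) -> float:
    """Rough end-to-end latency: prefill the whole prompt at prefill_tps,
    then decode each output token at decode_tps."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

# M3 Ultra row: 560.7 prefill tok/s, 192.5 decode tok/s.
# A ~127-token prompt lands near the listed 226 ms TTFT:
ttft_ms = estimate_latency_s(127, 0, 560.7, 192.5) * 1000
print(f"estimated TTFT: {ttft_ms:.0f} ms")

# Full reply of 256 output tokens (hypothetical length):
total_s = estimate_latency_s(127, 256, 560.7, 192.5)
print(f"estimated total: {total_s:.2f} s")
```

The same arithmetic explains why the M3 Pro run's TTFT is so much higher: at 131.3 prefill tok/s, the same prompt takes roughly four times as long before the first token appears.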
