llm-speed

Yi-Coder-9B-Chat-4bit

1 workload result across 1 hardware configuration.

Fastest local config

103.5 decode tok/s on M3 Ultra (60-core GPU) + 96GB unified via mlx

Local runs (1 run)

Runs from contributors' own machines via MLX, llama.cpp, vLLM, exllamav2, or ollama. Each result is signed on the submitter's hardware.

M3 Ultra (60-core GPU) + 96GB unified

Workload     Backend      Quant   decode tok/s   prefill tok/s   TTFT    Run
chat-short   mlx@0.31.3   —       103.5          390.9           307ms   r_3hvui9a1yuc
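The three throughput metrics combine into an end-to-end latency estimate: TTFT covers prompt prefill plus overhead, and every subsequent token arrives at the decode rate. A minimal sketch using the figures from the run above (the 256-token output length is an assumed example, not part of the run):

```python
def total_latency_s(ttft_s: float, output_tokens: int, decode_tok_s: float) -> float:
    """Estimate end-to-end generation time.

    The first token arrives at TTFT; the remaining (output_tokens - 1)
    tokens stream at the steady-state decode rate.
    """
    return ttft_s + (output_tokens - 1) / decode_tok_s

# Figures from the run above: TTFT 307 ms, decode 103.5 tok/s.
# 256 output tokens is a hypothetical request size for illustration.
latency = total_latency_s(0.307, 256, 103.5)
print(f"{latency:.2f} s")  # roughly 2.77 s for this example
```

Note that prefill rate (390.9 tok/s here) is already folded into TTFT for a given prompt length, so it is not a separate term in this estimate.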
