
Local vs hosted: when does buying a GPU pay off?

At low usage, hosted APIs win on $/Mtok. At high sustained usage, a 4090 or M3 Ultra wins. Here's the break-even math, run against live numbers.

Verdict

The reference local rig on the leaderboard is an M3 Ultra (60-core GPU) at 168.3 tok/s. At a typical hosted price of ~$0.40 per million output tokens for a 70B-class model, a $2k GPU breaks even on output cost alone after roughly 5 billion output tokens generated; power draw and duty cycle move that number in either direction. The per-rig table below lets you redo the arithmetic for your own usage.
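The output-cost break-even above is one division. A minimal sketch, using the $2k hardware cost and ~$0.40/Mtok hosted price from the text as illustrative inputs:

```python
# Break-even on output cost alone: how many output tokens before a
# local GPU purchase beats paying a hosted API per token?
# The $2k and $0.40/Mtok figures are the illustrative numbers from the text.

def breakeven_tokens(hardware_cost_usd: float, hosted_price_per_mtok: float) -> float:
    """Output tokens at which cumulative hosted spend equals the hardware cost."""
    return hardware_cost_usd / hosted_price_per_mtok * 1_000_000

tokens = breakeven_tokens(2_000, 0.40)  # $2k GPU vs $0.40/Mtok hosted
print(f"{tokens:,.0f} output tokens")   # ≈ 5 billion
```

Swap in your own hardware cost and the current hosted price for your model class; the break-even scales linearly with both.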

Recommendation

Reference local rig on the leaderboard: M3 Ultra (60-core GPU) at 168.3 tok/s.

We don't sell hardware and we don't take affiliate commissions on hosted APIs, so the framing is just arithmetic. A consumer GPU's break-even point against a hosted endpoint depends on three things: your sustained decode tok/s, the hosted price per million output tokens, and your duty cycle. Below is a comparison table with each row anchored to a real submitted benchmark.
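Those three inputs fold into a break-even time. A sketch, assuming the $2k rig and $0.40/Mtok hosted price from the verdict; the 25% duty cycle is an assumed value for illustration, not a measured one:

```python
# Sketch: combine the three inputs (decode tok/s, hosted $/Mtok, duty cycle)
# into a break-even time. The duty cycle here is an assumption.

SECONDS_PER_DAY = 86_400

def breakeven_days(hardware_cost_usd: float,
                   hosted_price_per_mtok: float,
                   decode_tok_s: float,
                   duty_cycle: float) -> float:
    """Days of local generation until hosted spend would have matched the hardware cost."""
    tokens_needed = hardware_cost_usd / hosted_price_per_mtok * 1_000_000
    tokens_per_day = decode_tok_s * duty_cycle * SECONDS_PER_DAY
    return tokens_needed / tokens_per_day

# $2k rig decoding at 168.3 tok/s, busy 25% of the time, vs $0.40/Mtok hosted:
print(f"{breakeven_days(2_000, 0.40, 168.3, 0.25):.0f} days")  # ≈ 1375 days
```

The duty-cycle term dominates in practice: a rig that decodes for a few minutes a day pushes break-even out by decades, which is why low-usage workloads favor hosted APIs.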

Submitted benchmarks

Hardware                 Model                                               Decode tok/s   Run
M3 Ultra (60-core GPU)   mlx-community/DeepSeek-Coder-V2-Lite-Instruct-4bit  168.3          r_l_v1-zq_qaz
RTX 5090 (32GB)          Qwen3.6-27B-Q4_K_M.gguf                             69.89          r_bqsunbd6xa8
M3 Pro (18-core GPU)     mlx-community/Qwen2.5-7B-Instruct-4bit              30.52          r_llzv_g-ymaf

Side-by-side comparisons

See also: All hardware · All models · Methodology