Skytells

  • Home
  • Models
  • CLI
  • Changelog

Addressing the world's greatest challenges with AI. Enterprise research, foundation models, and infrastructure trusted by organizations worldwide since 2012.


© 2012–2026 Skytells, Inc. All rights reserved.

Live rankings

AI Model Leaderboard

Every major AI model ranked across benchmark quality, inference speed, agentic capability, programming aptitude, and cost efficiency — updated continuously from published evaluation data.

Explore full leaderboard · Browse model catalog

  • Tracked models: 296
  • Providers: 27
  • Benchmarked: 253
  • Avg. index: 32.2


296 models

| Rank | Model | ID | Tags | Provider | Score | Benchmarks | Inference | Agentic | Programming | Value | Price |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 261 | Nemotron 3 Super (120B A12B) | nemotron-3-super-120b-a12b | code, programming, tool use | NVIDIA | 0.0 | 48.3 | 0.0 | 8.7 | 26.8 | 0.0 | N/A |
| 262 | Nemotron Nano 9B v2 | nvidia-nemotron-nano-9b-v2 | text, inference | NVIDIA | 0.0 | 24.9 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 263 | o1-pro | o1-pro | multimodal, vision, multi-input reasoning | OpenAI | 0.0 | 47.1 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 264 | Phi-3.5-MoE-instruct | phi-3.5-moe-instruct | multimodal, vision, multi-input reasoning | Microsoft | 0.0 | 8.2 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 265 | Phi-3.5-vision-instruct | phi-3.5-vision-instruct | multimodal, vision, multi-input reasoning | Microsoft | 0.0 | 2.3 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 266 | Phi 4 Mini | phi-4-mini | text, inference | Microsoft | 0.0 | 2.0 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 267 | Phi 4 Mini Reasoning | phi-4-mini-reasoning | text, inference | Microsoft | 0.0 | 21.7 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 268 | Phi 4 Reasoning | phi-4-reasoning | text, inference | Microsoft | 0.0 | 23.1 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 269 | Phi 4 Reasoning Plus | phi-4-reasoning-plus | text, inference | Microsoft | 0.0 | 31.5 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 270 | QvQ-72B-Preview | qvq-72b-preview | multimodal, vision, multi-input reasoning | Alibaba Cloud / Qwen Team | 0.0 | 38.2 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 271 | Qwen2.5 14B Instruct | qwen-2.5-14b-instruct | text, inference | Alibaba Cloud / Qwen Team | 0.0 | 14.6 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 272 | Qwen2.5 32B Instruct | qwen-2.5-32b-instruct | text, inference | Alibaba Cloud / Qwen Team | 0.0 | 18.6 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 273 | Qwen2.5-Coder 7B Instruct | qwen-2.5-coder-7b-instruct | text, inference | Alibaba Cloud / Qwen Team | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 274 | Qwen2.5-Omni-7B | qwen2.5-omni-7b | multimodal, vision, multi-input reasoning | Alibaba Cloud / Qwen Team | 0.0 | 7.6 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 275 | Qwen2.5 VL 32B Instruct | qwen2.5-vl-32b | multimodal, vision, multi-input reasoning | Alibaba Cloud / Qwen Team | 0.0 | 21.2 | 0.0 | 1.6 | 0.0 | 0.0 | N/A |
| 276 | Qwen2.5 VL 72B Instruct | qwen2.5-vl-72b | multimodal, vision, multi-input reasoning | Alibaba Cloud / Qwen Team | 0.0 | 24.9 | 0.0 | 5.7 | 0.0 | 0.0 | N/A |
| 277 | Qwen2.5 VL 7B Instruct | qwen2.5-vl-7b | multimodal, vision, multi-input reasoning | Alibaba Cloud / Qwen Team | 0.0 | 9.6 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 278 | Qwen2 72B Instruct | qwen2-72b-instruct | text, inference | Alibaba Cloud / Qwen Team | 0.0 | 12.0 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 279 | Qwen2 7B Instruct | qwen2-7b-instruct | text, inference | Alibaba Cloud / Qwen Team | 0.0 | 2.4 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
| 280 | Qwen2-VL-72B-Instruct | qwen2-vl-72b | multimodal, vision, multi-input reasoning | Alibaba Cloud / Qwen Team | 0.0 | 9.3 | 0.0 | 0.0 | 0.0 | 0.0 | N/A |
Page 14 of 15 · 296 models



Rankings are based on multi-dimensional evaluation across benchmark quality, inference efficiency, and cost-per-output. Scores are updated continuously and may differ from individual third-party benchmarks.
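The note above describes a multi-dimensional composite: each model gets per-dimension scores (the table's Benchmarks, Inference, Agentic, Programming, and Value columns), which are combined into one overall index. Skytells does not publish the exact formula, so the following is only a minimal sketch of one plausible aggregation — a weighted average on a 0–100 scale. The weights, the dimension names as dictionary keys, and the rule that a missing dimension counts as 0.0 are all assumptions for illustration, not the actual methodology.

```python
# Hypothetical composite-index sketch. WEIGHTS and the missing-data rule
# are assumptions for illustration, not Skytells' published methodology.
WEIGHTS = {
    "benchmarks": 0.35,
    "inference": 0.20,
    "agentic": 0.15,
    "programming": 0.15,
    "value": 0.15,  # cost efficiency; 0.0 when pricing is N/A
}

def composite_score(dims: dict) -> float:
    """Weighted average of per-dimension scores (each 0-100).

    A dimension that is missing or None contributes 0.0, which is one
    way a model with no published pricing can sink toward the bottom
    of the table even with nonzero component scores.
    """
    total = 0.0
    for name, weight in WEIGHTS.items():
        total += weight * (dims.get(name) or 0.0)
    return round(total, 1)
```

For example, `composite_score({"benchmarks": 48.3, "agentic": 8.7, "programming": 26.8})` returns 22.2 under these assumed weights; the table's actual 0.0 overall scores imply a different (unpublished) aggregation rule.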
