
AI Benchmarks for 2025

The term “AI benchmark” is thrown around a lot and can be confusing because it is used in slightly different ways depending on the context. In this article we will try to understand the different meanings of the term and look at the most important recent AI benchmarks.

What Does “AI Benchmark” Mean?

In general, an AI benchmark is a standardized way to evaluate the performance of an AI system or model. It is made up of the following components (a toy code sketch follows the list).

  • Task: The type of problem being solved. For example, image classification, text generation, translation, reasoning, etc.
  • Dataset: The data used to test the model’s performance. For example, ImageNet, SQuAD, COCO, MMLU, etc.
  • Metrics: How performance is measured. For example, accuracy, F1 score, BLEU, perplexity, latency, etc.
  • Protocol: The process or rules for how models are evaluated. For example, train/test split, few-shot vs. zero-shot, human feedback, etc.
  • Leaderboard: A ranked list of model performances against the same benchmark. For example, the Papers With Code leaderboard or the HuggingFace leaderboard.
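
To make these pieces concrete, here is a minimal, self-contained Python sketch. All names and data in it are made up for illustration; it simply wires a toy task, dataset, metric, zero-shot protocol, and leaderboard together.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical toy benchmark: each piece below maps to one component from the
# list above (task, dataset, metric, protocol, leaderboard).

@dataclass
class Example:
    prompt: str  # the task input
    label: str   # the expected answer

# Dataset: a tiny, made-up sentiment classification set.
DATASET: List[Example] = [
    Example("The movie was fantastic.", "positive"),
    Example("I would not recommend this product.", "negative"),
]

# Metric: plain accuracy.
def accuracy(predictions: List[str], labels: List[str]) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Protocol: zero-shot, i.e. the model only sees an instruction plus the input.
def run_benchmark(model: Callable[[str], str]) -> float:
    prompts = [f"Classify the sentiment: {ex.prompt}" for ex in DATASET]
    predictions = [model(p) for p in prompts]
    return accuracy(predictions, [ex.label for ex in DATASET])

# Leaderboard: rank (dummy) models by their score on the same benchmark.
def dummy_model(prompt: str) -> str:
    return "negative" if "not" in prompt else "positive"

leaderboard = sorted(
    {"dummy-baseline": run_benchmark(dummy_model)}.items(),
    key=lambda kv: kv[1],
    reverse=True,
)
print(leaderboard)  # [('dummy-baseline', 1.0)]
```

Real benchmarks differ mainly in scale and rigor: much larger datasets, carefully specified prompts and splits, and publicly tracked leaderboards, but the skeleton is the same.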

So when someone says “AI Benchmark”, they might be referring to:

  • Just the dataset (e.g., “I used ImageNet as a benchmark”)
  • A full evaluation suite (e.g., “OpenAI’s GPT-4 was tested on 20+ benchmarks”)
  • The ranking of models (e.g., “This model ranks #1 on SuperGLUE benchmark”)
  • The task definition (e.g., “Benchmarking code generation tasks”)

For example, when someone says a model was evaluated against SuperGLUE (a well-known NLP benchmark), it means the following (a small code sketch follows the list):

  • Tasks: Textual entailment, QA, coreference resolution, etc.
  • Datasets: MultiRC, ReCoRD, BoolQ…
  • Metrics: Accuracy, F1, Exact Match, etc.
  • Benchmark: The whole suite, with standard splits, rules, and a leaderboard
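
As an illustration, here is a hedged sketch of evaluating a trivial placeholder model on one SuperGLUE task, BoolQ, using the benchmark’s standard validation split and accuracy metric. It assumes the Hugging Face `datasets` and `evaluate` packages are installed and that the `super_glue`/`boolq` configuration loads in your library version (newer versions may require extra arguments); the placeholder model is hypothetical and would be replaced by a real LLM call.

```python
from datasets import load_dataset   # Hugging Face datasets (assumed installed)
import evaluate                     # Hugging Face evaluate (assumed installed)

# Dataset + protocol: SuperGLUE's BoolQ task, standard validation split.
boolq = load_dataset("super_glue", "boolq", split="validation")

# Metric: accuracy, as specified by the benchmark for BoolQ.
accuracy = evaluate.load("accuracy")

def placeholder_model(passage: str, question: str) -> int:
    # Stand-in for a real LLM call; BoolQ labels are 0 = no, 1 = yes.
    return 1

predictions = [placeholder_model(ex["passage"], ex["question"]) for ex in boolq]
result = accuracy.compute(predictions=predictions, references=boolq["label"])
print(result)  # e.g. {'accuracy': ...} for this trivial always-yes baseline
```

Scoring the full SuperGLUE suite just repeats this loop over the remaining tasks and datasets, each with its own prescribed metric, and aggregates the results for the leaderboard.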

Important Recent Benchmarks

Here is a list of LLM benchmarks released in the last 2–3 years (2022–2024), plus a few older benchmarks that remain widely used as points of reference. These are research-driven and widely cited or adopted:

| Benchmark | Year | Focus Area | Description |
| --- | --- | --- | --- |
| AgentBench | 2023 | Autonomous Agents | Benchmarks multi-skill AI agents across 8 environments. |
| AGIEval | 2023 | Human Exams | LSAT, SAT, GRE, etc. for assessing real-world performance. |
| AlpacaEval | 2023 | Instruction-Following | Automatic win-rate-based evaluation using GPT-4 as a judge. |
| ARC-AGI | 2024 | AGI Capabilities | Hard version of the Abstraction and Reasoning Corpus. |
| Arena-Hard | 2024 | Hard Dialogue Tasks | Harder conversations from LMSys Chatbot Arena logs. |
| BambooBench | 2024 | Chinese LLMs | Human-annotated multi-turn benchmark (chat-style). |
| Big-Bench | 2022 | Broad benchmark covering 200+ tasks | Collaborative effort to test many LLM capabilities. |
| CMMLU | 2023 | Chinese MMLU | High-quality Chinese academic task benchmark (from Tsinghua). |
| CodeEval / HumanEval-X | 2023 | Code Generation | Used to benchmark multilingual code generation. |
| CoT Collection | 2022–23 | Chain-of-Thought Reasoning | Compiled many datasets to test CoT prompting & robustness. |
| EvalGauntlet | 2024 | Modular Benchmarking | HuggingFace-led initiative with plug-and-play evals. |
| FLASK | 2023 | Few-shot QA | Evaluates knowledge vs skill in few-shot settings. |
| Gaia | 2023 | Scientific Reasoning | Measures scientific knowledge and reasoning from natural science questions. Tests a model's ability to retrieve, synthesize, and reason using real data (e.g., scientific texts, Wikipedia). |
| Gaokao-Bench | 2023 | Exam QA (Chinese) | Chinese national exam benchmark, multidisciplinary. |
| GSM8K-Hard | 2023 | Grade School Math | Harder version of GSM8K for math-focused LLM testing. |
| HELM | 2022–23 | Holistic Evaluation | From Stanford CRFM; assesses models across 16 metrics. |
| LLM Bar | 2023 | Legal Reasoning | Law-focused benchmark, bar exam style. |
| Lmarena | 2024 | Preference/Chat Eval | Evaluates helpfulness, harmlessness, and honesty. Includes crowd-sourced human feedback in model assessments. |
| M3Exam | 2023 | Multi-modal Exams | Combines image + text inputs for exam-like tasks. |
| MATH | 2021–2022 | Math Reasoning | Still actively used for deep math reasoning; basis for newer math evals. |
| MMLU-Pro | 2024 | Advanced Knowledge | Harder variant of MMLU; used to benchmark GPT-4 Turbo. |
| MT-Bench | 2023 | Multi-turn QA evaluation | LLM-as-a-judge for conversational tasks (used by LMSYS). |
| OpenCompass | 2023 | Multi-lingual Eval | Benchmark platform for multi-modal, multi-language evals. |
| RealWorldQA | 2023 | Spatial + Physical Reasoning | Uses visual context from real-world scenarios. |
| ThoughtSource | 2022 | Chain-of-Thought | Chain-of-thought reasoning benchmark dataset. |
| ToolBench | 2023 | Tool Use/Function Calling | Evaluates how well LLMs use APIs/tools to solve tasks. |
| TORA | 2023 | Reasoning & Abstraction | Language-only benchmark designed to replace symbolic reasoning tests. |
| TruthfulQA | 2022 | Truthfulness and avoiding falsehoods | Measures whether models produce misinformation. |
| TÜLU Eval | 2023 | Instruction Eval | Focused on helpfulness, harmlessness, and instruction following. |
| V-Eval | 2023 | Chinese/English Eval | Evaluates instruction-following and QA across domains. |
| WebArena | 2023 | Web Agent Tasks | Complex benchmarks for web-browsing agents (e.g., navigating websites). |
| MMLU | 2020 | Academic knowledge across 57 tasks | Standard for measuring general knowledge. |
| ARC | 2018 | Grade-school science questions | Focus on reasoning over facts. |
| HellaSwag | 2019 | Commonsense reasoning | Hard multiple-choice questions. |
| HumanEval | 2021 | Code generation | OpenAI benchmark for evaluating LLMs in Python coding. |
| GSM8K | 2021 | Grade-school math | Math reasoning benchmark. |

Note: MMLU stands for Massive Multitask Language Understanding, and ARC for the AI2 Reasoning Challenge.
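
Several of the newer benchmarks above (AlpacaEval, MT-Bench, Arena-Hard) rely on an LLM-as-a-judge protocol: a strong model compares two answers to the same prompt, and the benchmark reports a win rate. Below is a rough Python sketch of that scoring loop; `call_judge_model` is a hypothetical stand-in for a real judge (e.g., a GPT-4 API call), replaced here by a toy length heuristic.

```python
from typing import List, Tuple

def call_judge_model(prompt: str, answer_a: str, answer_b: str) -> str:
    """Hypothetical judge: returns 'A', 'B', or 'tie'.

    A real implementation would send a grading prompt to a strong LLM and
    parse its verdict; here we use answer length as a stand-in heuristic.
    """
    if len(answer_a) > len(answer_b):
        return "A"
    if len(answer_b) > len(answer_a):
        return "B"
    return "tie"

def win_rate(pairs: List[Tuple[str, str, str]]) -> float:
    """Fraction of prompts where model A beats model B (ties count as half)."""
    score = 0.0
    for prompt, answer_a, answer_b in pairs:
        verdict = call_judge_model(prompt, answer_a, answer_b)
        if verdict == "A":
            score += 1.0
        elif verdict == "tie":
            score += 0.5
    return score / len(pairs)

pairs = [
    ("Explain overfitting.", "Overfitting is when a model memorises noise.", "It is bad."),
    ("Name a prime number.", "2", "7 is a prime number."),
]
print(win_rate(pairs))  # 0.5 with this toy judge
```

The attraction of this protocol is that it scales to open-ended answers where exact-match metrics break down; the cost is that the judge model's own biases become part of the benchmark.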

Category Summary

Broadly, we can group these benchmarks into the following categories (a short pass@k sketch for the coding category follows the list).

  • Chat & Multi-turn Preference: MT-Bench, Arena-Hard, AlpacaEval, Lmarena
  • Reasoning / Exams: AGIEval, Gaokao-Bench, Gaia, ARC-AGI, MATH, GSM8K, MMLU-Pro
  • Agents & Tools: AgentBench, ToolBench, WebArena
  • Multi-modal: M3Exam, MMMU, RealWorldQA
  • Bias / Truth / Safety: TruthfulQA, ToxiGen, RealToxicityPrompts
  • Coding: HumanEval, CodeEval, HumanEval-X
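
For the coding category, the headline metric is usually pass@k (the probability that at least one of k sampled completions passes the unit tests) rather than plain accuracy. Here is a short sketch of the standard unbiased pass@k estimator introduced with HumanEval; the sample counts are made up for illustration.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0  # not enough failing samples to fill a draw of size k
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 3 problems, 10 samples each, with 0, 2, and 10 correct samples.
per_problem = [pass_at_k(10, c, k=1) for c in (0, 2, 10)]
print(sum(per_problem) / len(per_problem))  # mean pass@1 = 0.4
```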
