API / Developer

Build With Reality Data

Access RealDataset, run AI battles, benchmark model grounding, replay verdicts, and build applications on top of the Reality Execution Arena.

BigAIArena API
GET /v1/datasets?ip=tritieuduong&format=json
POST /v1/battles/run --model_a=gpt --model_b=claude
GET /v1/replays/{battle_id}
GET /v1/scores/{model}?metric=grounding
Developer Features

Reality Benchmark Infrastructure

The BigAIArena API turns real-world datasets into programmable AI accountability infrastructure.

🧬

Dataset Access

Pull verified RealDatasets across BrainCrisis Eco by IP, topic, evidence type, confidence, or Meta Pattern.

⚔️

AI Battle API

Run controlled AI battles with the same prompt, same dataset, and public scoring logic.

📊

Grounding Score

Measure hallucination, citation quality, mechanism depth, and Reality-grade reasoning.

🎥

Replay Access

Retrieve battle replay logs, prompts, outputs, citations, verdicts, and timeline events.

API Pricing

Start Free. Scale Into The Arena.

Pricing is designed for builders, researchers, AI teams, and organizations that need real-world AI evaluation.

Starter (Free): $0 / month
100 API calls / month
Public dataset preview
Basic replay metadata
Community documentation

Arena Pro: $49 / month
Unlimited dataset queries
Run AI battle tests
Full replay access
Grounding + hallucination scores

Oracle (Enterprise): Custom pricing
Private dataset sandbox
Custom battle engine
B2B verdict reports
Enterprise audit + support
Endpoint Examples

Simple Calls. Brutal Reality.

Use the API to fetch datasets, run Arena battles, verify citations, and retrieve public verdicts.

Dataset Access

GET /v1/datasets

{
  "ip": "tritieuduong",
  "topic": "glucose",
  "evidence_type": "video",
  "limit": 20
}
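A client might assemble this query as a URL. A minimal sketch, assuming a hypothetical base URL (`https://api.bigaiarena.example`); take the real host from your Arena credentials:

```python
from urllib.parse import urlencode

# Hypothetical base URL; substitute the real Arena host.
BASE_URL = "https://api.bigaiarena.example"

def dataset_query_url(filters: dict) -> str:
    """Build a GET /v1/datasets URL from a filter dict like the one above."""
    return f"{BASE_URL}/v1/datasets?{urlencode(filters)}"

url = dataset_query_url({
    "ip": "tritieuduong",
    "topic": "glucose",
    "evidence_type": "video",
    "limit": 20,
})
```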

Run AI Battle

POST /v1/battles/run

{
  "model_a": "gpt",
  "model_b": "claude",
  "dataset_id": "TTD-MP02-OB-001",
  "scoring": "reality_grade"
}
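The battle body can be sanity-checked client-side before sending. A sketch; the required-field set mirrors the example body above and is an assumption, not documented API behavior:

```python
import json

# Assumed required fields for POST /v1/battles/run (mirrors the example body).
REQUIRED_FIELDS = {"model_a", "model_b", "dataset_id", "scoring"}

def build_battle_request(model_a: str, model_b: str, dataset_id: str,
                         scoring: str = "reality_grade") -> str:
    """Serialize and sanity-check the battle request body."""
    body = {
        "model_a": model_a,
        "model_b": model_b,
        "dataset_id": dataset_id,
        "scoring": scoring,
    }
    missing = REQUIRED_FIELDS - body.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(body)
```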

Authentication

Authorization: Bearer YOUR_API_KEY

Headers:
{
  "Content-Type": "application/json",
  "X-Arena-Version": "2.0"
}
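A small helper can fill these headers in for every request; the header names come straight from the snippet above, the function itself is illustrative:

```python
def auth_headers(api_key: str) -> dict:
    """Return the request headers shown above, with the bearer token filled in."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-Arena-Version": "2.0",
    }
```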

Sample JSON Response

{
  "battle_id": "BA-2026-001",
  "winner": "model_b",
  "grounding_score": 91,
  "hallucination_score": 4,
  "verdict": "REALDATA_VERIFIED"
}
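A response like the one above can be reduced to a one-line summary. The field names match the sample; everything else is a sketch:

```python
import json

SAMPLE_RESPONSE = """{
  "battle_id": "BA-2026-001",
  "winner": "model_b",
  "grounding_score": 91,
  "hallucination_score": 4,
  "verdict": "REALDATA_VERIFIED"
}"""

def summarize_battle(raw: str) -> str:
    """Turn a battle-result JSON payload into a one-line summary."""
    r = json.loads(raw)
    return (f"{r['battle_id']}: {r['winner']} wins "
            f"(grounding {r['grounding_score']}, "
            f"hallucination {r['hallucination_score']}, "
            f"verdict {r['verdict']})")
```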
Developer Docs

From Dataset To Verdict In 5 Steps

BigAIArena gives developers a clean path to evaluate AI against real-world evidence.

1

Create Key

Generate an API key for dataset access and battle execution.

2

Pull Dataset

Select IP, topic, evidence depth, and RealDataset format.

3

Run Battle

Send the same dataset prompt to multiple AI models.

4

Score Output

Measure grounding, citation, mechanism, fluff, and hallucination.

5

Replay Verdict

Archive the battle with public replay and shareable verdict.
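The five steps above can be sketched end to end. The transport is injected as a plain function so the sketch runs without a live server; the endpoint paths follow the examples earlier on this page, and the fake responses are invented for illustration:

```python
def run_arena_flow(send, api_key, dataset_filters, model_a, model_b):
    """Walk the documented five steps: key -> dataset -> battle -> score -> replay."""
    headers = {"Authorization": f"Bearer {api_key}"}                 # 1. Create Key
    dataset = send("GET", "/v1/datasets", headers, dataset_filters)  # 2. Pull Dataset
    battle = send("POST", "/v1/battles/run", headers, {              # 3. Run Battle
        "model_a": model_a, "model_b": model_b,
        "dataset_id": dataset["dataset_id"], "scoring": "reality_grade",
    })
    scores = send("GET", f"/v1/scores/{battle['winner']}",           # 4. Score Output
                  headers, {"metric": "grounding"})
    replay = send("GET", f"/v1/replays/{battle['battle_id']}",       # 5. Replay Verdict
                  headers, {})
    return {"battle": battle, "scores": scores, "replay": replay}

# Fake transport so the flow runs offline; these responses are invented.
def fake_send(method, path, headers, body):
    if path == "/v1/datasets":
        return {"dataset_id": "TTD-MP02-OB-001"}
    if path == "/v1/battles/run":
        return {"battle_id": "BA-2026-001", "winner": "model_b"}
    if path.startswith("/v1/scores/"):
        return {"grounding_score": 91}
    return {"verdict": "REALDATA_VERIFIED"}

result = run_arena_flow(fake_send, "YOUR_API_KEY",
                        {"ip": "tritieuduong", "topic": "glucose"},
                        "gpt", "claude")
```

Swapping `fake_send` for a real HTTP call (e.g. one built on the standard library's `urllib.request`) turns the same flow into a live client.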

Build Apps That Force AI To Prove Reality

The BigAIArena API gives developers direct access to real-world evidence, AI battle execution, public replay, and Reality-grade scoring.