Deep research APIs are a new category. OpenAI, Perplexity, Google, and startups like Parallel are shipping systems that can browse the web, synthesize sources, and return cited answers in a single API call. These tools are powerful. Comparing them is hard. Pricing is scattered across docs. Capabilities are buried in changelogs. Benchmarks are inconsistent, missing, or not public. The Deep Research API Index is an independent project to track, compare, and actually evaluate these APIs as the space evolves.

What's in the Index

Comparison Table

Side-by-side metrics: pricing, context windows, rate limits, output formats, citations.

Evaluation Cockpit

Run the same prompt across providers. Compare speed, depth, and reasoning.
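
To make that concrete, here is a minimal sketch of the cockpit's core loop, assuming each provider exposes a simple HTTP endpoint. The endpoint URLs, payload shape, and response fields (`answer`, `citations`) are illustrative placeholders, not any provider's real API.

```python
"""Sketch of a cross-provider comparison run. All provider configs
and payload shapes below are hypothetical stand-ins."""

import time
import requests  # pip install requests

# Hypothetical provider configs -- swap in real endpoints and keys.
PROVIDERS = {
    "provider_a": {
        "url": "https://api.provider-a.example/v1/research",
        "headers": {"Authorization": "Bearer <PROVIDER_A_KEY>"},
    },
    "provider_b": {
        "url": "https://api.provider-b.example/v1/research",
        "headers": {"Authorization": "Bearer <PROVIDER_B_KEY>"},
    },
}

def run_prompt(prompt: str) -> list[dict]:
    """Send the same prompt to every provider and record latency,
    answer length, and citation count for side-by-side comparison."""
    results = []
    for name, cfg in PROVIDERS.items():
        start = time.monotonic()
        resp = requests.post(
            cfg["url"],
            headers=cfg["headers"],
            json={"query": prompt},  # payload shape is an assumption
            timeout=600,  # deep research calls can run for minutes
        )
        elapsed = time.monotonic() - start
        body = resp.json()
        results.append({
            "provider": name,
            "latency_s": round(elapsed, 1),
            "answer_chars": len(body.get("answer", "")),
            "citations": len(body.get("citations", [])),
        })
    return results

if __name__ == "__main__":
    for row in run_prompt("What changed in EU AI Act enforcement in 2025?"):
        print(row)
```

The real cockpit also compares depth and reasoning quality, which this sketch reduces to raw answer length and citation counts.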

Glossary

Clear definitions for every metric. What "grounding" means. How rate limits work.

Who's Behind This

Vani

I'm Vani, a Math + Informatics student at UW, currently a TA for Data Structures & Algorithms (CSE 373) and the incoming instructor for the course in Summer 2026.

I built this because I kept running into the same problem: trying to pick the right deep research API and finding zero serious, neutral comparisons. So I made the resource I wished existed.

Methodology

All data comes from official documentation, published benchmarks, and direct API testing. I don't take money from providers. If something is wrong, missing, or outdated, I want to know, and I'll fix it publicly.

Independence Note

This is an independent project. I'm not affiliated with OpenAI, Perplexity, Google, Parallel, or any other provider listed here.

Get in Touch

Building with these tools? Noticed an error? Want to debate evaluation criteria?