georgewritescode posted an update 8 months ago
Announcing Artificial Analysis Long Context Reasoning (AA-LCR), a new benchmark that evaluates long-context performance by testing reasoning across multiple long documents (~100k tokens)

The focus of AA-LCR is to replicate real knowledge work and reasoning tasks, testing capabilities critical to modern AI applications spanning document analysis, codebase understanding, and complex multi-step workflows.

AA-LCR comprises 100 hard text-based questions, each requiring reasoning across multiple real-world documents that together represent ~100k input tokens. Questions are designed so that answers cannot be looked up directly but must be reasoned from multiple information sources, with human testing verifying that each question requires genuine inference rather than retrieval.
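To make the setup concrete, a minimal sketch of how a multi-document question might be assembled into a single prompt. The field layout and document contents below are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical sketch: several source documents concatenated ahead of a
# single question, as in AA-LCR-style multi-document reasoning.

def build_prompt(documents: list[str], question: str) -> str:
    """Join the source documents, then append the question."""
    doc_sections = "\n\n".join(
        f"--- Document {i + 1} ---\n{text}"
        for i, text in enumerate(documents)
    )
    return f"{doc_sections}\n\nQuestion: {question}\nAnswer:"

# Toy example: the answer requires combining facts from both documents,
# not retrieving a single passage.
docs = [
    "Revenue rose 12% in FY2023.",
    "Operating costs fell 3% in FY2023.",
]
prompt = build_prompt(docs, "Did margins likely expand in FY2023?")
print(prompt)
```

In the real benchmark the concatenated documents run to ~100k tokens, so the same structure holds at a much larger scale.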

Key takeaways:
➀ Today’s leading models achieve ~70% accuracy: the top three places go to OpenAI o3 (69%), xAI Grok 4 (68%) and Qwen3 235B 2507 Thinking (67%)

➀ 👀 We also already have gpt-oss results! The 120B performs close to o4-mini (high), in line with OpenAI's claims about model performance. We will be following up shortly with an Intelligence Index score for the models.

➀ 100 hard text-based questions spanning 7 categories of documents (Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials and Survey Reports)

➀ ~100k tokens of input per question, requiring models to support a minimum 128K context window to score on this benchmark

➀ ~3M total unique input tokens spanning ~230 documents to run the benchmark (output tokens typically vary by model)
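The 128K context-window requirement above can be sanity-checked with simple arithmetic. A minimal sketch, using the common rough heuristic of ~4 characters per token (real counts depend on each model's tokenizer) and an assumed output budget:

```python
# Feasibility check: does a ~100k-token input fit in a given context
# window with headroom left for the model's output?

CHARS_PER_TOKEN = 4  # crude heuristic, an assumption


def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN


def fits_in_context(input_tokens: int,
                    context_window: int,
                    output_budget: int = 8_000) -> bool:
    """True if the input plus an output budget fits in the window."""
    return input_tokens + output_budget <= context_window


# ~100k input tokens against a 128K window leaves ~20k tokens of headroom,
# while a 32K window cannot accommodate the benchmark at all.
print(fits_in_context(100_000, 128_000))  # True
print(fits_in_context(100_000, 32_000))   # False
```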

We’re adding AA-LCR to the Artificial Analysis Intelligence Index, and taking the version number to v2.2. Artificial Analysis Intelligence Index v2.2 now includes: MMLU-Pro, GPQA Diamond, AIME 2025, IFBench, LiveCodeBench, SciCode and AA-LCR.
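For illustration, combining the seven v2.2 benchmark scores into a single index could look like the sketch below. The post does not specify the Index's actual weighting or aggregation method; the unweighted mean and the placeholder scores here are assumptions:

```python
# Illustrative only: aggregate per-benchmark scores into one index via
# an unweighted mean. The real Intelligence Index methodology may differ.

V22_BENCHMARKS = [
    "MMLU-Pro", "GPQA Diamond", "AIME 2025",
    "IFBench", "LiveCodeBench", "SciCode", "AA-LCR",
]


def intelligence_index(scores: dict[str, float]) -> float:
    """Equal-weight average over the v2.2 benchmark suite (assumed)."""
    missing = [b for b in V22_BENCHMARKS if b not in scores]
    if missing:
        raise ValueError(f"missing benchmark scores: {missing}")
    return sum(scores[b] for b in V22_BENCHMARKS) / len(V22_BENCHMARKS)


# Placeholder scores, not real results.
example_scores = {b: 0.65 for b in V22_BENCHMARKS}
index = intelligence_index(example_scores)
print(round(index, 2))  # 0.65
```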

Link to dataset: ArtificialAnalysis/AA-LCR