---
license: mit
task_categories:
- video-text-to-text
---

# RMOT26

RMOT26 is a large-scale benchmark for **Query-Driven Multi-Object Tracking**, introduced in the paper [QTrack: Query-Driven Reasoning for Multi-modal MOT](https://huggingface.co/papers/2603.13759).

- **Project Page:** [https://gaash-lab.github.io/QTrack/](https://gaash-lab.github.io/QTrack/)
- **Repository:** [https://github.com/gaash-lab/QTrack](https://github.com/gaash-lab/QTrack)
- **Paper:** [https://arxiv.org/abs/2603.13759](https://arxiv.org/abs/2603.13759)

## Description

Multi-object tracking (MOT) has traditionally focused on estimating trajectories of all objects in a video. RMOT26 introduces a query-driven tracking paradigm that formulates tracking as a spatiotemporal reasoning problem conditioned on natural language queries. 

Given a reference frame, a video sequence, and a textual query, the goal is to localize and track only the target(s) specified in the query while maintaining temporal coherence and identity consistency. RMOT26 features grounded queries and sequence-level splits to prevent identity leakage and enable robust evaluation of generalization.
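To make the task formulation concrete, the sketch below models one query-driven tracking sample as a small Python data structure. All field names, the `QuerySample` class, and the toy values are illustrative assumptions, not the dataset's actual schema or loading API.

```python
from dataclasses import dataclass, field

@dataclass
class QuerySample:
    """Hypothetical container for one query-driven tracking sample."""
    query: str            # natural-language description of the target(s)
    reference_frame: int  # index of the frame the query is grounded in
    frames: list          # video frames (e.g., file paths or arrays)
    # Per-frame boxes for the queried target(s): frame_idx -> {track_id: (x, y, w, h)}
    annotations: dict = field(default_factory=dict)

    def targets_at(self, frame_idx):
        """Return the identity-consistent boxes for the queried target(s) at a frame."""
        return self.annotations.get(frame_idx, {})

# Toy example: one queried target tracked across two frames under a stable track id,
# illustrating the identity-consistency requirement described above.
sample = QuerySample(
    query="the red car turning left",
    reference_frame=0,
    frames=["frame_000.jpg", "frame_001.jpg"],
    annotations={0: {7: (10, 20, 50, 30)}, 1: {14: None} and {7: (14, 22, 50, 30)}},
)

# The same track id persists across frames for the queried target.
assert sample.targets_at(0).keys() == sample.targets_at(1).keys()
```

Note that only the target(s) matching the query carry annotations; untracked objects in the scene are deliberately absent, which is what distinguishes this setting from classical all-object MOT.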

## Citation

```bibtex
@article{ashraf2026qtrack,
  title={QTrack: Query-Driven Reasoning for Multi-modal MOT},
  author={Ashraf, Tajamul and Tariq, Tavaheed and Yadav, Sonia and Ul Riyaz, Abrar and Tak, Wasif and Abdar, Moloud and Bashir, Janibul},
  journal={arXiv preprint arXiv:2603.13759},
  year={2026}
}
```