Row preview (first 10 rows of the `train` split; example values are from row 0):

| Column | Type | Example (row 0) |
|---|---|---|
| episode | int64 | 0 |
| episode_seed | int64 | 1,608,637,542 |
| split | string | train |
| timestamp | string | 2025-09-21T10:01:36.740924 |
| batch_dir | string | /home/zwcolin/projects/VLM_Gym_Episodes_v5/colorization/shards_20250921_100120/train/batch_000000 |
| init_args | dict | {"env_repr":"<OrderEnforcing<ColorCoordinatorEnv<colorization/ColorizationEnv-v2>>>","vlm_repr":"<gy(...TRUNCATED) |
| run_args | dict | {"render":false,"seed":1608637542,"verbose":false,"save_verbose":false,"elicit_verbalization":true,"(...TRUNCATED) |
| stats | dict | {"step": 8, "reward": 1, "terminated": true, "truncated": false} |
| history | list | [{"step":0,"prompt":"You are performing a color-matching task. You see two images side by side:\n- L(...TRUNCATED) |
| extra_state | null | null |
| hash | string | dcdafb4554ebb929f0e697d54ff027f72f1702fe2a00d16b69596c1a2ce842f9 |

The remaining preview rows (episodes 1-9) follow the same pattern: all are train-split colorization episodes from the same batch directory, each with its own seed, timestamp, and hash, and each completing in 6-12 steps with reward 1, terminated = true, and truncated = false.
VisGym Dataset
Project Page | Paper | GitHub
VisGym consists of 17 diverse, long-horizon environments designed to systematically evaluate, diagnose, and train Vision-Language Models (VLMs) on visually interactive tasks. In these environments, agents must select actions conditioned on both their past actions and observation history, challenging their ability to handle complex, multimodal sequences.
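To make this interaction pattern concrete, here is a minimal Gymnasium-style rollout sketch in which the agent's action depends on the accumulated observation/action history. The `choose_action` helper is hypothetical and stands in for a VLM policy; the loop uses the built-in CartPole-v1 only so it runs out of the box, whereas a VisGym environment (e.g. `colorization/ColorizationEnv-v2`, as seen in the `init_args` of the row preview above) would slot in the same way once the VisGym package registers it.

```python
import gymnasium as gym

# Hypothetical stand-in for a VLM policy; in VisGym the action would come
# from a vision-language model conditioned on the accumulated history.
def choose_action(env, history):
    return env.action_space.sample()

# CartPole-v1 is used here only so the sketch runs as-is; a registered
# VisGym environment would be created the same way via gym.make(...).
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=1608637542)  # seed format matches episode_seed above

history = []  # past (observation, action) pairs the agent conditions on
terminated = truncated = False
while not (terminated or truncated):
    action = choose_action(env, history)
    next_obs, reward, terminated, truncated, info = env.step(action)
    history.append((obs, action))
    obs = next_obs
env.close()
```

The `stats` and `history` columns in the row preview above record exactly the quantities this loop produces: the final step count, reward, and termination flags, plus the per-step prompt/response trace.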
Dataset Summary
This dataset contains trajectories and interaction data generated from the VisGym suites, intended for training and benchmarking multimodal agents. The environments are designed to be:
- Diverse: Covering 17 distinct task categories.
- Customizable: Allowing for various configurations of task difficulty and visual settings.
- Scalable: Suitable for large-scale training of VLMs and Reinforcement Learning agents.
Usage
You can download the dataset assets and metadata using the huggingface-cli:
```bash
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# Download the dataset locally.
# This downloads the 'assets/' and 'metadata/' folders into the local directory.
mkdir -p training_dataset
huggingface-cli download VisGym/visgym_data --repo-type dataset --local-dir ./training_dataset
```
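Once downloaded, an individual episode record can be inspected with a few lines of Python. This is a minimal sketch under stated assumptions: it assumes episode metadata is stored as JSON files somewhere under `metadata/` (the exact layout is documented in the training README linked below), and the field names come from the row preview above.

```python
import json
from pathlib import Path

# Assumption: episode records are JSON files under metadata/; adjust the
# glob to the actual layout described in the training README.
record_path = next(Path("training_dataset/metadata").rglob("*.json"))
record = json.loads(record_path.read_text())

# Field names taken from the row preview above.
print("episode:", record["episode"], "seed:", record["episode_seed"])
print("stats:", record["stats"])          # e.g. {"step": 8, "reward": 1, ...}
print("turns:", len(record["history"]))   # one entry per interaction step
```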
See the training README for more usage details: https://github.com/visgym/VisGym/blob/main/visgym_training/README.md
Citation
If you use this dataset, please cite:
@article{wang2026visgym,
title = {VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents},
author = {Wang, Zirui and Zhang, Junyi and Ge, Jiaxin and Lian, Long and Fu, Letian and Dunlap, Lisa and Goldberg, Ken and Wang, Xudong and Stoica, Ion and Chan, David M. and Min, Sewon and Gonzalez, Joseph E.},
journal = {arXiv preprint arXiv:2601.16973},
year = {2026},
url = {https://arxiv.org/abs/2601.16973}
}