SynCom
SynCom (Synthetic Composition) is a synthetic dataset for realistic 3D object-scene composition. The task is to reconstruct objects and scenes from separate multi-view image sets, insert reconstructed 3D objects into 3D scenes, and evaluate whether the inserted objects match the scene appearance and cast physically plausible shadows.
Compared with real-world captures, SynCom provides controlled object placement, known camera parameters, and rendered ground truth for composed scenes, making it suitable for quantitative evaluation of object insertion, novel-view synthesis, relighting, inverse rendering, and 3D composition methods.
Dataset Details
SynCom is rendered with the Cycles engine in Blender. The dataset uses 4 object assets and 4 scene assets sourced from BlenderKit, with additional manual scene editing to improve layout realism and camera accessibility. Object illumination and relighting assets use HDRI environment maps from PolyHaven.
The released assets are organized into four main collections:
| Collection | Contents | Views and resolution |
|---|---|---|
| `object/` | Multi-view renders of 4 standalone objects: bottle, horse, kettle, toy. Includes RGB images, EXR images, material/depth/mask passes, camera metadata, and random point clouds for 3DGS initialization. | 200 train views and 100 test views per object at 800 x 800. |
| `scene/` | Multi-view renders of 4 empty scenes: artwall, attic, forest, room. Includes RGB images, HDR EXR renders, camera metadata, and COLMAP SfM point clouds. | 72 test views per scene at 1280 x 720. The released train split contains 120 views for artwall, forest, and room, and 125 views for attic. |
| `composition/` | Ground-truth composed scenes for all 16 object-scene pairs, for example `room_with_toy`. Includes rendered images, EXR images, camera metadata, object transforms, and placeholder point clouds. | 72 views per pair at 1280 x 720, aligned with the corresponding scene test cameras. |
| `object_relit/` | Relit object renders under the billiard, fireplace, lakeside, and snowy HDRI environments, plus stored environment maps. | 100 images per object-environment pair. |
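As a quick orientation, the 16 composition folder names follow the `<scene>_with_<object>` convention and can be enumerated from the asset names above. This is an illustrative snippet; the function name is our own, but the scene and object lists are exactly those in the table:

```python
# Enumerate the 16 composition folders (4 scenes x 4 objects),
# following the <scene>_with_<object> naming, e.g. room_with_toy.
SCENES = ["artwall", "attic", "forest", "room"]
OBJECTS = ["bottle", "horse", "kettle", "toy"]

def composition_names(scenes=SCENES, objects=OBJECTS):
    """Return the folder names expected under composition/."""
    return [f"{s}_with_{o}" for s in scenes for o in objects]

pairs = composition_names()
print(len(pairs))   # 16
print(pairs[0])     # artwall_with_bottle
```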
Intended Uses
This dataset is intended for research on:
- realistic 3D object-scene composition
- object insertion and scene editing
- novel-view synthesis
- relighting and shadow consistency
- inverse rendering
- 3D reconstruction from multi-view images
Dataset Structure
The repository follows this high-level structure:
```
.
├── dataset.png
├── object/
│   ├── bottle/
│   ├── horse/
│   ├── kettle/
│   └── toy/
│       ├── points3d.ply
│       ├── train/
│       │   ├── cameras.json
│       │   ├── images/
│       │   ├── albedo/
│       │   ├── ao/
│       │   ├── depth/
│       │   ├── mask/
│       │   ├── metallic/
│       │   ├── normal/
│       │   └── roughness/
│       └── test/
│           ├── cameras.json
│           ├── images/
│           ├── albedo/
│           ├── ao/
│           ├── depth/
│           ├── mask/
│           ├── metallic/
│           ├── normal/
│           └── roughness/
├── scene/
│   ├── artwall/
│   ├── attic/
│   ├── forest/
│   └── room/
│       ├── points3d.ply
│       ├── train/
│       │   ├── cameras.json
│       │   ├── images/
│       │   └── hdr/
│       └── test/
│           ├── cameras.json
│           ├── images/
│           └── hdr/
├── composition/
│   └── <scene>_with_<object>/
│       ├── cameras.json
│       ├── transform.json
│       ├── points3d.ply
│       └── images/
├── object_relit/
│   ├── envmaps/
│   └── <object>/
│       ├── cameras.json
│       ├── billiard/
│       ├── fireplace/
│       ├── lakeside/
│       └── snowy/
└── demo/
    └── <scene>_with_<object>/
        ├── cameras.json
        ├── transform.json
        └── points3d.ply
```
Data Fields
The main files and fields are:
- `images/*.png`: tone-mapped RGB renders.
- `images/*.exr`: high-dynamic-range rendered images, where provided.
- `hdr/*.exr`: HDR scene renders.
- `albedo/*.exr`, `ao/*.exr`, `depth/*.exr`, `mask/*.exr`, `metallic/*.exr`, `normal/*.exr`, `roughness/*.exr`: object render passes.
- `cameras.json`: camera metadata for each view.
- `transform.json`: object placement parameters for a composed scene, including `rotation_type`, `rotation`, `location`, and `scale`.
- `points3d.ply`: point cloud file. For `scene/`, this is an SfM point cloud reconstructed with COLMAP, following the common input convention used by the 3D Gaussian Splatting repository. For `object/`, this file contains randomly sampled points used to initialize 3DGS reconstruction, rather than geometric ground truth. For `composition/`, this file is only a placeholder and should not be treated as reconstructed geometry.
Each cameras.json file has the following structure:
```json
{
  "scene": "room_with_toy",
  "cameras": {
    "0000": {
      "name": "0000",
      "intr": [1044.3851, 1044.3851, 640.0, 360.0],
      "extr": [
        -0.7071, 0.1005, 0.6999, -1.2645,
         0.7071, 0.1005, 0.6999, -0.3145,
         0.0,    0.9898, -0.1422, 0.6270,
         0.0,    0.0,     0.0,     1.0
      ],
      "width": 1280,
      "height": 720
    }
  }
}
```
`intr` stores pinhole camera intrinsics as `[fx, fy, cx, cy]`. `extr` stores a flattened 4 x 4 camera extrinsic matrix in row-major order.
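A minimal sketch of parsing these files in Python. The function names are our own; only the field layout shown above is assumed:

```python
import json

def parse_cameras(meta):
    """Turn a loaded cameras.json dict into per-view intrinsics K and 4x4 extrinsics."""
    cams = {}
    for name, cam in meta["cameras"].items():
        fx, fy, cx, cy = cam["intr"]
        # 3x3 pinhole intrinsic matrix from [fx, fy, cx, cy]
        K = [[fx, 0.0, cx],
             [0.0, fy, cy],
             [0.0, 0.0, 1.0]]
        # un-flatten the row-major 4x4 extrinsic matrix
        e = cam["extr"]
        E = [e[i * 4:(i + 1) * 4] for i in range(4)]
        cams[name] = {"K": K, "extr": E,
                      "width": cam["width"], "height": cam["height"]}
    return cams

def load_cameras(path):
    """Load and parse a SynCom cameras.json file."""
    with open(path) as f:
        return parse_cameras(json.load(f))
```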
Dataset Creation
Data Sources
The 3D object and scene assets are based on free BlenderKit assets under Royalty-Free or CC0 licenses. The original 3D assets are not redistributed in this repository. The scenes were manually edited by adjusting layouts and adding or modifying content to create more realistic environments and valid camera trajectories.
HDRI environment maps are sourced from PolyHaven.
Rendering
Objects are rendered individually using HDRI environment lighting. The object train split samples camera positions over the upper hemisphere, while the test split uses views from three latitude circles around each object.
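The dataset does not ship the sampling code, but the described camera geometry can be sketched as follows; the radius and elevation values here are illustrative, not the ones used for rendering:

```python
import math

def hemisphere_point(radius, azimuth, elevation):
    """Point on the upper hemisphere; elevation in [0, pi/2], azimuth in [0, 2*pi)."""
    return (radius * math.cos(elevation) * math.cos(azimuth),
            radius * math.cos(elevation) * math.sin(azimuth),
            radius * math.sin(elevation))

def latitude_circle(radius, elevation, n_views):
    """Evenly spaced camera positions on one latitude circle,
    as used (three times, at different elevations) for the object test split."""
    return [hemisphere_point(radius, 2.0 * math.pi * i / n_views, elevation)
            for i in range(n_views)]
```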
Scenes are rendered by placing cameras on a manually adjusted virtual ellipsoid for each environment. This avoids invalid views caused by camera collisions or strong occlusions from scene structures. Test views are sampled on a spiral path on a slightly smaller concentric ellipsoid.
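A spiral path on an ellipsoid, as used for the scene test views, can be sketched like this; the semi-axes, number of turns, and elevation range are assumptions for illustration, since the per-scene ellipsoids were adjusted manually:

```python
import math

def ellipsoid_spiral(a, b, c, n_views, turns=3, min_elev=0.1, max_elev=1.2):
    """Sample n_views positions along a spiral on an ellipsoid with
    semi-axes (a, b, c); elevation sweeps from min_elev to max_elev (radians)."""
    pts = []
    for i in range(n_views):
        t = i / max(n_views - 1, 1)
        elev = min_elev + t * (max_elev - min_elev)
        azim = 2.0 * math.pi * turns * t
        pts.append((a * math.cos(elev) * math.cos(azim),
                    b * math.cos(elev) * math.sin(azim),
                    c * math.sin(elev)))
    return pts
```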
Compositions are generated by manually placing each object into each scene with specified 3D location, orientation, and scale. The composed ground-truth images use the same 72 test viewpoints as the corresponding empty scene.
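Applying a stored placement amounts to building a 4x4 object-to-scene matrix from `location`, `rotation`, and `scale`. The exact rotation encoding depends on `rotation_type`, which is not specified here, so this sketch assumes XYZ Euler angles in radians and uniform scale; the helper names are our own:

```python
import math

def euler_xyz_to_matrix(rx, ry, rz):
    """Rotation matrix for XYZ Euler angles in radians (an assumed convention)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]]
    Ry = [[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]]
    Rz = [[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(Rz, matmul(Ry, Rx))

def object_to_scene_matrix(location, rotation, scale):
    """4x4 object-to-scene transform: translate * rotate * uniform-scale."""
    R = euler_xyz_to_matrix(*rotation)
    M = [[R[i][j] * scale for j in range(3)] + [location[i]] for i in range(3)]
    M.append([0.0, 0.0, 0.0, 1.0])
    return M
```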
License
The rendered images in this repository are released under the CC BY 4.0 license.
This dataset contains rendered images generated from BlenderKit assets under Royalty-Free or CC0 licenses. The original 3D assets are not included and are not redistributed. HDRI environment maps are sourced from PolyHaven. Please also respect the licenses and terms of the original asset providers when using related materials.
Acknowledgements
We thank BlenderKit, PolyHaven, Blender, COLMAP, and their contributors for providing the assets and tools that made this dataset possible.
Citation
If you find the SynCom dataset useful in your work, please cite our paper:
```bibtex
@inproceedings{gao2026comgs,
  title={Com{GS}: Efficient 3D Object-Scene Composition via Surface Octahedral Probes},
  author={Jian Gao and Mengqi Yuan and Yifei Zeng and Chang Zeng and Zhihao Li and Zhenyu Chen and Weichao Qiu and Xiao-Xiao Long and Hao Zhu and Xun Cao and Yao Yao},
  booktitle={ICLR},
  year={2026}
}
```
