GenMask: Adapting DiT for Segmentation via Direct Mask Generation
Abstract
A generative model trained directly for segmentation outperforms methods that adapt pretrained generative features indirectly; a novel timestep sampling strategy enables joint training for both image generation and binary mask synthesis.
Recent approaches to segmentation have leveraged pretrained generative models as feature extractors, treating segmentation as a downstream adaptation task via indirect feature retrieval. This implicit use suffers from a fundamental representational misalignment, and it depends heavily on indirect feature-extraction pipelines that complicate the workflow and limit adaptation. In this paper, we argue that instead of indirect adaptation, segmentation should be trained directly in a generative manner. We identify a key obstacle to this unified formulation: VAE latents of binary masks are sharply distributed, noise-robust, and linearly separable, unlike natural-image latents. To bridge this gap, we introduce a timestep sampling strategy for binary masks that emphasizes extreme noise levels for segmentation and moderate noise levels for image generation, enabling harmonious joint training. We present GenMask, a DiT trained to generate black-and-white segmentation masks as well as colorful RGB images under the original generative objective. GenMask preserves the original DiT architecture while removing the need for feature-extraction pipelines tailored to segmentation. Empirically, GenMask attains state-of-the-art performance on referring and reasoning segmentation benchmarks, and ablations quantify the contribution of each component.
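Below is a minimal sketch of how such a noise-level split could be implemented for joint training. The Beta parameterizations, function name, and shapes are illustrative assumptions, not the paper's exact recipe: a U-shaped distribution pushes mask timesteps toward the extremes, while a bell-shaped one keeps image timesteps moderate.

```python
# Hypothetical sketch of the mixed timestep sampling idea (not the paper's
# exact recipe): mask latents get timesteps near the extremes of [0, 1],
# image latents get moderate ones.
import torch

def sample_timesteps(is_mask: torch.Tensor) -> torch.Tensor:
    """Return one diffusion timestep t in [0, 1] per sample.

    is_mask: bool tensor of shape (batch,); True where the latent encodes
    a binary segmentation mask rather than a natural image.
    """
    n = is_mask.shape[0]
    # U-shaped Beta(0.5, 0.5) concentrates mass near t=0 and t=1, i.e. the
    # extreme noise levels the abstract says suit binary-mask latents.
    t_mask = torch.distributions.Beta(0.5, 0.5).sample((n,))
    # Bell-shaped Beta(2, 2) concentrates mass near t=0.5, the moderate-noise
    # regime where natural-image generation gets most of its training signal.
    t_image = torch.distributions.Beta(2.0, 2.0).sample((n,))
    return torch.where(is_mask, t_mask, t_image)

# Usage: in a joint batch, each latent is noised at its own timestep before
# the DiT predicts the denoising target under the original generative objective.
t = sample_timesteps(torch.tensor([True, False, True, False]))
```

Sampling timesteps per sample, rather than per batch, lets mask and image latents share every optimization step while each sees the noise regime it learns from best.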
Community
This paper proposes treating segmentation as direct mask generation in RGB space using a DiT, rather than indirectly extracting features from diffusion models.
To make this work, it shows that binary-mask VAE latents behave very differently from image latents, and it introduces an extreme-noise timestep sampling strategy that bridges this gap for joint training.
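The claim that mask latents stay linearly separable under noise can be checked with a simple diagnostic like the one below. This is not from the paper: the "latents" are synthetic stand-ins for the sharply bimodal black/white mask latents, and a real check would substitute the DiT's VAE encodings.

```python
# Toy linear-probe diagnostic: do sharply bimodal "mask latents" remain
# linearly separable after heavy noising? Synthetic data stands in for
# real VAE latents of binary masks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two tight clusters mimic the sharply distributed foreground/background
# latents described in the abstract.
fg = rng.normal(loc=2.0, scale=0.3, size=(500, 4))
bg = rng.normal(loc=-2.0, scale=0.3, size=(500, 4))
z = np.concatenate([fg, bg])
y = np.concatenate([np.ones(500), np.zeros(500)])

for t in (0.1, 0.5, 0.9):  # noise level, analogous to a diffusion timestep
    z_t = np.sqrt(1 - t) * z + np.sqrt(t) * rng.normal(size=z.shape)
    acc = LogisticRegression().fit(z_t, y).score(z_t, y)
    print(f"t={t:.1f}  linear-probe accuracy={acc:.3f}")
```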
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- InverFill: One-Step Inversion for Enhanced Few-Step Diffusion Inpainting (2026)
- Rethinking Vector Field Learning for Generative Segmentation (2026)
- Rethinking MLLM Itself as a Segmenter with a Single Segmentation Token (2026)
- Improving Image-to-Image Translation via a Rectified Flow Reformulation (2026)
- SegviGen: Repurposing 3D Generative Model for Part Segmentation (2026)
- MagicSeg: Open-World Segmentation Pretraining via Counterfactural Diffusion-Based Auto-Generation (2026)
- OneWorld: Taming Scene Generation with 3D Unified Representation Autoencoder (2026)