---
license: apache-2.0
tags:
- image-editing
- image-variation
- identity-preservation
- diffusion
- image-generation
- aesthetics
- art
task_categories:
- image-to-image
- text-to-image
size_categories:
- 1K
---

## Minimal Usage Example (Colab)

The snippet below loads the dataset from the Hugging Face Hub and displays random **original / variant image pairs** side by side.

```python
from datasets import load_dataset
import matplotlib.pyplot as plt
import random
import io
from PIL import Image as PILImage


def to_pil(x):
    """
    Convert Hugging Face Image feature outputs (or streaming outputs)
    into a real PIL.Image.Image for matplotlib.

    Supports:
      - already-PIL
      - dict with 'bytes'
      - dict with 'path'
      - plain path string
    """
    if isinstance(x, PILImage.Image):
        return x
    if isinstance(x, dict):
        if x.get("bytes") is not None:
            return PILImage.open(io.BytesIO(x["bytes"])).convert("RGB")
        if x.get("path") is not None:
            return PILImage.open(x["path"]).convert("RGB")
    if isinstance(x, str):
        return PILImage.open(x).convert("RGB")
    raise TypeError(f"Unsupported image type: {type(x)}; value={repr(x)[:200]}")


# Stream (fast startup, no full split materialization)
ds_stream = load_dataset(
    "moonworks/lunara-aesthetic-image-variations",
    split="train",
    streaming=True,
)

buffer_size = 300
buffer = list(ds_stream.take(buffer_size))

k = 5
k = min(k, len(buffer))
samples = random.sample(buffer, k)

fig, axes = plt.subplots(k, 2, figsize=(12, 3.6 * k))
if k == 1:
    axes = [axes]  # normalize shape

for i, sample in enumerate(samples):
    orig = to_pil(sample["original_image"])
    var = to_pil(sample["variant_image"])

    axes[i][0].imshow(orig)
    axes[i][0].set_title("Original", fontsize=12)
    axes[i][0].axis("off")

    axes[i][1].imshow(var)
    axes[i][1].set_title("Variant", fontsize=12)
    axes[i][1].axis("off")

plt.subplots_adjust(hspace=0.15, wspace=0.02)
plt.show()
```

---

## Dataset Summary

**paper:** https://arxiv.org/abs/2602.01666

The **Moonworks Lunara Aesthetic II Dataset** is a compact image variation dataset designed
for studying **identity preservation and contextual consistency** in image editing and image-to-image generation.

Each sample consists of:

- an **original anchor image**
- a **variant image** with controlled contextual or aesthetic changes

---

## Intended Use

This dataset is intended for:

- Benchmarking **image editing** and **image variation** models
- Evaluating **identity preservation under aesthetic change**
- Qualitative analysis of contextual transformations
- Research on controlled image-to-image generation

---

## Citation

If you use this dataset, please cite:

```bibtex
@article{wang2026lunaraII,
  title={Moonworks Lunara Aesthetic II: An Image Variation Dataset},
  author={Wang, Yan and Hassan, Partho and Sadeka, Samiha and Soliman, Nada and Abdullah, M M Sayeef and Hassan, Sabit},
  journal={arXiv preprint arXiv:2602.01666},
  year={2026}
}
```
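---

## Example: A Simple Pairwise Metric

The intended uses above often involve quantifying how much of an original image survives a variation. As a crude, model-free baseline, a pixel-level metric such as PSNR can be computed per original/variant pair; serious identity-preservation evaluation would use perceptual or identity-embedding models instead. The sketch below is illustrative only: the `psnr` helper and the synthetic solid-color images are stand-ins, not part of the dataset or the paper's protocol.

```python
import numpy as np
from PIL import Image


def psnr(img_a: Image.Image, img_b: Image.Image) -> float:
    """Peak signal-to-noise ratio (dB) between two same-sized RGB images."""
    a = np.asarray(img_a.convert("RGB"), dtype=np.float64)
    b = np.asarray(img_b.convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)


# Illustrative stand-ins for an original / variant pair; in practice these
# would come from to_pil(sample["original_image"]) and
# to_pil(sample["variant_image"]) as in the usage example above.
orig = Image.new("RGB", (64, 64), (120, 80, 200))
variant = Image.new("RGB", (64, 64), (118, 82, 198))

print(f"PSNR: {psnr(orig, variant):.2f} dB")
```

Higher PSNR means the variant is pixel-wise closer to the original; note that a strong aesthetic variation can legitimately score a low PSNR while still preserving identity, which is why perceptual metrics are preferred for the benchmark uses listed above.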