AnimeLoom — Sakura Haruno LoRA

Character LoRA for Sakura Haruno (Naruto), trained as part of the AnimeLoom anime character-consistency pipeline.

Two adapters are provided in this repo:

Folder   Base model                       Rank   Steps   Images   Resolution
sdxl/    cagliostrolab/animagine-xl-3.1   32     800     18       512
sd15/    Lykon/dreamshaper-8              32     800     18       512

The SDXL adapter is the primary one used by AnimeLoom's identity-keyframe stage (SDXL + character LoRA + IP-Adapter). The SD 1.5 adapter is provided for compatibility with lighter inference pipelines (e.g. AnimateDiff workflows).

Trigger words

1girl, sakura haruno, pink hair, green eyes

Add booru-style descriptors as needed (e.g. red dress, forehead protector, smiling).
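As a small sketch, the trigger words can be kept as a constant and combined with per-image booru descriptors. The helper name and default quality tags below are illustrative, not part of this repo:

```python
# Fixed identity tokens for this LoRA (from the trigger words above).
TRIGGER = "1girl, sakura haruno, pink hair, green eyes"

def build_prompt(*descriptors: str, quality: str = "masterpiece, best quality") -> str:
    """Join the trigger words, booru-style descriptors, and quality tags."""
    parts = [TRIGGER, *descriptors, quality]
    return ", ".join(p for p in parts if p)

prompt = build_prompt("red dress", "forehead protector", "smiling")
```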

Usage — SDXL with PEFT / diffusers

import torch
from diffusers import StableDiffusionXLPipeline
from peft import PeftModel

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",
    torch_dtype=torch.float16,
).to("cuda")

# AnimeLoom's training output is a PEFT adapter, so load it via PeftModel
pipe.unet = PeftModel.from_pretrained(
    pipe.unet,
    "AnimeLoom/sakura-haruno",
    subfolder="sdxl",
)

img = pipe(
    "1girl, sakura haruno, pink hair, green eyes, anime, "
    "masterpiece, best quality, absurdres",
    negative_prompt="blurry, low quality, deformed, extra fingers, 3d render",
    num_inference_steps=28,
    guidance_scale=6.5,
    height=1024, width=1024,
).images[0]
img.save("sakura.png")

Usage — SD 1.5 with diffusers

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights(
    "AnimeLoom/sakura-haruno",
    subfolder="sd15",
    weight_name="pytorch_lora_weights.safetensors",
)

img = pipe(
    "1girl, sakura haruno, pink hair, green eyes, anime, masterpiece",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
img.save("sakura_sd15.png")

Recommended weights & prompting

  • LoRA scale: 0.85 - 1.15 (SDXL), 0.6 - 0.8 (SD 1.5)
  • Pair with anime base models: Animagine XL 3.1, Counterfeit-V3.0, AnythingV5
  • For face consistency in video, combine with IP-Adapter SDXL and Wan2.2-Animate face-lock β€” see the AnimeLoom pipeline.

AnimeLoom video pipeline integration

This LoRA is built to feed AnimeLoom's text-to-anime-video pipeline:

SDXL + this LoRA + IP-Adapter   →  identity keyframe (Phase 2)
        ↓
Wan2.2 I2V                      →  motion driving clip (Phase 3a)
        ↓
Wan2.2-Animate face-lock        →  face from keyframe pasted onto motion (Phase 3b)
        ↓
RIFE + Real-ESRGAN + GFPGAN     →  temporal/spatial upscale + face restore (Phase 4)
        ↓
Final 24 fps anime video with consistent character identity across shots.
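As plain control flow, the four phases chain output-to-input. The sketch below uses placeholder functions (none of these names are real AnimeLoom or Wan2.2 APIs) just to show the handoff:

```python
from functools import reduce

# Each function stands in for the corresponding model stage in the diagram.
def identity_keyframe(prompt):       # Phase 2: SDXL + this LoRA + IP-Adapter
    return {"keyframe": prompt}

def motion_clip(state):              # Phase 3a: Wan2.2 I2V
    return {**state, "clip": "raw_motion"}

def face_lock(state):                # Phase 3b: Wan2.2-Animate face-lock
    return {**state, "clip": "face_locked"}

def upscale_restore(state):          # Phase 4: RIFE + Real-ESRGAN + GFPGAN
    return {**state, "clip": "final_24fps"}

PHASES = [identity_keyframe, motion_clip, face_lock, upscale_restore]

def run(prompt):
    """Thread the artifact through every phase in order."""
    return reduce(lambda state, phase: phase(state), PHASES, prompt)
```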

Limitations

  • Anime-only. Photoreal prompts will degrade quality.
  • InsightFace face-swap does not work on these outputs (it is photoreal-only). For identity rescue use IP-Adapter-FaceID-SDXL or CharacterFaceSwap.
  • Trained on a small image set (18 images at 512 resolution); expect occasional drift on unusual angles, complex outfits, or non-canonical hair styles.
  • The SDXL adapter was trained at 512 res β€” inference at 1024 works but very small details (hair strands, eye highlights) may be slightly less crisp than a 1024-trained adapter.

License

Released under the OpenRAIL++ license (openrail++), inherited from the base model Animagine XL 3.1.

Related models

Part of the AnimeLoom character collection.

Citation

If you use this LoRA in research or production, please credit AnimeLoom and the base model authors (Cagliostro Lab for Animagine XL 3.1, Lykon for DreamShaper 8).


Trained with PEFT as part of the AnimeLoom project.
