Update README.md
README.md CHANGED
@@ -15,8 +15,8 @@ A real-time demo is available here: http://clipdrop.co/stable-diffusion-turbo
 
 ### Model Description
 SDXL-Turbo is a distilled version of [SDXL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), trained for real-time synthesis.
-SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](
-image diffusion models in 1
+SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling large-scale foundational
+image diffusion models in 1 to 4 steps at high image quality.
 This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an
 adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps.
 
@@ -31,7 +31,7 @@ For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models),
 which implements the most popular diffusion frameworks (both training and inference).
 
 - **Repository:** https://github.com/Stability-AI/generative-models
-- **Paper:**
+- **Paper:** https://stability.ai/research/adversarial-diffusion-distillation
 - **Demo:** http://clipdrop.co/stable-diffusion-turbo
 
 
@@ -39,9 +39,9 @@ which implements the most popular diffusion frameworks (both training and inference).
 ![comparison1](comparison1.png)
 ![comparison2](comparison2.png)
 The charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models.
-SDXL-Turbo
+SDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps.
 In addition, we see that using four steps for SDXL-Turbo further improves performance.
-For details on the user study, we refer to the [research paper](
+For details on the user study, we refer to the [research paper](https://stability.ai/research/adversarial-diffusion-distillation).
 
 
 ## Uses
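The updated description states that SDXL-Turbo samples in 1 to 4 steps. A minimal single-step sampling sketch with the `diffusers` library is shown below; the `stabilityai/sdxl-turbo` checkpoint id, the prompt, and the output filename are illustrative assumptions, not part of this diff.

```python
# Minimal sketch: single-step text-to-image sampling with SDXL-Turbo via diffusers.
# Assumption: the "stabilityai/sdxl-turbo" checkpoint id and the prompt are illustrative.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

prompt = "A photo of a red fox in an autumn forest"

# SDXL-Turbo is trained for few-step sampling: classifier-free guidance is
# disabled (guidance_scale=0.0) and a single denoising step is used here.
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("sdxl_turbo_sample.png")
```

Raising `num_inference_steps` to 4 in the same call corresponds to the multi-step setting that the description above says further improves performance.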