Alissonerdx committed
Commit e9927fc · verified · 1 Parent(s): 394f71f

Update README.md

Files changed (1): README.md (+5 -8)
README.md CHANGED
@@ -11,15 +11,12 @@ tags:
 
 ### Flux.1-Dev SRPO LoRAs
 
-This repository contains **LoRAs extracted from the SRPO Flux.1-Dev FP8 model**, designed to provide modular and lightweight adaptations without requiring the full fine-tuned model.
-
-To build these LoRAs, I followed extraction approaches inspired by the official [tencent/SRPO](https://huggingface.co/tencent/SRPO), as well as community repositories such as [rockerBOO/flux.1-dev-SRPO](https://huggingface.co/rockerBOO/flux.1-dev-SRPO) and [wikeeyang/SRPO-Refine-Quantized-v1.0](https://huggingface.co/wikeeyang/SRPO-Refine-Quantized-v1.0).
-
-These LoRAs enable:
+These LoRAs were extracted from **three sources**:
+- the original SRPO (Flux.1-Dev): tencent/SRPO
+- community checkpoint: rockerBOO/flux.1-dev-SRPO
+- community checkpoint (quantized/refined): wikeeyang/SRPO-Refine-Quantized-v1.0
 
-* **Flexible mixing** with other LoRAs
-* **Lower storage cost** compared to full models
-* **Efficient experimentation** across different ranks (8, 16, 32, 64, 128)
+They are designed to provide modular, lightweight adaptations you can mix with other LoRAs, reducing storage and enabling fast experimentation across ranks (8, 16, 32, 64, 128).
 
 ![Comparison](images/compare_oficial_lora_prompt_1.png)
 ![Comparison](images/compare_oficial_lora_prompt_2.png)
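
The extraction step the README refers to — deriving a low-rank LoRA from the difference between a fine-tuned model and its base — is commonly done with a truncated SVD per weight matrix. The sketch below is a hypothetical illustration of that idea on toy matrices (the `extract_lora` helper and the shapes are invented for the demo; this is not code from the repository, and real extraction runs this per layer over the full checkpoints):

```python
import numpy as np

def extract_lora(w_base, w_tuned, rank):
    """Approximate (w_tuned - w_base) with a rank-`rank` LoRA pair.

    Returns (lora_up, lora_down) so that lora_up @ lora_down is the
    best rank-`rank` approximation of the weight delta.
    """
    delta = w_tuned - w_base
    # Thin SVD of the weight delta.
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Keep only the top-`rank` singular directions; fold the singular
    # values into the "up" factor.
    lora_up = u[:, :rank] * s[:rank]   # (out_dim, rank)
    lora_down = vt[:rank, :]           # (rank, in_dim)
    return lora_up, lora_down

# Toy check: if the fine-tune really changed the weights by a rank-8
# update, extraction at rank 8 recovers it almost exactly.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 32))
low_rank_update = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))
w_tuned = w_base + low_rank_update

up, down = extract_lora(w_base, w_tuned, rank=8)
err = np.linalg.norm(w_base + up @ down - w_tuned)
```

Extracting at several ranks (8, 16, 32, ...) from the same checkpoint pair trades file size against how faithfully the delta is reproduced, which is why the repository publishes multiple rank variants.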