ReasoningShield committed ee7c993 (verified) · Parent: 8c0c0ef

Update README.md

Files changed (1): README.md (+42 -36)

README.md CHANGED
@@ -4,9 +4,11 @@ language:
  - en
  metrics:
  - accuracy
+ - precision
+ - recall
  - f1
  base_model:
- - meta-llama/Llama-3.2-1B-Instruct
+ - meta-llama/Llama-3.2-3B-Instruct
  pipeline_tag: text-generation
  library_name: transformers
  tags:
@@ -48,8 +50,8 @@ datasets:
  </a>

  <!-- License -->
- <a href="https://www.apache.org/licenses/LICENSE-2.0 " target="_blank">
- <img alt="Model License" src="https://img.shields.io/badge/Model%20License-Apache_2.0-green.svg? ">
+ <a href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank" style="margin: 2px;">
+ <img alt="Model License" src="https://img.shields.io/badge/Model%20License-Apache_2.0-green.svg? " style="display: inline-block; vertical-align: middle;"/>
  </a>

  </div>
@@ -59,18 +61,16 @@ datasets:

  ## 🛡 1. Model Overview

- ***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in intermediate reasoning steps in Large Reasoning Models (LRMs) before generating final answers. It excels in detecting harmful content that may be concealed within seemingly harmless reasoning traces, ensuring robust safety for LRMs.
-
- - **Primary Use Case** : Detecting and mitigating hidden risks in reasoning traces of Large Reasoning Models (LRMs)
+ ***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in the intermediate reasoning steps of Large Reasoning Models (LRMs). It excels at detecting harmful content concealed within seemingly harmless reasoning traces, ensuring robust safety alignment for LRMs.

  - **Key Features** :
- - **High Performance**: Achieves an average F1 score exceeding **92%** in QT Moderation tasks, outperforming existing models across both in-distribution (ID) and out-of-distribution (OOD) test sets, achieving **state-of-the-art (SOTA)** performance.
+ - **Strong Performance**: Sets a CoT Moderation **SOTA** with an average F1 above 91% on open-source LRM traces, outperforming LlamaGuard-4 by 36% and GPT-4o by 16%.

- - **Enhanced Explainability** : Employs a structured analysis process that improves decision transparency and provides clearer insights into safety assessments.
+ - **Robust Generalization** : Despite being trained exclusively on a 7K-sample dataset, it generalizes across varied reasoning paradigms, cross-task scenarios, and unseen data distributions.

- - **Robust Generalization** : Notably, despite being trained on our 7K QT dataset only, ***ReasoningShield*** also demonstrates competitive performance in Question-Answer (QA) moderation on traditional benchmarks, rivaling baselines trained on datasets 10 times larger, aligning with the **less is more** principle.
+ - **Enhanced Explainability** : Provides stepwise risk analysis, effectively addressing the "black-box" limitation of traditional moderation models.

- - **Efficient Design** : Built on compact 1B/3B base models, it requires only **2.30 GB/5.98 GB** GPU memory during inference, facilitating cost-effective deployment on resource-constrained devices.
+ - **Efficient Design** : Built on compact base models, it requires low GPU memory (e.g., 2.3 GB for the 1B version), enabling cost-effective deployment on resource-constrained devices.

  - **Base Model**: https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct & https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
 
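For orientation, moderating a (query, reasoning trace) pair with a checkpoint like this is a standard `transformers` generation call. A minimal sketch, assuming a hypothetical repo id `ReasoningShield/ReasoningShield-1B` and a plain-text prompt layout; the released checkpoints may define their own chat template and output format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual published checkpoint.
MODEL_ID = "ReasoningShield/ReasoningShield-1B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# bfloat16 keeps the 1B model near the ~2.3 GB inference footprint cited above.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

query = "How do I pick a lock?"
trace = "Step 1: The user may be locked out of their own home..."

# Assumed prompt layout: the moderator judges the (query, reasoning trace) pair.
messages = [{
    "role": "user",
    "content": f"Query: {query}\nReasoning trace: {trace}\n"
               "Assess the safety of this reasoning trace.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
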
@@ -85,26 +85,24 @@ datasets:
  </div>


- - The model is trained on a high-quality dataset of 7,000 QT pairs, please refer to the following link for detailed information:
-
+ - The model is trained on a high-quality dataset of 7,000 (Query, CoT) pairs. Please refer to the following link for detailed information:
  - ***ReasoningShield-Dataset:*** https://huggingface.co/datasets/ReasoningShield/ReasoningShield-Dataset

  - **Risk Categories** :

- - Violence & Physical Harm
+ - Violence
  - Hate & Toxicity
  - Deception & Misinformation
- - Rights-Related Risks
- - Sexual Content & Exploitation
- - Child-Related Harm
- - Cybersecurity & Malware Threats
+ - Rights Violation
+ - Sex
+ - Child Abuse
+ - CyberSecurity
  - Prohibited Items
  - Economic Harm
  - Political Risks
- - Safe
  - Additionally, to enhance generalization to OOD scenarios, we introduce an **Other Risks** category in the prompt.

- - **Risk Levels** :
+ - **Safety Levels** :

  - Level 0 (Safe) : No potential for harm.
  - Level 0.5 (Potentially Harmful) : May inadvertently disclose harmful information but lacks specific implementation details.
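
The graded scale lends itself to a simple thresholding step when a binary verdict is needed. A minimal sketch; treating Level 0.5 as unsafe is an assumption, and a deployment could instead route such cases to human review:

```python
# Sketch: collapse ReasoningShield's graded safety levels into a binary
# moderation verdict. Flagging Level 0.5 ("potentially harmful") as unsafe
# is a policy choice assumed here, not prescribed by the card.
def is_unsafe(level: float, threshold: float = 0.0) -> bool:
    """Flag a reasoning trace whose assessed safety level exceeds the threshold."""
    return level > threshold

print(is_unsafe(0.0))  # False -> Level 0, safe
print(is_unsafe(0.5))  # True  -> Level 0.5, flagged under this policy
```
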
@@ -127,7 +125,7 @@ datasets:

  #### Stage 2: Direct Preference Optimization Training

- - **Objective** : Refining the model's performance on hard negative samples constructed from the ambiguous case and enhancing its robustness against adversarial scenarios.
+ - **Objective** : Refining the model's performance on hard negative samples constructed from ambiguous cases and enhancing its robust generalization.
  - **Dataset Size** : 2,642 hard negative samples.
  - **Batch Size** : 2
  - **Gradient Accumulation Steps** : 8
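
Stage 2's settings map directly onto a standard DPO run. A sketch of the equivalent configuration, assuming a recent version of TRL; the checkpoint path and data file are placeholders, since the card does not name the training stack actually used:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholders: the card names neither the Stage-1 checkpoint nor the file
# format of the 2,642 hard negative samples.
CKPT = "path/to/stage1-sft-checkpoint"
model = AutoModelForCausalLM.from_pretrained(CKPT)
tokenizer = AutoTokenizer.from_pretrained(CKPT)

# TRL's DPOTrainer expects "prompt"/"chosen"/"rejected" columns, so each hard
# negative is assumed to be stored as one preference pair over an ambiguous case.
train_dataset = load_dataset("json", data_files="hard_negatives.jsonl", split="train")

config = DPOConfig(
    output_dir="reasoningshield-dpo",
    per_device_train_batch_size=2,   # Batch Size: 2 (from the card)
    gradient_accumulation_steps=8,   # Gradient Accumulation Steps: 8
)
DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,
).train()
```
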
@@ -140,28 +138,36 @@ These two-stage training procedures significantly enhance ***ReasoningShield's**

  ## 🏆 3. Performance Evaluation

- We evaluate ***ReasoningShield*** and baselines on four diverse test sets (AIR-Bench, SALAD-Bench, BeaverTails, Jailbreak-Bench) in **QT Moderation**. <strong>Bold</strong> indicates the best results and <ins>underline</ins> the second best. The results are averaged over five runs on the four datasets; a performance comparison of selected models is reported below:
+ ***ReasoningShield*** achieves **state-of-the-art** performance on CoT Moderation; all values below are F1 scores (%). **Bold** denotes the best results and <ins>underline</ins> the second best. ***OSS*** refers to samples from open-source LRMs, while ***CSS*** refers to those from commercial LRMs (not included in our training dataset). Samples from BeaverTails and Jailbreak-Bench are likewise excluded from our training dataset to test generalization capability.

  <div align="center">

- | **Model** | **Size** | **Accuracy (%)** | **Precision (%)** | **Recall (%)** | **F1 (%)** |
- | :-----------------------: | :--------: | :----------------: | :----------------: | :--------------: | :-----------: |
- | Perspective | - | 39.4 | 0.0 | 0.0 | 0.0 |
- | OpenAI Moderation | - | 59.2 | 71.4 | 54.0 | 61.5 |
- | LlamaGuard-3-1B | 1B | 71.4 | 87.2 | 61.7 | 72.3 |
- | LlamaGuard-3-8B | 8B | 74.1 | <ins>93.7</ins> | 61.2 | 74.0 |
- | LlamaGuard-4 | 12B | 62.1 | 91.4 | 41.0 | 56.7 |
- | Aegis-Permissive | 7B | 59.6 | 67.0 | 64.9 | 66.0 |
- | Aegis-Defensive | 7B | 62.9 | 64.6 | 85.4 | 73.5 |
- | WildGuard | 7B | 68.1 | **99.4** | 47.4 | 64.2 |
- | MD-Judge | 7B | 79.1 | 86.9 | 76.9 | 81.6 |
- | Beaver-Dam | 7B | 62.6 | 78.4 | 52.5 | 62.9 |
- | **ReasoningShield (Ours)** | 1B | <ins>88.6</ins> | 89.9 | <ins>91.3</ins> | <ins>90.6</ins> |
- | **ReasoningShield (Ours)** | 3B | **90.5** | 91.1 | **93.4** | **92.2** |
+ | **Model** | **Size** | **AIR (OSS)** | **AIR (CSS)** | **SALAD (OSS)** | **SALAD (CSS)** | **BeaverTails (OSS)** | **BeaverTails (CSS)** | **Jailbreak (OSS)** | **Jailbreak (CSS)** | **Avg (OSS)** | **Avg (CSS)** |
+ | :---------------------: | :------: | :-----------: | :-----------: | :-------------: | :-------------: | :-------------------: | :-------------------: | :-----------------: | :-----------------: | :-----------: | :-----------: |
+ | **Moderation API** | | | | | | | | | | | |
+ | Perspective | - | 0.0 | 0.0 | 0.0 | 11.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 5.2 |
+ | OpenAI Moderation | - | 45.7 | 13.2 | 61.7 | 66.7 | 64.9 | 29.2 | 70.9 | 41.1 | 60.7 | 44.8 |
+ | **Prompted LLM** | | | | | | | | | | | |
+ | GPT-4o | - | 70.1 | 47.4 | 75.3 | 75.4 | 79.3 | 60.6 | 82.0 | 68.7 | 76.0 | 65.6 |
+ | Qwen-2.5 | 72B | 79.1 | 59.8 | 82.1 | **86.0** | 81.1 | 61.5 | 84.2 | 71.9 | 80.8 | 74.0 |
+ | Gemma-3 | 27B | 83.2 | 71.6 | 80.2 | 78.3 | 79.2 | **68.9** | 86.6 | 73.2 | 81.6 | 74.4 |
+ | Mistral-3.1 | 24B | 65.0 | 45.3 | 77.5 | 73.4 | 73.7 | 55.1 | 77.3 | 54.1 | 73.0 | 60.7 |
+ | **Finetuned LLM** | | | | | | | | | | | |
+ | LlamaGuard-1 | 7B | 20.3 | 5.7 | 22.8 | 48.8 | 27.1 | 18.8 | 53.9 | 5.7 | 31.0 | 28.0 |
+ | LlamaGuard-2 | 8B | 63.3 | 35.7 | 59.8 | 40.0 | 63.3 | 47.4 | 68.2 | 28.6 | 62.4 | 38.1 |
+ | LlamaGuard-3 | 8B | 68.3 | 33.3 | 70.4 | 56.5 | 77.6 | 30.3 | 78.5 | 20.5 | 72.8 | 42.2 |
+ | LlamaGuard-4 | 12B | 55.0 | 23.4 | 46.1 | 49.6 | 57.0 | 13.3 | 69.2 | 16.2 | 56.2 | 33.7 |
+ | Aegis-Permissive | 7B | 56.3 | 51.0 | 66.5 | 67.4 | 65.8 | 35.3 | 70.7 | 33.3 | 64.3 | 53.9 |
+ | Aegis-Defensive | 7B | 71.2 | 56.9 | 76.4 | 67.8 | 73.9 | 27.0 | 75.4 | 53.2 | 73.6 | 54.9 |
+ | WildGuard | 7B | 58.8 | 45.7 | 66.7 | 76.3 | 68.3 | 51.3 | 79.6 | 55.3 | 67.6 | 62.1 |
+ | MD-Judge | 7B | 71.8 | 44.4 | 83.4 | 83.2 | 81.0 | 50.0 | 86.8 | 56.6 | 80.1 | 66.0 |
+ | Beaver-Dam | 7B | 50.0 | 17.6 | 52.6 | 36.6 | 71.1 | 12.7 | 60.2 | 36.0 | 58.2 | 26.5 |
+ | **ReasoningShield (Ours)** | 1B | <ins>94.2</ins> | <ins>83.7</ins> | <ins>91.5</ins> | 80.5 | <ins>89.0</ins> | 60.0 | <ins>90.1</ins> | <ins>74.2</ins> | <ins>89.4</ins> | <ins>77.7</ins> |
+ | **ReasoningShield (Ours)** | 3B | **94.5** | **86.7** | **94.0** | <ins>84.8</ins> | **90.4** | <ins>64.6</ins> | **92.3** | **76.2** | **91.8** | **81.4** |

  </div>

- Additionally, ***ReasoningShield*** exhibits strong generalization in traditional QA Moderation, even though it is trained on a QT pairs dataset of just 7K samples. Its performance rivals baselines trained on datasets 10 times larger, aligning with the "less is more" principle.
+ Additionally, ***ReasoningShield*** exhibits strong generalization on traditional Answer Moderation, even though it is trained on a CoT Moderation dataset of just 7K samples. Its performance rivals baselines trained on datasets 10 times larger, aligning with the "less is more" principle.

  <div align="center">
  <img src="images/bar.png" alt="QT and QA Performance" style="width: 100%; height: auto;">
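
The table reports per-test-set F1 (%). A minimal scoring sketch with scikit-learn on toy labels; how the Avg (OSS)/(CSS) columns aggregate the four test sets is not stated in the card, so the pooled call below is only one plausible reading:

```python
from sklearn.metrics import f1_score

# Toy labels: 1 = harmful reasoning trace, 0 = safe.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Per-test-set score: F1 on the harmful class, as a percentage.
print(f"F1: {100 * f1_score(y_true, y_pred):.1f}")  # 75.0 on these toy labels

# One plausible reading of the Avg columns: concatenate all four test sets'
# labels and score once (a plain mean of the four per-set F1 values does not
# reproduce the reported averages exactly).
pooled_true = y_true * 4  # stand-in for the four concatenated test sets
pooled_pred = y_pred * 4
print(f"Avg: {100 * f1_score(pooled_true, pooled_pred):.1f}")
```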
 