Dataset: nvidia/Aegis-AI-Content-Safety-Dataset-2.0
How to use llm-semantic-router/mlcommons-safety-classifier-level1-binary with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained("jhu-clsp/mmBERT-base", num_labels=2)
model = PeftModel.from_pretrained(base_model, "llm-semantic-router/mlcommons-safety-classifier-level1-binary")
```

A LoRA-finetuned multilingual BERT model for binary content safety classification (safe/unsafe), following the MLCommons AI Safety Hazard Taxonomy.
This model is Level 1 of a hierarchical safety classification system: a binary gate that flags content as safe or unsafe before any finer-grained categorization.
The model uses mmBERT (Multilingual ModernBERT) as the base, supporting 1800+ languages.
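The hierarchical design can be sketched as a simple routing function, where this binary model acts as a fast first-pass gate. The `classify_level1` and `classify_level2` helpers below are hypothetical stand-ins for the actual model calls, not part of this repository:

```python
def classify_level1(text: str) -> str:
    # Hypothetical stand-in for the binary safe/unsafe model above.
    return "unsafe" if "attack" in text.lower() else "safe"

def classify_level2(text: str) -> str:
    # Hypothetical stand-in for a Level 2 hazard-category classifier.
    return "violent_crimes"

def route(text: str) -> dict:
    # Level 1: cheap binary gate; only escalate flagged content.
    if classify_level1(text) == "safe":
        return {"label": "safe"}
    # Level 2: fine-grained MLCommons hazard category.
    return {"label": "unsafe", "category": classify_level2(text)}

print(route("How do I make a cake?"))
print(route("How do I plan an attack?"))
```

The point of the split is cost: most traffic stops at the cheap binary check, and only the unsafe fraction pays for the more expensive category model.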
Performance:

| Metric | Value |
|---|---|
| Recall | 86.1% |
| F1 Score | 86.5% |
| False Positive Rate | 13.1% |
| Accuracy | 86.6% |
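These metrics follow the standard binary-classification definitions, treating "unsafe" as the positive class. A quick sketch of how they relate (the confusion-matrix counts below are illustrative, not the model's actual evaluation numbers):

```python
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)               # fraction of unsafe content caught
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)                  # safe content wrongly flagged
    return {"recall": recall, "f1": f1, "fpr": fpr, "accuracy": accuracy}

# Illustrative counts chosen to roughly reproduce the table above.
print(metrics(tp=861, fp=131, tn=869, fn=139))
```

For a safety gate, the recall/FPR trade-off is the one to watch: recall measures missed unsafe content, while the false positive rate measures how much benign traffic gets needlessly escalated.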
LoRA configuration:

| Parameter | Value |
|---|---|
| Rank (r) | 32 |
| Alpha | 64 |
| Dropout | 0.1 |
| Target Modules | attn.Wqkv, attn.Wo, mlp.Wi, mlp.Wo |
| Trainable Parameters | 6.76M (2.15%) |
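The ~6.76M figure is consistent with LoRA's parameter count of r·(d_in + d_out) per adapted weight matrix. A back-of-the-envelope check, assuming ModernBERT-base dimensions (22 layers, hidden size 768, GLU MLP with a 1152-dim intermediate) — these shapes are an assumption, not stated in this card:

```python
def lora_params(r: int, shapes: list[tuple[int, int]], layers: int) -> int:
    # Each adapted W (d_in x d_out) gains A (d_in x r) and B (r x d_out).
    return layers * sum(r * (d_in + d_out) for d_in, d_out in shapes)

# Assumed per-layer shapes for the four target modules:
shapes = [
    (768, 2304),  # attn.Wqkv (fused q, k, v projections)
    (768, 768),   # attn.Wo
    (768, 2304),  # mlp.Wi (GLU: gate + up, 2 x 1152)
    (1152, 768),  # mlp.Wo
]
total = lora_params(r=32, shapes=shapes, layers=22)
print(total)  # 6758400, i.e. ~6.76M
```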
Training hyperparameters:

| Parameter | Value |
|---|---|
| Epochs | 10 |
| Batch Size | 64 |
| Learning Rate | 3e-4 |
| Optimizer | AdamW |
| Scheduler | Linear warmup |
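A linear-warmup schedule ramps the learning rate from 0 to the peak over the warmup steps; in the common warmup-plus-linear-decay variant (e.g. transformers' `get_linear_schedule_with_warmup`) it then decays back to 0. A minimal sketch, with the warmup and total step counts chosen purely for illustration:

```python
def linear_warmup_lr(step: int, peak_lr: float, warmup: int, total: int) -> float:
    if step < warmup:
        return peak_lr * step / warmup  # linear ramp up from 0
    # Linear decay from the peak back to 0 over the remaining steps.
    return peak_lr * max(0.0, (total - step) / (total - warmup))

# Illustrative: 3e-4 peak (as in the table), 100 warmup steps, 1000 total.
for s in (0, 50, 100, 550, 1000):
    print(s, linear_warmup_lr(s, 3e-4, warmup=100, total=1000))
```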
Training hardware:

| Component | Specification |
|---|---|
| GPU | AMD Instinct MI300X |
| VRAM | 192GB HBM3 |
| Platform | ROCm 6.2 |
| Container | rocm/pytorch:rocm6.2_ubuntu22.04_py3.10_pytorch_release_2.3.0 |
| Training Time | ~4 minutes |
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Load base model and tokenizer
base_model = "jhu-clsp/mmBERT-base"
tokenizer = AutoTokenizer.from_pretrained("llm-semantic-router/mlcommons-safety-classifier-level1-binary")
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)
model = PeftModel.from_pretrained(model, "llm-semantic-router/mlcommons-safety-classifier-level1-binary")

# Classify
text = "How do I make a cake?"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
outputs = model(**inputs)
prediction = outputs.logits.argmax(-1).item()
label = "safe" if prediction == 0 else "unsafe"
print(f"Classification: {label}")
```
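To get a confidence score rather than just the argmax label, apply softmax to the logits — in the snippet above that would be `outputs.logits.softmax(-1)`. A pure-Python sketch makes the math concrete (the logit values here are made up for illustration):

```python
import math

def softmax(logits: list[float]) -> list[float]:
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for ["safe", "unsafe"]:
probs = softmax([2.1, -1.3])
print({"safe": round(probs[0], 3), "unsafe": round(probs[1], 3)})
```

A probability near 0.5 signals an uncertain prediction, which is useful if you want to route borderline inputs to a stricter reviewer instead of trusting the binary label.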
Label mapping:

```json
{
  "safe": 0,
  "unsafe": 1
}
```
This model is designed for binary content safety screening, e.g. as the first stage of a hierarchical moderation pipeline.
```bibtex
@misc{mlcommons-safety-classifier,
  title={MLCommons AI Safety Classifier},
  author={LLM Semantic Router Team},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/llm-semantic-router/mlcommons-safety-classifier-level1-binary}
}
```
License: Apache 2.0

Base model: jhu-clsp/mmBERT-base