Copyright 2026 Harikrishna Srinivasan
RoBERTa-Large Hate Speech Classifier (LoRA)
Summary
This model is a LoRA fine-tuned RoBERTa-Large model for binary hate speech classification (Hate / Not Hate). It is fine-tuned efficiently with Low-Rank Adaptation (LoRA) via the Hugging Face PEFT library.
Details
Description
- Developed by: Harikrishna Srinivasan
- Model type: RoBERTa-Large fine-tuned with LoRA
- Task: Binary text classification
- Language(s): English
- License: Apache 2.0
- Finetuned from: FacebookAI/roberta-large
This model uses Low-Rank Adaptation (LoRA) to fine-tune only a subset of parameters, enabling efficient training while preserving the strong contextual representation capabilities of RoBERTa-Large.
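The low-rank update that LoRA applies can be sketched as follows. This is a minimal NumPy illustration of the idea, not the PEFT implementation; the dimensions, scaling, and initialization shown are illustrative defaults:

```python
import numpy as np

# LoRA adapts a frozen weight matrix W as W + (alpha / r) * B @ A,
# where A (r x d_in) and B (d_out x r) are the only trained matrices.
d_in, d_out, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # B starts at zero, so the adapted
                                          # model initially equals the base model
W_eff = W + (alpha / r) * B @ A

# Only r * (d_in + d_out) parameters are trained per adapted matrix,
# versus d_in * d_out for full fine-tuning.
lora_params = r * (d_in + d_out)
full_params = d_in * d_out
```

Because `B` is initialized to zero, training starts from the unmodified base model, and the number of trainable parameters grows linearly rather than quadratically with the layer width.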
Sources
- Base Model: https://huggingface.co/FacebookAI/roberta-large
- Training Framework: Hugging Face Transformers + PEFT
- Repository: Not publicly linked (local / academic project)
Uses
Direct Use
This model can be used directly for:
- Hate speech detection in English text
- Moderation pipelines
- Dataset auditing
- Research on implicit hate and biased language
- Pre-filtering content for human moderation
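As a sketch of the pre-filtering use case above, the snippet below routes texts whose hate score exceeds a threshold to a human review queue. The `score_fn` callable and the 0.5 threshold are hypothetical placeholders, not part of this model's API:

```python
from typing import Callable, Iterable

def prefilter(texts: Iterable[str],
              score_fn: Callable[[str], float],
              threshold: float = 0.5) -> tuple[list[str], list[str]]:
    """Split texts into (flagged_for_review, passed) by hate score."""
    flagged, passed = [], []
    for text in texts:
        # Anything at or above the threshold goes to human moderation.
        (flagged if score_fn(text) >= threshold else passed).append(text)
    return flagged, passed
```

In practice, `score_fn` would wrap the model's softmax probability for the Hate class; the threshold trades off moderator workload against recall.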
Dataset Citation
@misc{srinivasan2026hatespeech,
author = {Harikrishna Srinivasan},
title = {Hate-Speech Dataset (Refined and Cleaned Version)},
year = {2026},
publisher = {Hugging Face Datasets},
howpublished = {https://huggingface.co/datasets/Harikrishna-Srinivasan/Hate-Speech}
}
Example:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

BASE_MODEL = "FacebookAI/roberta-large"
MODEL_NAME = "Harikrishna-Srinivasan/Hate-Speech-RoBERTa"

# LoRA adapters load on top of the base classifier, not standalone.
base_model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=2)
model = PeftModel.from_pretrained(base_model, MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
```