# CodeReview-Qwen32B (Merged)

A Qwen2.5-Coder-32B-Instruct model fine-tuned for code review.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ronantakizawa/codereview-qwen32b-merged",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ronantakizawa/codereview-qwen32b-merged")
```
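The exact prompt format used during fine-tuning is documented on the adapter model card. As a rough sketch, Qwen2.5-Instruct models use the ChatML conversation format; the helper below builds such a prompt by hand (the system message and diff are illustrative placeholders, not the card's documented format):

```python
# Illustrative only: the fine-tuned prompt format is documented on the adapter
# model card. Qwen2.5-Instruct models use ChatML, sketched here as plain
# strings so no tokenizer download is needed.
def build_chatml_prompt(diff: str, system: str = "You are a code reviewer.") -> str:
    """Wrap a code diff in ChatML turns, ending with an open assistant turn
    so the model generates the review as its completion."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{diff}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("@@ -1,2 +1,2 @@\n-x = eval(raw)\n+x = int(raw)")
```

In practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which applies the model's own stored template rather than a hand-written one.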
## Details
See the adapter model card for full training details, evaluation results, and prompt format.
| Metric | Base Model (Zero-Shot) | Fine-Tuned | Change |
|---|---|---|---|
| BLEU-4 | 3.82 | 16.91 | +343% |
| ROUGE-L F1 | 0.081 | 0.216 | +167% |
| Sentence-BERT Cosine Sim | 0.477 | 0.526 | +10% |
| Comment Type Accuracy | 0.00 | 0.640 | -- |
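The "Change" column is the relative improvement over the base model's zero-shot score. A quick sanity check of the arithmetic:

```python
def pct_change(base: float, tuned: float) -> int:
    """Relative improvement over the base score, rounded to a whole percent."""
    return round((tuned - base) / base * 100)

assert pct_change(3.82, 16.91) == 343   # BLEU-4
assert pct_change(0.081, 0.216) == 167  # ROUGE-L F1
assert pct_change(0.477, 0.526) == 10   # Sentence-BERT cosine similarity
# Comment Type Accuracy has a base of 0.00, so relative change is undefined ("--").
```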
- Format: bf16 safetensors (14 shards, ~64GB)
- Training: QLoRA SFT on 48K real code review examples from 504 GitHub repos
- Dataset: ronantakizawa/github-codereview
## Model tree

Base model lineage for ronantakizawa/codereview-qwen32b:

- Qwen/Qwen2.5-32B (base model)
- Qwen/Qwen2.5-Coder-32B (fine-tuned)
- Qwen/Qwen2.5-Coder-32B-Instruct (fine-tuned)