# CodeReview-Qwen32B (Merged)

A Qwen2.5-Coder-32B-Instruct model fine-tuned for code review.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ronantakizawa/codereview-qwen32b-merged",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ronantakizawa/codereview-qwen32b-merged")
```
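To generate a review, the prompt must match the format used during fine-tuning, which is documented on the adapter model card. As a minimal sketch only, assuming the standard Qwen chat template and a hypothetical review request (reusing `model` and `tokenizer` from the snippet above):

```python
# Hypothetical prompt; see the adapter model card for the exact training format.
messages = [
    {"role": "user", "content": "Review this code:\n\ndef add(a, b):\n    return a+b"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```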

## Details

See the adapter model card for full training details, evaluation results, and prompt format.

| Metric | Base Model (Zero-Shot) | Fine-Tuned | Change |
|---|---|---|---|
| BLEU-4 | 3.82 | 16.91 | +343% |
| ROUGE-L F1 | 0.081 | 0.216 | +167% |
| Sentence-BERT Cosine Sim | 0.477 | 0.526 | +10% |
| Comment Type Accuracy | 0.00 | 0.640 | -- |
- Format: bf16 safetensors (14 shards, ~64GB)
- Training: QLoRA SFT on 48K real code review examples from 504 GitHub repos
- Dataset: ronantakizawa/github-codereview
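At ~64GB in bf16, full-precision loading needs multiple GPUs or substantial unified memory. A 4-bit quantized load is one way to fit the model on smaller hardware; this is a generic `transformers` pattern, not something prescribed by this model card, and it assumes `bitsandbytes` is installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization reduces memory to roughly a quarter of bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "ronantakizawa/codereview-qwen32b-merged",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Quantization can slightly degrade output quality relative to the bf16 weights.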
