---
library_name: transformers
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
tags:
  - code-review
  - qwen2.5-coder
  - fine-tuned
  - code
  - merged
datasets:
  - ronantakizawa/github-codereview
language:
  - en
license: apache-2.0
pipeline_tag: text-generation
---

# CodeReview-Qwen32B (Merged)

Qwen2.5-Coder-32B-Instruct fine-tuned for code review, with the LoRA adapter merged into the base weights so the model can be loaded directly, without PEFT.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ronantakizawa/codereview-qwen32b-merged",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ronantakizawa/codereview-qwen32b-merged")
```
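Once loaded, generation follows the standard chat-template flow inherited from the base Qwen2.5 instruct model. A minimal sketch; the review request below is illustrative, and the exact prompt format used in training is documented in the adapter model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ronantakizawa/codereview-qwen32b-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)

# Illustrative request; see the adapter card for the trained prompt format.
messages = [
    {"role": "user",
     "content": "Review this Python diff:\n+ def add(a, b): return a - b"},
]

# Render the conversation with the model's chat template and generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```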

## Details

See the [adapter model card](https://huggingface.co/ronantakizawa/codereview-qwen32b) for full training details, evaluation results, and prompt format.

| Metric | Base Model (Zero-Shot) | Fine-Tuned | Change |
|--------|----------------------|------------|--------|
| **BLEU-4** | 3.82 | **16.91** | +343% |
| **ROUGE-L F1** | 0.081 | **0.216** | +167% |
| **Sentence-BERT Cosine Sim** | 0.477 | **0.526** | +10% |
| **Comment Type Accuracy** | 0.00 | **0.640** | -- |

- **Format:** bf16 safetensors (14 shards, ~64GB)
- **Training:** QLoRA SFT on 48K real code review examples from 504 GitHub repos
- **Dataset:** [ronantakizawa/github-codereview](https://huggingface.co/datasets/ronantakizawa/github-codereview)
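At ~64GB in bf16, the merged checkpoint will not fit on a single consumer GPU at full precision. One common workaround (not part of this model card) is loading it in 4-bit with bitsandbytes; a sketch, assuming `bitsandbytes` is installed and with illustrative quantization settings:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit settings; tune for your hardware and quality needs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ronantakizawa/codereview-qwen32b-merged",
    device_map="auto",
    quantization_config=bnb_config,
)
```

4-bit NF4 brings the weight footprint down to roughly 20GB at some cost in output quality, which may matter for nuanced review comments.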