# AraCode-7B-LoRA
LoRA adapter weights for **AraCode-7B**, the first open-source Arabic-specialized code explanation model. Load this adapter on top of the base model for Arabic code explanation, generation, and discussion.
## Benchmarks
| Benchmark | Score |
|---|---|
| Arabic Code Explanation | 100% (5/5) |
| MBPP Syntax Rate | 92.3% |
| MBPP Execution Rate | 82.3% |
| Multi-Language (Python / JS / SQL) | 3/3 |
| Inference Speed | 25.9 tok/s |
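The card does not describe the evaluation harness, but the gap between syntax rate and execution rate in the table reflects two different checks: code that parses vs. code that actually runs. A minimal sketch of that distinction (the helper names and sample snippets are illustrative, not the actual MBPP pipeline):

```python
import ast

def syntax_ok(code: str) -> bool:
    """True if the snippet parses as valid Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def executes_ok(code: str) -> bool:
    """True if the snippet runs without raising."""
    if not syntax_ok(code):
        return False
    try:
        exec(code, {})  # run in an isolated namespace
        return True
    except Exception:
        return False

samples = [
    "def add(a, b):\n    return a + b",      # parses and runs
    "def broken(:\n    pass",                # syntax error
    "raise ValueError('runtime failure')",   # parses, fails at runtime
]
syntax_rate = sum(syntax_ok(s) for s in samples) / len(samples)
exec_rate = sum(executes_ok(s) for s in samples) / len(samples)
print(syntax_rate, exec_rate)  # syntax rate >= execution rate, as in the table
```

Execution rate is always bounded above by syntax rate, which matches the 92.3% vs. 82.3% figures above.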
## Usage

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rahimdzx/AraCode-7B-LoRA",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

# Arabic prompt: "Explain the following code in Arabic:"
prompt = "اشرح الكود التالي بالعربية:\ndef fibonacci(n):\n    if n <= 1: return n\n    return fibonacci(n-1) + fibonacci(n-2)"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Available Formats
| Format | Repo | Size | Use Case |
|---|---|---|---|
| GGUF Q4_K_M | AraCode-7B-GGUF | 4.68 GB | Local inference, Ollama, llama.cpp |
| LoRA Adapter | AraCode-7B-LoRA | 162 MB | Fine-tuning, research, Unsloth |
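For the GGUF row, a typical local-inference workflow looks like the following. This is a sketch, not tested against the repo: the exact GGUF filename inside `rahimdzx/AraCode-7B-GGUF` is an assumption, so adjust it to whatever the repo actually contains.

```shell
# Download the quantized model from the Hub (filename is assumed)
huggingface-cli download rahimdzx/AraCode-7B-GGUF \
    AraCode-7B-Q4_K_M.gguf --local-dir .

# Run with llama.cpp's CLI
llama-cli -m AraCode-7B-Q4_K_M.gguf \
    -p "اشرح الكود التالي بالعربية:\nprint('hello')" -n 256
```

For Ollama, the same GGUF file can be wrapped in a Modelfile (`FROM ./AraCode-7B-Q4_K_M.gguf`) and imported with `ollama create`.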
## Links
- GGUF Version: rahimdzx/AraCode-7B-GGUF
## License
Apache 2.0