Qwen2.5-Math-7B-Instruct-AWQ-INT4

INT4 AWQ quantization of Qwen/Qwen2.5-Math-7B-Instruct, a math-specialized 7B instruct model. Fits on a single consumer GPU with 8 GB+ VRAM at moderate context lengths (~10 GB at the full 32K context).

Footprint

Source params: 7B (math-specialized)
Quantized weights: ~5.2 GB on disk
Inference VRAM (incl. KV cache @ 32K context): ~10 GB
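The VRAM figure above can be sanity-checked with a back-of-envelope estimate. This sketch assumes the published Qwen2.5-7B architecture (28 layers, 4 KV heads via GQA, head dim 128) and an fp16 KV cache; the numbers are illustrative, not measured.

```python
# Back-of-envelope VRAM estimate. Architecture numbers are assumptions
# taken from the Qwen2.5-7B config: 28 layers, 4 KV heads (GQA), head_dim 128.
layers, kv_heads, head_dim = 28, 4, 128
ctx = 32768                          # max context length served

# Per-token KV cache: K and V tensors, fp16 (2 bytes), across all layers.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2
kv_gb = kv_bytes_per_token * ctx / 1024**3

weights_gb = 5.2                     # on-disk size from the table above

print(f"KV cache @ {ctx} tokens: {kv_gb:.2f} GB")
print(f"Weights + KV cache: {weights_gb + kv_gb:.2f} GB (activations and runtime overhead extra)")
```

Weights plus a full 32K KV cache land around 7 GB; runtime overhead (activations, CUDA graphs, allocator slack) accounts for the rest of the ~10 GB figure.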

Quick start

Serve with vLLM:

vllm serve drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4 --quantization awq_marlin --max-model-len 32768

Or load directly with transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4")
model = AutoModelForCausalLM.from_pretrained("drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4", device_map="auto")
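For reference, Qwen2.5 instruct models use a ChatML-style prompt layout. In real code, `tok.apply_chat_template` on the loaded tokenizer is the authoritative source; this hand-rolled helper (`qwen_prompt` is a name introduced here) just makes the token layout visible.

```python
# Sketch of the ChatML-style prompt layout used by Qwen2.5 instruct models.
# Prefer tok.apply_chat_template(...) in practice; qwen_prompt is a
# hypothetical helper for illustration only.
def qwen_prompt(user_msg: str, system: str = "You are a helpful assistant.") -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = qwen_prompt("Solve for x: 2x + 3 = 11.")
print(prompt)
```

The trailing `<|im_start|>assistant\n` cues the model to generate the assistant turn.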

Bench

A leaderboard score on drawais/needle-1M-bench-mvp will be added once evaluation results are uploaded.

License

Apache 2.0 (inherits from base model).
