needle-1M-bench + Qwen3 quantizations
Long-context faithfulness benchmark plus audit-friendly Qwen3 quantized releases. Outputs ship; inputs are auditable.
INT4 (AWQ) quantization of Qwen/Qwen2.5-Math-7B-Instruct. Math-specialized 7B model; the quantized weights fit on a single consumer GPU (~10 GB VRAM at the full 32K context, less at shorter contexts).
| Spec | Value |
|---|---|
| Source params | 7B (math-specialized) |
| Quantized weights | ~5.2 GB on disk |
| Inference VRAM (incl. KV cache @ 32K context) | ~10 GB |
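The VRAM figure can be sanity-checked with back-of-the-envelope arithmetic. The config values below (28 layers, 4 KV heads, head dim 128, fp16 KV cache) are assumptions based on Qwen2.5-7B's published architecture, not measurements from this release:

```python
# Back-of-the-envelope VRAM estimate (assumed Qwen2.5-7B config values).
LAYERS = 28        # num_hidden_layers (assumed)
KV_HEADS = 4       # num_key_value_heads, GQA (assumed)
HEAD_DIM = 128     # per-head dimension (assumed)
KV_BYTES = 2       # fp16 KV cache
CONTEXT = 32_768   # --max-model-len

# KV cache stores K and V per layer, per KV head, per head dim, per token.
kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * KV_BYTES   # bytes
kv_total_gib = kv_per_token * CONTEXT / 2**30

weights_gb = 5.2   # on-disk size from the table above

print(f"KV cache per token: {kv_per_token} B")        # 57344 B (56 KiB)
print(f"KV cache @ 32K:     {kv_total_gib:.2f} GiB")  # 1.75 GiB
print(f"weights + KV cache: ~{weights_gb + kv_total_gib:.1f} GB")
```

Weights plus KV cache lands around 7 GB; runtime overhead (activations, CUDA context, allocator slack) accounts for the gap up to the ~10 GB in the table.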
Serve with vLLM:

```shell
vllm serve drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4 \
  --quantization awq_marlin \
  --max-model-len 32768
```
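Once the server is running, it exposes an OpenAI-compatible API. A minimal request sketch; the default port 8000 is an assumption (override with `--port` when serving):

```python
import json

# OpenAI-compatible chat completions payload for the vLLM server.
payload = {
    "model": "drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4",
    "messages": [
        {"role": "user", "content": "Solve 2x + 3 = 11. Show your steps."}
    ],
    "max_tokens": 256,
    "temperature": 0.0,  # deterministic decoding suits math problems
}
body = json.dumps(payload)

# POST to http://localhost:8000/v1/chat/completions, e.g.:
#   import requests
#   r = requests.post("http://localhost:8000/v1/chat/completions",
#                     data=body, headers={"Content-Type": "application/json"})
print(body)
```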
Or load directly with Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# AutoModelForCausalLM picks up the AWQ quantization config from the repo.
tok = AutoTokenizer.from_pretrained("drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4")
model = AutoModelForCausalLM.from_pretrained(
    "drawais/Qwen2.5-Math-7B-Instruct-AWQ-INT4",
    device_map="auto",
)
```
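For manual prompting, Qwen instruct models use a ChatML-style template. A sketch of the string `tok.apply_chat_template` renders; illustrative only (prefer the tokenizer's own template, and the default system message here is an assumption):

```python
# ChatML-style prompt as used by Qwen instruct models (illustrative sketch;
# the system message is a placeholder, not this release's documented default).
def build_prompt(user_msg: str,
                 system_msg: str = "You are a helpful assistant.") -> str:
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt("Solve 2x + 3 = 11.")
print(prompt)
```

In practice, `tok.apply_chat_template(messages, add_generation_prompt=True)` builds this string for you and stays in sync with the repo's template.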
A leaderboard score on drawais/needle-1M-bench-mvp will be posted after upload.
License: Apache 2.0 (inherited from the base model).