Qwen3 EAGLE3 – Weighted Loss Variants
Collection
Qwen3-8B draft models collection for CMU 11-711 Course Project · 7 items · Updated
Reimplementation of the AdaSPEC (adaptive speculative decoding) training objective, used as a baseline comparison.
Part of a course project evaluating per-step weighted loss functions for training EAGLE3 draft models. Full pipeline and source: https://github.com/XLOverflow/anlp_course_project
- Target model: Qwen/Qwen3-8B
- Draft initialized from: AngelSlim/Qwen3-8B_eagle3
- Training data: see `scripts/data/` in the project repo
- Checkpoint: `baseline-uniform/epoch_4_step_82000`

| Dataset | τ (accept. length) | Speedup | Accuracy |
|---|---|---|---|
| GSM8K | 6.856 | 4.289× | 95.15% |
| MATH500 | 6.678 | 4.206× | 94.40% |

Baselines for reference: Vanilla – 1× speedup, EAGLE-orig – 2× speedup.
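As a rough sanity check on the numbers above, a common first-order cost model of speculative decoding (an assumption for illustration, not the project's evaluation methodology) relates speedup S to acceptance length τ and the draft/target per-token cost ratio c via S ≈ τ / (1 + τ·c). The sketch below solves for the c implied by the GSM8K row and cross-checks it against MATH500:

```python
# First-order speculative-decoding cost model (an illustrative assumption,
# not the project's methodology): per verification cycle the draft proposes
# tokens at relative cost c each, the target verifies once, and tau tokens
# are accepted on average, giving speedup S = tau / (1 + tau * c).

def implied_cost_ratio(tau: float, speedup: float) -> float:
    """Solve S = tau / (1 + tau * c) for c."""
    return (tau / speedup - 1.0) / tau

def predicted_speedup(tau: float, c: float) -> float:
    return tau / (1.0 + tau * c)

# GSM8K row from the table above: tau = 6.856, measured speedup = 4.289x.
c = implied_cost_ratio(6.856, 4.289)
print(f"implied draft/target cost ratio c ~= {c:.3f}")

# Cross-check against the MATH500 row: tau = 6.678, measured 4.206x.
print(f"predicted MATH500 speedup ~= {predicted_speedup(6.678, c):.2f}x")
```

Under this simple model the two rows are mutually consistent: the cost ratio implied by GSM8K (about 0.09) predicts a MATH500 speedup of roughly 4.2×, close to the measured 4.206×.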
Files:

- `model.safetensors` – draft model weights (~763 MB)
- `config.json` – model config

Corresponds to `outputs/eagle3-adaspec/epoch_0_step_17026` in the original training output. Optimizer state (~3 GB) is not uploaded; use the project repo's training scripts to retrain from scratch if needed.
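For orientation, the ~763 MB weights file is consistent with a draft model of roughly 400M parameters if the weights are stored at 2 bytes per parameter, e.g. bfloat16 (an assumption; check `config.json` for the actual dtype):

```python
# Back-of-envelope parameter count from checkpoint size.
# Assumption: weights stored at 2 bytes/parameter (e.g. bfloat16).
size_bytes = 763 * 1024**2      # ~763 MB, as reported above
bytes_per_param = 2
params = size_bytes / bytes_per_param
print(f"~{params / 1e6:.0f}M parameters")
```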
```python
from huggingface_hub import snapshot_download

draft_path = snapshot_download(repo_id="XLOverflow/qwen3-eagle3-adaspec")
# Then load with EAGLE's EaModel – see scripts/eval/eval_combined.py in the project repo.
```
Base model: AngelSlim/Qwen3-8B_eagle3