# Qwen3-30B-A3B-Instruct-2507-REAM
This model is a compressed version of Qwen/Qwen3-30B-A3B-Instruct-2507, obtained by reducing the number of experts in each MoE layer from 128 to 96 using the REAM method described in https://bknyaz.github.io/blog/2026/moe/. The compressed model has 23B parameters (44GB) instead of the original's 31B (57GB), cutting storage and GPU memory requirements by roughly 25%, while retaining >=94% of the original model's performance across a variety of benchmarks (see the Evaluation section below). Additional efficiency optimizations (e.g., quantization) can be applied in the same way as for the original model.
## Model Details
The model is identical to Qwen/Qwen3-30B-A3B-Instruct-2507 except that the number of experts per MoE layer is reduced from 128 to 96.
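To illustrate what expert reduction means mechanically, here is a minimal sketch of dropping the lowest-scoring experts from a stack of expert weight matrices. The mean-L2-norm score used here is a placeholder assumption for illustration only; REAM's actual selection criterion is described in the linked blog post, and the function name is hypothetical.

```python
import numpy as np

def prune_experts(expert_weights, keep=96):
    """Keep the `keep` highest-scoring experts from an array shaped
    [num_experts, d_out, d_in]. The L2-norm score below is a stand-in;
    REAM uses its own criterion (see the blog post linked above)."""
    flat = expert_weights.reshape(expert_weights.shape[0], -1)
    scores = np.linalg.norm(flat, axis=1)
    # Indices of the retained experts, kept in their original order.
    kept = np.sort(np.argsort(scores)[-keep:])
    return expert_weights[kept], kept

# Toy example: 128 random "experts" reduced to 96.
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 8, 8))
pruned, kept_idx = prune_experts(w, keep=96)
print(pruned.shape)  # (96, 8, 8)
```

In the real model this pruning is applied per MoE layer, together with the matching rows of the router so that routing logits stay aligned with the surviving experts.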
## Evaluation
The model is evaluated using https://github.com/EleutherAI/lm-evaluation-harness/ except for LiveCodeBench, which is evaluated using https://github.com/LiveCodeBench/LiveCodeBench.
The following versions were used for evaluation:
- python >= 3.10
- torch : 2.7.1+cu126
- lm_eval : 0.4.9.1
- vllm : 0.10.1.1
- transformers : 4.57.1
- datasets : 3.2.0
- numpy : 1.26.4
For the IFEval, AIME25, GSM8K, and HumanEval tasks, the following command was used for evaluation on 4x NVIDIA H100:

```shell
python -m lm_eval --model vllm --model_args pretrained=${model},tensor_parallel_size=4,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=131072 --tasks ${task} --batch_size 1 --apply_chat_template=True --confirm_run_unsafe_code
```
For HumanEval, we set `${task}` to `humaneval_instruct`.
For GPQA-Diamond, we set `${task}` to `gpqa_diamond_n_shot` and add the flags `--num_fewshot 5 --fewshot_as_multiturn`.
For LiveCodeBench, we evaluate using:

```shell
python -m lcb_runner.runner.main --model Qwen/Qwen3-30B-A3B-Instruct-2507 --scenario codegeneration --evaluate --local_model_path ${model} --release_version release_v6
```
For multiple-choice question answering tasks (Winogrande, ARC-C, ARC-E, BoolQ, HellaSwag, MMLU, OpenBookQA, RTE), we evaluate using lm_eval on a single GPU with a batch size of 16.
All other parameters are left at their defaults.
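The per-task variations above can be collected into a small helper. This is a sketch only: the flag spellings are copied from the commands above, while the helper name and task keys are illustrative assumptions.

```python
# Base lm_eval invocation, mirroring the command shown above.
BASE = ("python -m lm_eval --model vllm "
        "--model_args pretrained={model},tensor_parallel_size=4,dtype=auto,"
        "gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=131072 "
        "--batch_size 1 --apply_chat_template=True --confirm_run_unsafe_code")

def eval_command(model, task):
    """Assemble the command line for one benchmark (illustrative helper)."""
    cmd = BASE.format(model=model)
    if task == "humaneval":
        task = "humaneval_instruct"  # HumanEval uses the instruct task variant
    elif task == "gpqa_diamond":
        task = "gpqa_diamond_n_shot"
        cmd += " --num_fewshot 5 --fewshot_as_multiturn"  # GPQA-Diamond extras
    return f"{cmd} --tasks {task}"

print(eval_command("Qwen3-30B-A3B-Instruct-2507-REAM", "gpqa_diamond"))
```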
### Metrics
We report the metric from the first row printed by lm_eval.
For example, for IFEval, we report `inst_level_loose_acc` = 0.8921 given lm_eval's output:
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---|---|---|
| ifeval | 4 | none | 0 | inst_level_loose_acc | ↑ | 0.8921 | ± | N/A |
| | | none | 0 | inst_level_strict_acc | ↑ | 0.8585 | ± | N/A |
| | | none | 0 | prompt_level_loose_acc | ↑ | 0.8373 | ± | 0.0159 |
| | | none | 0 | prompt_level_strict_acc | ↑ | 0.7930 | ± | 0.0174 |
### Results
| Model | Winogrande | ARC-C | ARC-E | BoolQ | HellaSwag | MMLU | OpenBookQA | RTE | AVG |
|---|---|---|---|---|---|---|---|---|---|
| Qwen3-30B-A3B-Instruct-2507 | 73.2 | 60.7 | 85.1 | 88.7 | 61.2 | 80.1 | 32.4 | 76.5 | 69.7 |
| Qwen3-30B-A3B-Instruct-2507-REAM | 71.8 | 51.9 | 79.1 | 88.5 | 57.6 | 70.1 | 30.0 | 77.6 | 65.8 |
| Model | IFEval | AIME25 | GSM8K | GPQA-D | HumanEval | LiveCodeBench | AVG |
|---|---|---|---|---|---|---|---|
| Qwen3-30B-A3B-Instruct-2507 | 90.4 | 56.7 | 89.3 | 47.0 | 93.3 | 48.6 | 70.9 |
| Qwen3-30B-A3B-Instruct-2507-REAM | 89.2 | 66.7 | 88.1 | 38.9 | 86.6 | 36.9 | 67.7 |
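The retention claim in the introduction can be checked directly from the AVG columns above:

```python
# Average scores copied from the two results tables above.
base_qa, ream_qa = 69.7, 65.8    # multiple-choice QA average
base_gen, ream_gen = 70.9, 67.7  # instruction/math/code average

for name, base, ream in [("QA", base_qa, ream_qa),
                         ("gen", base_gen, ream_gen)]:
    retention = 100 * ream / base
    print(f"{name}: {retention:.1f}% of original performance retained")
# QA: 94.4%, gen: 95.5% -- both consistent with the >=94% claim.
```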
## License
Please refer to the license of the original model, Qwen/Qwen3-30B-A3B-Instruct-2507.