DeepCoder Experiment Runs
Command Snippets Overview
| Condition | Purpose | Variables |
|---|---|---|
| GRPO top-p 1.0 | Full-retention baseline under linear top-p filtering | `rollout_filter_strategy=top_p`, `rollout_filter_value=1`, `rollout_filter_include_zero=True` |
| GRPO top-p 0.9 | Stronger reward-variance filtering with adaptive retention | `rollout_filter_strategy=top_p`, `rollout_filter_value=0.9`, `rollout_filter_include_zero=False` |
| GRPO top-k 0.25 | Fixed-budget filtering that keeps the top 25% of train groups | `rollout_filter_strategy=top_k`, `rollout_filter_value=0.25`, `rollout_filter_include_zero=True` |
All three command snippets run DeepCoder with Qwen/Qwen2.5-Coder-7B, GRPO, and single-turn code generation.
1. Top-p 1.0 (Qwen/Qwen2.5-Coder-7B-200-GRPO-top-p-1)
Uses linear top-p filtering with `rollout_filter_value=1`.
Goal:
- Establish a full-retention baseline while keeping the same reward-variance ranking machinery as the filtered runs
Key Details:
- Filtering uses `top_p`, `largest`, `reward_variance`, and `rollout_filter_top_p_prob_mode=linear`
- `rollout_filter_value=1` with `rollout_filter_include_zero=True` keeps the full train-group pool under linear top-p selection, so this is the closest thing to a no-filter baseline in `deepcoder_lines`
- `actor_rollout_ref.actor.use_ref=False` and `actor_rollout_ref.actor.use_kl_loss=False` remove reference-policy KL from training
- The run budget is `200` training steps with checkpoints every `20` steps
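As a sanity check on the full-retention claim, the sketch below counts how many groups survive linear top-p filtering over reward-variance scores. The helper `linear_top_p_keep_count` is hypothetical and illustrates only one plausible reading of `rollout_filter_top_p_prob_mode=linear`, not the repository's implementation:

```python
def linear_top_p_keep_count(variances, p):
    """Count groups surviving linear top-p filtering over reward-variance
    scores (illustrative sketch only, not the actual repo code)."""
    total = sum(variances)
    ranked = sorted(variances, reverse=True)  # rollout_filter_type=largest
    mass, kept = 0.0, 0
    for v in ranked:
        kept += 1
        mass += v / total  # linear mode: scores normalized by their sum
        if mass >= p:
            break
    return kept

# With p=1, the cumulative mass only reaches 1 once every group is in,
# so all 16 train groups are kept and nothing is filtered:
print(linear_top_p_keep_count([0.1 * (i + 1) for i in range(16)], p=1.0))  # → 16
```

Under this reading, `rollout_filter_value=1` turns the filter into a pass-through while exercising the same ranking code path as the filtered runs.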
Source:
`docs/deepcoder_lines`, lines 1-89
Outputs:
- W&B project: `deepcoder_RAGEN_final_3`
- Run name: `Qwen/Qwen2.5-Coder-7B-200-GRPO-top-p-1`
2. Top-p 0.9 (Qwen/Qwen2.5-Coder-7B-200-GRPO-top-p-0.9)
Uses the same linear top-p filter, but keeps only the highest-variance groups whose score mass reaches 0.9.
Goal:
- Increase reward-variance filtering strength while keeping the rest of the GRPO setup fixed
Key Details:
- Filtering again uses `top_p`, `largest`, `reward_variance`, and `rollout_filter_top_p_prob_mode=linear`
- `rollout_filter_value=0.9` makes retention adaptive: the number of kept groups depends on how reward variance is distributed across the `16` train groups
- `rollout_filter_include_zero=False` excludes zero-variance groups from selection
- Because `rollout_filter_type=largest`, the filter prioritizes groups with the highest within-group reward variance
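The adaptive behaviour can be sketched as follows; `select_groups_top_p` is a hypothetical helper showing one plausible reading of linear top-p selection over reward-variance scores, not the repository's code:

```python
def select_groups_top_p(variances, p=0.9, include_zero=False):
    """Keep the highest-variance groups whose linearly normalized score
    mass reaches p (illustrative sketch only, not the actual repo code)."""
    # rollout_filter_include_zero=False drops zero-variance groups up front
    pool = [(i, v) for i, v in enumerate(variances) if include_zero or v > 0]
    total = sum(v for _, v in pool)
    if total == 0:
        return []
    pool.sort(key=lambda iv: iv[1], reverse=True)  # rollout_filter_type=largest
    kept, mass = [], 0.0
    for i, v in pool:
        kept.append(i)
        mass += v / total  # linear mode: scores normalized by their sum
        if mass >= p:
            break
    return kept

# Retention adapts to the spread: a skewed distribution keeps fewer groups
print(select_groups_top_p([8.0, 1.0, 0.5, 0.5], p=0.9))  # → [0, 1]
print(select_groups_top_p([1.0, 1.0, 1.0, 1.0], p=0.9))  # → [0, 1, 2, 3]
```

This is the sense in which top-p retention is adaptive: unlike top-k, the kept count is not fixed per step but follows the shape of the variance distribution.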
Source:
`docs/deepcoder_lines`, lines 93-181
Outputs:
- W&B project: `deepcoder_RAGEN_final_3`
- Run name: `Qwen/Qwen2.5-Coder-7B-200-GRPO-top-p-0.9`
3. Top-k 0.25 (Qwen/Qwen2.5-Coder-7B-200-GRPO-top-k-0.25)
Switches from adaptive top-p filtering to fixed-fraction top-k filtering.
Goal:
- Compare adaptive top-p filtering against a fixed keep-top-25% regime
Key Details:
- `rollout_filter_strategy=top_k` with `rollout_filter_value=0.25` keeps `int(0.25 * 16) = 4` train groups per step
- With `es_manager.train.group_size=8`, this corresponds to at most `32` kept rollouts per training step after filtering
- `rollout_filter_include_zero=True` means zero-variance groups are still part of the ranking pool, but only the top `4` groups survive
- `rollout_filter_type=largest` means those `4` groups are chosen by highest reward variance
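The fixed-budget arithmetic above can be checked with a small sketch; `select_groups_top_k` is a hypothetical helper, not the repository's code:

```python
def select_groups_top_k(variances, k_frac, group_size):
    """Keep a fixed fraction of train groups ranked by reward variance
    (illustrative sketch only, not the actual repo code)."""
    k = int(k_frac * len(variances))  # int(0.25 * 16) = 4
    # rollout_filter_type=largest: highest-variance groups survive
    order = sorted(range(len(variances)), key=lambda i: variances[i], reverse=True)
    kept = order[:k]
    return kept, len(kept) * group_size  # rollouts left after filtering

# 16 mock per-group reward variances, with es_manager.train.group_size=8
variances = [0.02 * i for i in range(16)]
kept, n_rollouts = select_groups_top_k(variances, k_frac=0.25, group_size=8)
print(len(kept), n_rollouts)  # → 4 32
```

Unlike the top-p runs, the kept count here is constant at 4 groups (32 rollouts) regardless of how the variance is distributed.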
Source:
`docs/deepcoder_lines`, lines 185-273
Outputs:
- W&B project: `deepcoder_RAGEN_final_3`
- Run name: `Qwen/Qwen2.5-Coder-7B-200-GRPO-top-k-0.25`
Common Notes
- Source format: `docs/deepcoder_lines` is a collection of three standalone bash snippets, not a parameterized sweep script
- The file defines both `USE_GRPO` and `USE_PPO`, but all three `python train.py` commands actually expand `$USE_GRPO`
- Shared setup across all three conditions:
  - Config: `_10_deepcoder`
  - Model: `Qwen/Qwen2.5-Coder-7B`
  - `algorithm.adv_estimator=grpo`
  - `agent_proxy.reward_normalization.method=identity`
  - `trainer.total_training_steps=200`
  - `ppo_mini_batch_size=32`, `micro_batch_size_per_gpu=1`
  - `es_manager.train.env_groups=16`, `es_manager.train.group_size=8`
  - `es_manager.val.env_groups=256`, `es_manager.val.group_size=1`
  - `trainer.n_gpus_per_node=8`, `system.CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"`
  - `actor_rollout_ref.rollout.tensor_model_parallel_size=4`
  - `agent_proxy.max_turn=1`
  - `actor_rollout_ref.actor.use_ref=False`
  - `actor_rollout_ref.rollout.rollout_filter_type=largest`
  - `actor_rollout_ref.rollout.rollout_filter_metric=reward_variance` by default from `config/base.yaml`
  - `actor_rollout_ref.rollout.rollout_filter_top_p_prob_mode=linear`
  - `actor_rollout_ref.rollout.rollout_filter_empty_stop_steps=0`
  - `actor_rollout_ref.rollout.max_model_len=10000`, `actor_rollout_ref.rollout.max_num_batched_tokens=10000`
  - `actor_rollout_ref.rollout.response_length=4000`
  - `agent_proxy.fail_on_prompt_too_long=True`
  - `lora.rank=0`, `lora.alpha=64`, `lora.target_modules=all-linear`
  - `actor_rollout_ref.rollout.gpu_memory_utilization=0.6`
  - `trainer.save_freq=20`
  - `trainer.validation_steps=1`, `trainer.val_before_train=True`, `trainer.test_freq=10`
  - `collapse_detection.first_turn_enabled=False`, `collapse_detection.multi_turn_enabled=False`
  - `trainer.resume_mode=disable`
- Logging and artifacts:
  - Default local log dir remains `results/`
  - Default logger remains `['console', 'wandb']`
  - Checkpoints are saved every `20` steps