| task | difficulty | category |
|---|---|---|
Evaluate models Qwen/Qwen3-4B-Instruct-2507 on benchmarks Idavidrein/gpqa | Easy | Evaluation |
Evaluate models Qwen/Qwen3-4B-Instruct-2507 on benchmarks HuggingFaceH4/MATH-500 | Easy | Evaluation |
Evaluate models Qwen/Qwen3-4B-Instruct-2507 on benchmarks lighteval/SimpleQA | Easy | Evaluation |
Evaluate models Qwen/Qwen3-4B-Instruct-2507 on benchmarks TIGER-Lab/MMLU-Pro | Easy | Evaluation |
Train models Qwen/Qwen3-4B-Instruct-2507 on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks Idavidrein/gpqa | Medium | Training |
Train models Qwen/Qwen3-4B-Instruct-2507 on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks lighteval/SimpleQA | Medium | Training |
Train models Qwen/Qwen3-4B-Instruct-2507 on datasets HuggingFaceH4/multi_turn_if evaluating them on benchmarks TIGER-Lab/MMLU-Pro | Medium | Training |
Train models Qwen/Qwen3-4B-Instruct-2507 on datasets HuggingFaceH4/ultrachat_200k evaluating them on benchmarks TIGER-Lab/MMLU-Pro | Medium | Training |
Train models openai/gpt-oss-20b on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks Idavidrein/gpqa | Medium | Training |
Train models openai/gpt-oss-20b on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks lighteval/SimpleQA | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/multi_turn_if evaluating them on benchmarks Idavidrein/gpqa | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/ultrachat_200k evaluating them on benchmarks Idavidrein/gpqa | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks Idavidrein/gpqa | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/multi_turn_if evaluating them on benchmarks HuggingFaceH4/MATH-500 | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/ultrachat_200k evaluating them on benchmarks HuggingFaceH4/MATH-500 | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks HuggingFaceH4/MATH-500 | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/multi_turn_if evaluating them on benchmarks lighteval/SimpleQA | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/ultrachat_200k evaluating them on benchmarks lighteval/SimpleQA | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks lighteval/SimpleQA | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/multi_turn_if evaluating them on benchmarks TIGER-Lab/MMLU-Pro | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/ultrachat_200k evaluating them on benchmarks TIGER-Lab/MMLU-Pro | Medium | Training |
Train models deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B on datasets HuggingFaceH4/AceReason-1.1-SFT config: math_no_think evaluating them on benchmarks TIGER-Lab/MMLU-Pro | Medium | Training |
Run an ablation for hyperparameter learning_rate for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/multi_turn_if | Hard | Ablation |
Run an ablation for hyperparameter learning_rate for model openai/gpt-oss-20b on dataset HuggingFaceH4/multi_turn_if | Hard | Ablation |
Run an ablation for hyperparameter learning_rate for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/ultrachat_200k | Hard | Ablation |
Run an ablation for hyperparameter learning_rate for model openai/gpt-oss-20b on dataset HuggingFaceH4/ultrachat_200k | Hard | Ablation |
Run an ablation for hyperparameter learning_rate for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Run an ablation for hyperparameter learning_rate for model gpt-4o-mini on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Run an ablation for hyperparameter learning_rate for model openai/gpt-oss-20b on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Run an ablation for hyperparameter batch_size for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/multi_turn_if | Hard | Ablation |
Run an ablation for hyperparameter batch_size for model openai/gpt-oss-20b on dataset HuggingFaceH4/multi_turn_if | Hard | Ablation |
Run an ablation for hyperparameter batch_size for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/ultrachat_200k | Hard | Ablation |
Run an ablation for hyperparameter batch_size for model openai/gpt-oss-20b on dataset HuggingFaceH4/ultrachat_200k | Hard | Ablation |
Run an ablation for hyperparameter batch_size for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Run an ablation for hyperparameter batch_size for model openai/gpt-oss-20b on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Run an ablation for hyperparameter num_epochs for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/multi_turn_if | Hard | Ablation |
Run an ablation for hyperparameter num_epochs for model openai/gpt-oss-20b on dataset HuggingFaceH4/multi_turn_if | Hard | Ablation |
Run an ablation for hyperparameter num_epochs for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/ultrachat_200k | Hard | Ablation |
Run an ablation for hyperparameter num_epochs for model openai/gpt-oss-20b on dataset HuggingFaceH4/ultrachat_200k | Hard | Ablation |
Run an ablation for hyperparameter num_epochs for model Qwen/Qwen3-4B-Instruct-2507 on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Run an ablation for hyperparameter num_epochs for model openai/gpt-oss-20b on dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think | Hard | Ablation |
Generate completions with model Qwen/Qwen3-4B-Instruct-2507 on benchmarks Idavidrein/gpqa using engine vllm | Medium | Generation |
Generate completions with model Qwen/Qwen3-4B-Instruct-2507 on benchmarks HuggingFaceH4/MATH-500 using engine vllm | Medium | Generation |
Generate completions with model Qwen/Qwen3-4B-Instruct-2507 on benchmarks Idavidrein/gpqa using engine sglang | Medium | Generation |
Generate completions with model Qwen/Qwen3-4B-Instruct-2507 on benchmarks HuggingFaceH4/MATH-500 using engine sglang | Medium | Generation |
Decontaminate dataset HuggingFaceH4/multi_turn_if against benchmarks Idavidrein/gpqa | Hard | Data Processing |
Decontaminate dataset HuggingFaceH4/multi_turn_if against benchmarks HuggingFaceH4/MATH-500 | Hard | Data Processing |
Decontaminate dataset HuggingFaceH4/ultrachat_200k against benchmarks Idavidrein/gpqa | Hard | Data Processing |
Decontaminate dataset HuggingFaceH4/ultrachat_200k against benchmarks HuggingFaceH4/MATH-500 | Hard | Data Processing |
Decontaminate dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think against benchmarks Idavidrein/gpqa | Hard | Data Processing |
Decontaminate dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think against benchmarks HuggingFaceH4/MATH-500 | Hard | Data Processing |
Format dataset HuggingFaceH4/multi_turn_if for compatibility with framework trl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/multi_turn_if for compatibility with framework trl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/multi_turn_if for compatibility with framework axolotl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/multi_turn_if for compatibility with framework axolotl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/multi_turn_if for compatibility with framework verl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/multi_turn_if for compatibility with framework verl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/ultrachat_200k for compatibility with framework trl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/ultrachat_200k for compatibility with framework trl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/ultrachat_200k for compatibility with framework axolotl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/ultrachat_200k for compatibility with framework axolotl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/ultrachat_200k for compatibility with framework verl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/ultrachat_200k for compatibility with framework verl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think for compatibility with framework trl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think for compatibility with framework trl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think for compatibility with framework axolotl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think for compatibility with framework axolotl on task GRPO | Easy | Data Formatting |
Format dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think for compatibility with framework verl on task SFT | Easy | Data Formatting |
Format dataset HuggingFaceH4/AceReason-1.1-SFT config: math_no_think for compatibility with framework verl on task GRPO | Easy | Data Formatting |
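The Data Formatting tasks above ask for conversion into a framework's expected schema. As a minimal sketch of one such task (TRL on SFT), the snippet below maps a raw prompt/response record into the conversational "messages" layout that TRL's `SFTTrainer` accepts; the field names `prompt` and `response` are assumptions for illustration, since the actual column names vary across the listed datasets.

```python
# Hypothetical sketch: reshape a raw {"prompt": ..., "response": ...} row
# into TRL's conversational "messages" schema for SFT.
# The input field names are assumed; adapt them to the real dataset columns.
def to_trl_sft(example):
    """Return a dict with a "messages" list of role/content turns."""
    return {
        "messages": [
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["response"]},
        ]
    }

row = {"prompt": "What is 2 + 2?", "response": "4"}
print(to_trl_sft(row))
```

In practice this mapping would be applied over the whole dataset (e.g. with `datasets.Dataset.map`), and multi-turn sources such as HuggingFaceH4/multi_turn_if would emit more than two turns per row.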
Dataset statistics:
- Downloads last month: 21
- Size of downloaded dataset files: 3.65 kB
- Size of the auto-converted Parquet files: 3.65 kB
- Number of rows: 69