Self-Training Elicits Concise Reasoning in Large Language Models
Paper: arXiv:2502.20122
This model is fine-tuned via self-training to generate concise reasoning paths while maintaining accuracy on reasoning tasks.
Use the code below to get started with the model.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "tergel/qwen2.5-3b-instruct-math-fs-gpt4o-bon"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer and model in bfloat16 on the available device
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, torch_dtype=torch.bfloat16)

question = "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates. Enter your answer in the form $(r,\\theta),$ where $r > 0$ and $0 \\le \\theta < 2 \\pi.$"

# Tokenize the question and record the prompt length so it can be stripped from the output
inputs = tokenizer(question, return_tensors="pt").to(device)
input_length = inputs["input_ids"].shape[1]

# Generate, then decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True)
print(response)
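Since the point of the fine-tuning is shorter reasoning at comparable accuracy, you may want to compare generation lengths against the base checkpoint. A minimal sketch, assuming the base model is Qwen/Qwen2.5-3B-Instruct (inferred from this model's name, not confirmed here) and reusing question, device, outputs, and input_length from the snippet above:

# Minimal sketch: compare how many new tokens each model generates for the same question.
# Assumption: Qwen/Qwen2.5-3B-Instruct is the base checkpoint (inferred from the model name).
base_name = "Qwen/Qwen2.5-3B-Instruct"
base_tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name, device_map=device, torch_dtype=torch.bfloat16)

base_inputs = base_tokenizer(question, return_tensors="pt").to(device)
base_outputs = base_model.generate(**base_inputs, max_new_tokens=512)
base_len = base_outputs.shape[1] - base_inputs["input_ids"].shape[1]

tuned_len = outputs.shape[1] - input_length
print(f"base: {base_len} new tokens, fine-tuned: {tuned_len} new tokens")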
For more detailed information about training methods, evaluation results, limitations, and technical specifications, please refer to our paper.
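As a rough, hypothetical illustration of the data-collection step: the bon suffix in the model name suggests best-of-N sampling, where several reasoning paths are sampled per question and the shortest correct one is kept as a fine-tuning target. The helpers below (sample_paths, is_correct) are placeholders for exposition, not part of any released code; see the paper for the actual procedure.

def select_concise_path(question, answer, sample_paths, is_correct, n=8):
    """Keep the shortest sampled reasoning path that reaches the correct answer.

    sample_paths(question, n) -> list of n candidate reasoning strings (placeholder)
    is_correct(path, answer)  -> bool, checks the final answer (placeholder)
    """
    candidates = sample_paths(question, n)
    correct = [p for p in candidates if is_correct(p, answer)]
    if not correct:
        return None  # no correct path sampled; the question is skipped
    return min(correct, key=len)  # shortest correct path becomes a training target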
@article{munkhbat2025self,
  title={Self-Training Elicits Concise Reasoning in Large Language Models},
  author={Munkhbat, Tergel and Ho, Namgyu and Kim, Seohyun and Yang, Yongjin and Kim, Yujin and Yun, Se-Young},
  journal={arXiv preprint arXiv:2502.20122},
  year={2025}
}