---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
language:
- en
- zh
license: cc-by-nc-nd-4.0
tags:
- instruction-finetuning
- reasoning
- evaluation
library_name: transformers
pipeline_tag: text-generation
inference: false
---

# 🔍 xVerify-3B-Ia

[GitHub](https://github.com/IAAR-Shanghai/xVerify) | [Hugging Face](https://huggingface.co/IAAR-Shanghai/xVerify-3B-Ia) | [Paper](https://huggingface.co/papers/2504.10481)

xVerify is an evaluation tool fine-tuned from a pre-trained large language model, designed specifically for objective questions with a single correct answer. It accurately extracts the final answer from lengthy reasoning processes and efficiently identifies equivalence across different forms of expression. The model was introduced in the paper [xVerify: Efficient Answer Verifier for Reasoning Model Evaluations](https://huggingface.co/papers/2504.10481).

---

## ✨ Key Features

### 📊 Broad Applicability

Suitable for a wide range of objective-question evaluation scenarios, including math problems, multiple-choice questions, classification tasks, and short-answer questions.

### ⛓️ Handles Long Reasoning Chains

Reliably extracts the final answer from responses with extensive reasoning steps, regardless of their length or complexity.

### 🌐 Multilingual Support

Primarily handles Chinese and English responses while remaining compatible with other languages.

### 🔄 Powerful Equivalence Judgment

- ✓ Recognizes basic transformations such as letter-case changes and Greek-letter conversions
- ✓ Identifies equivalent mathematical expressions across formats (LaTeX, fractions, scientific notation)
- ✓ Determines semantic equivalence between natural-language answers
- ✓ Matches multiple-choice responses by content rather than just the option identifier

---

## 🚀 Usage

For detailed instructions on installation and batch evaluation with the `xVerify` framework, please refer to the [official GitHub repository](https://github.com/IAAR-Shanghai/xVerify).
Since this is a Llama-based model, you can also use it directly with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IAAR-Shanghai/xVerify-3B-Ia"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Your input prompt logic here
```

---

## 📚 Citation

```bibtex
@article{xVerify,
  title={xVerify: Efficient Answer Verifier for Reasoning Model Evaluations},
  author={Ding Chen and Qingchen Yu and Pengyuan Wang and Wentao Zhang and Bo Tang and Feiyu Xiong and Xinchi Li and Minchuan Yang and Zhiyu Li},
  journal={arXiv preprint arXiv:2504.10481},
  year={2025}
}
```
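To make the `transformers` snippet above concrete, here is a minimal end-to-end sketch of a verification call. Note the assumptions: `build_prompt` is an illustrative helper with a hypothetical plain-English prompt, not the official template used by the xVerify framework (for the real prompt format, use the GitHub repository). Since the base model is Llama-3.2-3B-Instruct, the prompt is wrapped in the tokenizer's chat template.

```python
def build_prompt(question: str, model_output: str, correct_answer: str) -> str:
    """Assemble a verification request.

    NOTE: this is a hypothetical, illustrative prompt; the official
    template is defined in the xVerify GitHub repository.
    """
    return (
        "You are a verifier for objective questions.\n"
        f"Question: {question}\n"
        f"Model response: {model_output}\n"
        f"Reference answer: {correct_answer}\n"
        "Does the final answer in the response match the reference answer? "
        "Reply with 'Correct' or 'Incorrect'."
    )


def verify(question, model_output, correct_answer, model, tokenizer):
    # Wrap the request in the chat template, since the base model
    # (Llama-3.2-3B-Instruct) is instruction-tuned.
    messages = [
        {"role": "user",
         "content": build_prompt(question, model_output, correct_answer)}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=16, do_sample=False)
    # Decode only the newly generated tokens.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "IAAR-Shanghai/xVerify-3B-Ia"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # An equivalence case: "0.5" should match the reference "1/2".
    print(verify("What is 1/2 as a decimal?",
                 "The answer is 0.5.", "1/2", model, tokenizer))
```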