---
base_model:
  - google/gemma-2-27b-it
language:
  - en
  - zh
license: cc-by-nc-nd-4.0
tags:
  - instruction-finetuning
inference: false
pipeline_tag: text-generation
library_name: transformers
papers:
  - 2504.10481
---

πŸ” xVerify-27B-I

xVerify is an answer verifier fine-tuned from a pre-trained large language model (this checkpoint is based on google/gemma-2-27b-it), designed specifically for objective questions with a single correct answer. It is presented in the paper [xVerify: Efficient Answer Verifier for Reasoning Model Evaluations](https://arxiv.org/abs/2504.10481). It accurately extracts the final answer from lengthy reasoning processes and efficiently judges equivalence across different forms of expression.


## ✨ Key Features

### 📊 Broad Applicability

Suitable for a wide range of objective-question evaluation scenarios, including math problems, multiple-choice questions, classification tasks, and short-answer questions.

⛓️ Handles Long Reasoning Chains

Reliably extracts the final answer from responses containing extensive reasoning steps, regardless of their length or complexity.

🌐 Multilingual Support

Primarily handles Chinese and English responses while remaining compatible with other languages.

### 🔄 Powerful Equivalence Judgment

- ✓ Recognizes basic transformations like letter case changes and Greek letter conversions
- ✓ Identifies equivalent mathematical expressions across formats (LaTeX, fractions, scientific notation)
- ✓ Determines semantic equivalence in natural language answers
- ✓ Matches multiple-choice responses by content rather than just option identifiers (see the illustrative pairs below)
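To make these categories concrete, the snippet below lists a few made-up answer pairs (illustrative only, not taken from xVerify's training data) of the kind the verifier is expected to judge as equivalent:

```python
# Made-up answer pairs illustrating the equivalence classes listed above.
# The first element plays the role of the extracted answer, the second the reference.
equivalent_pairs = [
    ("a", "A"),                                     # letter case change
    ("α", "alpha"),                                 # Greek letter conversion
    (r"\frac{1}{2}", "0.5"),                        # LaTeX fraction vs. decimal
    ("3e8 m/s", "3 × 10^8 m/s"),                    # scientific-notation variants
    ("B", "Option B: Paris"),                       # multiple choice matched by content
    ("The capital of France is Paris.", "Paris"),   # semantic equivalence in natural language
]

for predicted, reference in equivalent_pairs:
    print(f"{predicted!r}  ≡  {reference!r}")
```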

## 🚀 Get Started

For the full implementation, data preparation, and evaluation scripts, please visit the official GitHub repository.
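As a minimal sketch, the checkpoint can be loaded like any other Transformers causal LM. Note that the model id and the prompt layout below are assumptions for illustration only; the repository's scripts define the exact prompt template the model was fine-tuned on.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model id -- replace with the actual Hugging Face path of this checkpoint.
model_id = "IAAR-Shanghai/xVerify-27B-I"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Assumed judgment prompt: the question, the evaluated model's full response,
# and the reference answer; the verifier is expected to return a correct/incorrect verdict.
prompt = (
    "Question: What is 3/6 expressed as a decimal?\n"
    "LLM response: 3/6 simplifies to 1/2, so the answer is 0.5.\n"
    "Correct answer: 1/2\n"
    "Judge whether the LLM's final answer is correct."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```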


## 📚 Citation

```bibtex
@article{xVerify,
  title={xVerify: Efficient Answer Verifier for Reasoning Model Evaluations},
  author={Ding Chen and Qingchen Yu and Pengyuan Wang and Wentao Zhang and Bo Tang and Feiyu Xiong and Xinchi Li and Minchuan Yang and Zhiyu Li},
  journal={arXiv preprint arXiv:2504.10481},
  year={2025}
}
```