Add pipeline_tag, library_name, paper link and sample usage #1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,16 +1,19 @@
 ---
-
+base_model:
+- meta-llama/Llama-3.2-1B-Instruct
 language:
 - en
 - zh
+license: cc-by-nc-nd-4.0
 tags:
 - instruction-finetuning
-
-
-
-
-
+inference: false
+pipeline_tag: text-generation
+library_name: transformers
+datasets:
+- IAAR-Shanghai/VAR
 ---
+
 <h1 align="center">
 xVerify-1B-I
 </h1>
@@ -23,10 +26,16 @@ license: cc-by-nc-nd-4.0
 <a href="https://huggingface.co/IAAR-Shanghai/xVerify-1B-I">
 <img src="https://img.shields.io/badge/🤗%20Hugging%20Face-xVerify--1B--I-yellow" alt="Hugging Face"/>
 </a>
+<a href="https://huggingface.co/papers/2504.10481">
+<img src="https://img.shields.io/badge/Paper-arXiv-red?logo=arxiv" alt="Paper"/>
+</a>
 </div>
 </p>
+
 xVerify is an evaluation tool fine-tuned from a pre-trained large language model, designed specifically for objective questions with a single correct answer. It accurately extracts the final answer from lengthy reasoning processes and efficiently identifies equivalence across different forms of expressions.

+The model was presented in the paper [xVerify: Efficient Answer Verifier for Reasoning Model Evaluations](https://huggingface.co/papers/2504.10481).
+
 ---

 ## ✨ Key Features
@@ -48,6 +57,45 @@ Primarily handles Chinese and English responses while remaining compatible with

 ---

+## Sample Usage
+
+According to the [official repository](https://github.com/IAAR-Shanghai/xVerify), you can use the model for evaluation as follows:
+
+```python
+# Single sample evaluation test
+from src.xVerify.model import Model
+from src.xVerify.eval import Evaluator
+
+# initialization
+model_name = 'xVerify-1B-I' # Model name
+url = 'IAAR-Shanghai/xVerify-1B-I' # Model path or URL
+inference_mode = 'local' # Inference mode, 'local' or 'api'
+api_key = None # API key used to access the model via API, if not available, set to None
+
+model = Model(
+    model_name=model_name,
+    model_path_or_url=url,
+    inference_mode=inference_mode,
+    api_key=api_key
+)
+evaluator = Evaluator(model=model)
+
+# input evaluation information
+question = "New steel giant includes Lackawanna site A major change is coming to the global steel industry and a galvanized mill in Lackawanna that formerly belonged to Bethlehem Steel Corp.
+Classify the topic of the above sentence as World, Sports, Business, or Sci/Tech."
+llm_output = "The answer is Business."
+correct_answer = "Business"
+
+# evaluation
+result = evaluator.single_evaluate(
+    question=question,
+    llm_output=llm_output,
+    correct_answer=correct_answer
+)
+print(result)
+```
+
+---

 ## Citation

@@ -58,5 +106,4 @@
 journal={arXiv preprint arXiv:2504.10481},
 year={2025},
 }
-```
-
+```
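Since this change adds `library_name: transformers` and `pipeline_tag: text-generation` to the metadata, the checkpoint can also be loaded directly with the Transformers library. The sketch below is a minimal illustration of that path, assuming the standard causal-LM interface inherited from the `meta-llama/Llama-3.2-1B-Instruct` base model; the judge-style prompt in it is a placeholder, not the official xVerify template, so for faithful evaluation results prefer the `src.xVerify` evaluator shown in the diff above.

```python
# Minimal sketch: load xVerify-1B-I directly with Transformers.
# Assumes the standard causal-LM interface of the Llama-3.2-1B-Instruct base model;
# the prompt below is illustrative only, not the official xVerify judge template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IAAR-Shanghai/xVerify-1B-I"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical judge-style input: question, model output, and reference answer.
prompt = (
    "Question: Classify the topic of the sentence as World, Sports, Business, or Sci/Tech.\n"
    "Model output: The answer is Business.\n"
    "Reference answer: Business\n"
    "Is the model output correct?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```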
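One practical note on the added sample: the `question` value spans two physical lines in the README, which a plain double-quoted Python string does not allow. If you copy the snippet out to run it, joining the text into a single string (implicit concatenation keeps line lengths manageable) is one way to make it valid Python; the wording below is taken from the sample, with the line break replaced by a space.

```python
# The question from the sample, joined into one valid Python string
# via implicit string concatenation (wording unchanged).
question = (
    "New steel giant includes Lackawanna site A major change is coming to the global "
    "steel industry and a galvanized mill in Lackawanna that formerly belonged to "
    "Bethlehem Steel Corp. "
    "Classify the topic of the above sentence as World, Sports, Business, or Sci/Tech."
)
```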