# KamilHugsFaces/t5-gemma-2-reasoning-query-match-v4

A fine-tuned T5Gemma 2 model for entailment classification.
## Training Details
- Base model: google/t5gemma-2-4b-4b
- Training variant: query_match_v1
- Epochs: 3
- Batch size: 1
- Learning rate: 1e-05
- Run name: query_match_v1_20260114_231334
## Training Data
- Training examples: 215
- Validation examples: 46
- Test examples: 49
- Class weights: {'true': 1.0, 'false': 4.0}
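The class weights above upweight the minority "false" class by 4x relative to "true". As an illustration only (the author's actual training loop is not shown in this card), per-example loss weighting with these weights might look like:

```python
# Class weights taken from the card; the weighting scheme itself is a
# hypothetical sketch, not the author's exact training code.
CLASS_WEIGHTS = {"true": 1.0, "false": 4.0}

def weighted_mean_loss(per_example_losses, labels, weights=CLASS_WEIGHTS):
    """Scale each example's loss by its label's class weight, then average."""
    total = sum(loss * weights[label]
                for loss, label in zip(per_example_losses, labels))
    return total / len(labels)
```

With equal raw losses, a "false" example contributes four times as much to the gradient as a "true" example, which is a common way to counter class imbalance in a 215-example training set.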
## Evaluation Results

### Test Set Performance
- F1 Score: 0.7424
- F1 (False class): 0.4615
- Accuracy: 0.7143
- Precision (False): 0.3529
- Recall (False): 0.6667
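The per-class numbers above (precision, recall, F1 for the "false" class) follow the standard one-vs-rest definitions. A minimal sketch of how they can be computed from prediction lists (illustrative only, not the author's evaluation code):

```python
def prf_for_class(y_true, y_pred, cls="false"):
    """Precision, recall, and F1 for a single class, one-vs-rest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

The low "false" precision (0.35) with higher recall (0.67) is consistent with the 4x class weighting: the model is pushed to catch "false" cases at the cost of more false alarms.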
## Usage

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("KamilHugsFaces/t5-gemma-2-reasoning-query-match-v4")
tokenizer = AutoTokenizer.from_pretrained("KamilHugsFaces/t5-gemma-2-reasoning-query-match-v4")

# Format the input with the "entailment:" task prefix
input_text = "entailment: [Your claim and evidence here]"
inputs = tokenizer(input_text, return_tensors="pt", max_length=250, truncation=True)

# Generate the prediction
outputs = model.generate(**inputs, max_new_tokens=8)
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Output: "true" or "false"
```
## Training Configuration

```json
{
  "variant_name": "query_match_v1",
  "run_name": "query_match_v1_20260114_231334",
  "num_epochs": 3,
  "batch_size": 1,
  "learning_rate": 1e-05,
  "warmup_steps": 100,
  "model_name": "google/t5gemma-2-4b-4b",
  "class_weights": {
    "true": 1.0,
    "false": 4.0
  },
  "use_confidence_weighting": true,
  "confidence_weight_alpha": 2,
  "train_size": 215,
  "val_size": 46,
  "test_size": 49
}
```
## Framework
- Transformers: 5.0.0.dev0
- PyTorch: 2.9.1+cu128
- Trained on: Modal (A100 GPU)