# distilbert-base-uncased-finetuned-ner-combined-v1
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2614
- Precision: 0.7181
- Recall: 0.7404
- F1: 0.7291
- Accuracy: 0.9201
## Model description
More information needed
## Intended uses & limitations
More information needed
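Pending fuller documentation, the model can be loaded as a standard token-classification checkpoint. The sketch below is a minimal example, assuming the checkpoint is published on the Hub under the repo id used for this card; the entity labels it emits depend on the undocumented fine-tuning data.

```python
# Minimal inference sketch. The repo id matches this model card; the tag set
# is not documented here, so inspect the output labels before relying on them.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grazh/distilbert-base-uncased-finetuned-ner-combined-v1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```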
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
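A rough mapping of these hyperparameters onto `transformers.TrainingArguments` is sketched below. The label count and datasets are placeholders, since the training data is not documented in this card; betas=(0.9, 0.999) and epsilon=1e-08 are the PyTorch AdamW defaults, so they need no explicit arguments.

```python
# Sketch reconstructing the training setup from the hyperparameters above.
# The label count and datasets are hypothetical; the card calls the data "unknown".
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

base = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=9)  # placeholder label count

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner-combined-v1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",   # fused AdamW; betas and epsilon left at defaults
    lr_scheduler_type="linear",
    num_train_epochs=40,
    eval_strategy="epoch",
)

# Training itself needs the (undocumented) token-classification datasets:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset,  # hypothetical
#                   eval_dataset=eval_dataset,    # hypothetical
#                   processing_class=tokenizer)
# trainer.train()
```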
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| No log | 1.0 | 240 | 0.3337 | 0.6076 | 0.6251 | 0.6162 | 0.8959 |
| No log | 2.0 | 480 | 0.2803 | 0.6819 | 0.6935 | 0.6876 | 0.9116 |
| 0.3943 | 3.0 | 720 | 0.2579 | 0.7206 | 0.7040 | 0.7122 | 0.9172 |
| 0.3943 | 4.0 | 960 | 0.2523 | 0.7225 | 0.7149 | 0.7187 | 0.9174 |
| 0.2108 | 5.0 | 1200 | 0.2556 | 0.7153 | 0.7288 | 0.7220 | 0.9191 |
| 0.2108 | 6.0 | 1440 | 0.2532 | 0.7234 | 0.7232 | 0.7233 | 0.9198 |
| 0.1727 | 7.0 | 1680 | 0.2614 | 0.7181 | 0.7404 | 0.7291 | 0.9201 |
| 0.1727 | 8.0 | 1920 | 0.2688 | 0.7013 | 0.7414 | 0.7208 | 0.9176 |
| 0.1407 | 9.0 | 2160 | 0.2759 | 0.7061 | 0.7449 | 0.7250 | 0.9187 |
| 0.1407 | 10.0 | 2400 | 0.2824 | 0.7161 | 0.7320 | 0.7239 | 0.9189 |
| 0.1157 | 11.0 | 2640 | 0.2891 | 0.7295 | 0.7179 | 0.7237 | 0.9200 |
| 0.1157 | 12.0 | 2880 | 0.3048 | 0.7210 | 0.7227 | 0.7218 | 0.9190 |
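The evaluation results reported at the top of this card (loss 0.2614, precision 0.7181, recall 0.7404, F1 0.7291, accuracy 0.9201) match the epoch-7 row above; although training was configured for 40 epochs, the table covers only the first 12. Entity-level NER metrics like these are conventionally computed with seqeval; the sketch below shows a `compute_metrics` function in that style, with a hypothetical label list, since the actual tag set is not documented.

```python
# Sketch of entity-level metric computation in the seqeval style commonly
# paired with Trainer. The label list is hypothetical; the card does not
# document the actual tag set.
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]  # hypothetical

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Special tokens carry label -100 and must be dropped before scoring.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    scores = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": scores["overall_precision"],
        "recall": scores["overall_recall"],
        "f1": scores["overall_f1"],
        "accuracy": scores["overall_accuracy"],
    }
```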
### Framework versions
- Transformers 4.57.3
- Pytorch 2.9.0+cu126
- Datasets 3.6.0
- Tokenizers 0.22.1