---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- named-entity-recognition
- hausa
- african-language
- pii-detection
- token-classification
- generated_from_trainer
datasets:
- Beijuka/Multilingual_PII_NER_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multilingual-Davlan/afro-xlmr-base-hausa-ner-v1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: Beijuka/Multilingual_PII_NER_dataset
      type: Beijuka/Multilingual_PII_NER_dataset
      args: 'split: train+validation+test'
    metrics:
    - name: Precision
      type: precision
      value: 0.9298021697511167
    - name: Recall
      type: recall
      value: 0.9256670902160101
    - name: F1
      type: f1
      value: 0.9277300222858963
    - name: Accuracy
      type: accuracy
      value: 0.9811780190852254
---

# multilingual-Davlan/afro-xlmr-base-hausa-ner-v1

This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the [Beijuka/Multilingual_PII_NER_dataset](https://huggingface.co/datasets/Beijuka/Multilingual_PII_NER_dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1152
- Precision: 0.9298
- Recall: 0.9257
- F1: 0.9277
- Accuracy: 0.9812

## Model description

A token-classification (NER) model for detecting personally identifiable information (PII) in Hausa text, fine-tuned from the multilingual Afro-XLM-R base checkpoint.

## Intended uses & limitations

Intended for token-level PII detection in Hausa text (see the usage sketch in the "How to use" section below). Limitations: more information needed.

## Training and evaluation data

Fine-tuned on the [Beijuka/Multilingual_PII_NER_dataset](https://huggingface.co/datasets/Beijuka/Multilingual_PII_NER_dataset) dataset (the Hausa portion, per the model name and tags). The metrics in the model index were computed over the combined train+validation+test split noted there.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 301  | 0.1139          | 0.8862    | 0.8862 | 0.8862 | 0.9694   |
| 0.2008        | 2.0   | 602  | 0.0925          | 0.8741    | 0.9155 | 0.8944 | 0.9729   |
| 0.2008        | 3.0   | 903  | 0.0910          | 0.8901    | 0.9125 | 0.9012 | 0.9747   |
| 0.0686        | 4.0   | 1204 | 0.1056          | 0.8947    | 0.9263 | 0.9102 | 0.9753   |
| 0.0501        | 5.0   | 1505 | 0.0921          | 0.9071    | 0.9305 | 0.9187 | 0.9775   |
| 0.0501        | 6.0   | 1806 | 0.0939          | 0.9062    | 0.9377 | 0.9217 | 0.9789   |
| 0.036         | 7.0   | 2107 | 0.1034          | 0.8926    | 0.9359 | 0.9137 | 0.9769   |
| 0.036         | 8.0   | 2408 | 0.1305          | 0.9019    | 0.9425 | 0.9218 | 0.9779   |
| 0.0219        | 9.0   | 2709 | 0.1320          | 0.9037    | 0.9335 | 0.9184 | 0.9778   |
| 0.0089        | 10.0  | 3010 | 0.1241          | 0.9271    | 0.9065 | 0.9167 | 0.9781   |
| 0.0089        | 11.0  | 3311 | 0.1386          | 0.9184    | 0.9311 | 0.9247 | 0.9791   |
| 0.0056        | 12.0  | 3612 | 0.1482          | 0.9094    | 0.9377 | 0.9233 | 0.9788   |
| 0.0056        | 13.0  | 3913 | 0.1550          | 0.9109    | 0.9311 | 0.9209 | 0.9783   |
| 0.0032        | 14.0  | 4214 | 0.1631          | 0.9078    | 0.9377 | 0.9225 | 0.9792   |

Training was configured for 20 epochs, but the log ends at epoch 14, which is consistent with early stopping (the card does not state this explicitly).

### Framework versions

- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
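### Reproducing the hyperparameters

For reference, the hyperparameter list above corresponds roughly to the following `transformers.TrainingArguments`. This is a minimal sketch assuming a standard `Trainer`-based fine-tuning script; the original training code is not included in this card, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder,
# not the path used for the original run.
training_args = TrainingArguments(
    output_dir="afro-xlmr-base-hausa-ner-v1",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",  # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```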
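### Evaluation metrics

The precision, recall, F1, and accuracy values above are the entity-level scores that `Trainer`-based token-classification scripts conventionally compute with seqeval. The sketch below shows that convention; whether the original run used exactly this function is an assumption, and `label_list` (the index-to-tag mapping) is a hypothetical argument.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred, label_list):
    """Score predicted tag sequences against gold tags, skipping the
    -100 positions used to mask padding and non-initial subwords."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```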
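## How to use

Since the card does not include a usage snippet, here is a minimal sketch using the `transformers` pipeline API. The repository id below is a placeholder (the Hub id for this checkpoint is not stated on the card), and the Hausa example sentence is invented for illustration.

```python
from transformers import pipeline

# Placeholder: replace with the actual Hub repo id for this checkpoint.
model_id = "your-namespace/afro-xlmr-base-hausa-ner-v1"

# aggregation_strategy="simple" merges subword pieces into whole entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Invented Hausa example ("My name is Musa Ibrahim and I live in Kano.").
text = "Sunana Musa Ibrahim kuma ina zaune a Kano."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```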