---
library_name: transformers
license: mit
base_model: mmaguero/multilingual-bert-gn-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: langid-ner-multilingual-bert-gn-base-cased
  results: []
datasets:
- mmaguero/gua-spa-2023-task-1-2
language:
- gn
- es
- grn
- gug
---

# langid-ner-multilingual-bert-gn-base-cased

This model is a fine-tuned version of [mmaguero/multilingual-bert-gn-base-cased](https://huggingface.co/mmaguero/multilingual-bert-gn-base-cased) on the [GUA-SPA@IberLEF 2023 shared task (tasks 1 and 2)](https://huggingface.co/datasets/mmaguero/gua-spa-2023-task-1-2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5980
- Precision: 0.7389
- Recall: 0.7464
- F1: 0.7426
- Accuracy: 0.8673

## Model description

More information needed

## Intended uses & limitations

- NER (PER, LOC, ORG)
- Token-based language identification (es, gn, mix, foreign)

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 72   | 1.1288          | 0.5004    | 0.4849 | 0.4925 | 0.7053   |
| No log        | 2.0   | 144  | 0.7476          | 0.5986    | 0.5881 | 0.5933 | 0.7979   |
| No log        | 3.0   | 216  | 0.6203          | 0.6329    | 0.6334 | 0.6331 | 0.8244   |
| No log        | 4.0   | 288  | 0.5784          | 0.6413    | 0.6644 | 0.6527 | 0.8387   |
| No log        | 5.0   | 360  | 0.5487          | 0.6626    | 0.6871 | 0.6746 | 0.8464   |
| No log        | 6.0   | 432  | 0.5396          | 0.6929    | 0.7097 | 0.7012 | 0.8581   |
| 0.692         | 7.0   | 504  | 0.5446          | 0.6918    | 0.7156 | 0.7035 | 0.8555   |
| 0.692         | 8.0   | 576  | 0.5528          | 0.6932    | 0.7240 | 0.7082 | 0.8602   |
| 0.692         | 9.0   | 648  | 0.5614          | 0.6987    | 0.7257 | 0.7119 | 0.8602   |
| 0.692         | 10.0  | 720  | 0.5751          | 0.7071    | 0.7290 | 0.7179 | 0.8598   |
| 0.692         | 11.0  | 792  | 0.5865          | 0.6991    | 0.7232 | 0.7109 | 0.8602   |
| 0.692         | 12.0  | 864  | 0.5910          | 0.7102    | 0.7341 | 0.7219 | 0.8648   |
| 0.692         | 13.0  | 936  | 0.6068          | 0.7131    | 0.7383 | 0.7255 | 0.8602   |
| 0.161         | 14.0  | 1008 | 0.6168          | 0.7066    | 0.7374 | 0.7217 | 0.8635   |
| 0.161         | 15.0  | 1080 | 0.6177          | 0.7061    | 0.7357 | 0.7206 | 0.8628   |
| 0.161         | 16.0  | 1152 | 0.6280          | 0.7130    | 0.7399 | 0.7262 | 0.8632   |
| 0.161         | 17.0  | 1224 | 0.6293          | 0.7071    | 0.7391 | 0.7227 | 0.8628   |
| 0.161         | 18.0  | 1296 | 0.6330          | 0.7104    | 0.7408 | 0.7253 | 0.8638   |
| 0.161         | 19.0  | 1368 | 0.6365          | 0.7088    | 0.7391 | 0.7236 | 0.8625   |
| 0.161         | 20.0  | 1440 | 0.6367          | 0.7093    | 0.7391 | 0.7239 | 0.8615   |

### Framework versions

- Transformers 4.57.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
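## How to use

A minimal sketch of running the model for token classification with the `transformers` pipeline. The repo id below is an assumption inferred from the model name (the checkpoint may be hosted under a different id), and the example sentence is illustrative only:

```python
from transformers import pipeline

# Assumed repo id; replace with the actual Hub location of this checkpoint.
model_id = "mmaguero/langid-ner-multilingual-bert-gn-base-cased"

# "simple" aggregation merges word-piece predictions into word-level spans.
token_classifier = pipeline(
    "token-classification",
    model=model_id,
    aggregation_strategy="simple",
)

# Example mixed Guarani/Spanish input (hypothetical).
for span in token_classifier("Oĩ heta tapicha Asunción-pe"):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```

Because the model covers both NER and token-level language identification, inspect the label set in the model's `config.json` (`id2label`) to see which tag scheme the checkpoint emits.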