---
license: apache-2.0
datasets:
- Helsinki-NLP/tatoeba
- openlanguagedata/flores_plus
- facebook/bouquet
language:
- en
- ru
metrics:
- bleu
- comet
- chrf
pipeline_tag: translation
---

# OPUS-MT-tiny-rus-eng

Distilled model from a Tatoeba-MT teacher: [OPUS-MT-models/ru-en/opus-2020-02-26](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.zip), which was trained on the [Tatoeba](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/data) dataset.

We used [OpusDistillery](https://github.com/Helsinki-NLP/OpusDistillery) to train a new student with the tiny architecture and a regular transformer decoder. For training data, we used [Tatoeba](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/data). The configuration file fed into OpusDistillery can be found [here](https://github.com/Helsinki-NLP/OpusDistillery/blob/main/configs/opustranslate_hf/config.op.ru-en.yml).

## How to run

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt_tiny_rus-eng"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize the Russian source sentence
input_ids = tokenizer(
    "Это привело к тому, что два вида рыб вымерли, а два других, в том числе горбатый голавль, попали под угрозу исчезновения.",
    return_tensors="pt",
).input_ids

# Generate the English translation and decode it
output = model.generate(input_ids)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```

## Benchmarks

### Teacher

| testset | BLEU | chr-F | COMET  |
|---------|------|-------|--------|
| Flores+ | 29.4 | 57.7  | 0.8172 |
| Bouquet | 33.4 | 56.6  | 0.8235 |

### Student

| testset | BLEU | chr-F | COMET  |
|---------|------|-------|--------|
| Flores+ | 28.4 | 56.8  | 0.8220 |
| Bouquet | 31.1 | 54.1  | 0.8151 |
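As a quick sanity check on the benchmark tables above, the teacher–student gap per test set and metric can be computed directly (a plain-Python sketch; the numbers are copied verbatim from the tables, the dictionary layout is just for illustration):

```python
# Scores copied from the Teacher and Student tables above.
teacher = {
    "Flores+": {"BLEU": 29.4, "chr-F": 57.7, "COMET": 0.8172},
    "Bouquet": {"BLEU": 33.4, "chr-F": 56.6, "COMET": 0.8235},
}
student = {
    "Flores+": {"BLEU": 28.4, "chr-F": 56.8, "COMET": 0.8220},
    "Bouquet": {"BLEU": 31.1, "chr-F": 54.1, "COMET": 0.8151},
}

# Student minus teacher: negative values mean the student trails the teacher.
delta = {
    ts: {m: round(student[ts][m] - teacher[ts][m], 4) for m in teacher[ts]}
    for ts in teacher
}

for ts, scores in delta.items():
    print(ts, scores)
```

The student stays within about 1 BLEU / 1 chr-F of the teacher on Flores+ (and even edges it out on COMET there), with a somewhat larger gap on Bouquet.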