---
license: apache-2.0
language:
- ru
base_model:
- t-tech/T-lite-it-2.1
pipeline_tag: text-generation
library_name: transformers
---

# T-lite-it-2.1-GGUF

**🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.**

This repository contains **T-lite-it-2.1** converted to the **GGUF** format with [llama.cpp](https://github.com/ggerganov/llama.cpp). See the original BF16 model here: [t-tech/T-lite-it-2.1](https://huggingface.co/t-tech/T-lite-it-2.1).

## Description

T-lite-it-2.1 is an efficient Russian-language model built on the Qwen 3 architecture. It brings significant improvements in instruction following and **adds support for tool calling**, a key advancement over [T-lite-it-1.0](https://huggingface.co/t-tech/T-lite-it-1.0), which lacks tool-use support. It outperforms Qwen3-8B in tool-calling scenarios, which is essential for agentic applications. The model is built for both general tasks and complex workflows, with higher Russian text generation throughput enabled by an optimized tokenizer.

**NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Specifying `enable_thinking=False` is no longer required.**

## 📊 Benchmarks

| Model | Ru Arena Hard | ruIFeval* | ruBFCL |
|------------------------------|:-------------:|:--------:|:--------:|
| **T-lite-it-2.1** | 83.9 | 75.9 | 56.5 |
| T-lite-it-2.1-q8_0 | 79.5 | 76.2 | 56.6 |
| T-lite-it-2.1-q6_k | 79.5 | 77.8 | 56.7 |
| T-lite-it-2.1-q5_k_m | 78.6 | 76.3 | 56.6 |
| T-lite-it-2.1-q5_0 | 78.9 | 76.8 | 56.3 |
| T-lite-it-2.1-q5_k_s | 76.1 | 75.3 | 56.0 |
| T-lite-it-2.1-q4_k_m | 71.7 | 75.9 | 54.7 |

\* The ruIFeval metric is the mean of 4 values: prompt-level and instruction-level accuracy, each under strict and loose scoring.

## Available quantisations

> **Recommendation:** choose the **highest-quality quantisation that fits your hardware** (VRAM / RAM).

| Filename (+ `.gguf`) | Quant method | Bits | Size (GB) |
|----------------------|--------------|------|-----------|
| `T-lite-it-2.1-q8_0` | Q8_0 | 8 | 8.7 |
| `T-lite-it-2.1-q6_k` | Q6_K | 6 | 6.7 |
| `T-lite-it-2.1-q5_k_m` | Q5_K_M | 5 | 5.9 |
| `T-lite-it-2.1-q5_k_s` | Q5_K_S | 5 | 5.7 |
| `T-lite-it-2.1-q5_0` | Q5_0 | 5 | 5.7 |
| `T-lite-it-2.1-q4_k_m` | Q4_K_M | 4 | 5.0 |

*Size figures assume **no GPU off-loading**. Off-loading lowers RAM usage and uses VRAM instead.*

## Quickstart

### llama.cpp

Check out the [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a detailed usage guide.

We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We track the latest version of llama.cpp. The following demonstration assumes you are running commands from the `llama.cpp` repository directory.

```shell
./llama-cli -hf t-tech/T-lite-it-2.1-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --presence-penalty 1.0 -c 40960 -n 32768 --no-context-shift
```

### ollama

Check out the [ollama documentation](https://qwen.readthedocs.io/en/latest/run_locally/ollama.html) for a detailed usage guide.

You can run T-lite-it-2.1 with one command:

```shell
ollama run t-tech/T-lite-it-2.1:q8_0
```

See also the [t-tech ollama homepage](https://ollama.com/t-tech/T-lite-it-2.1).
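
ollama also exposes a local REST API (port 11434 by default), which is convenient for scripting. A minimal sketch using the model tag from the command above; the prompt is purely illustrative:

```shell
# Ask the locally served model a question via ollama's REST API
curl http://localhost:11434/api/chat -d '{
  "model": "t-tech/T-lite-it-2.1:q8_0",
  "messages": [
    {"role": "user", "content": "Give a one-sentence summary of what GGUF is."}
  ],
  "stream": false
}'
```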
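
If you prefer an HTTP endpoint with llama.cpp rather than the interactive `llama-cli` shown earlier, `llama-server` accepts the same `-hf` shorthand. A minimal sketch reusing the sampling settings from the `llama-cli` invocation; the port is an arbitrary choice:

```shell
# Serve the model with an OpenAI-compatible HTTP API on port 8080
./llama-server -hf t-tech/T-lite-it-2.1-GGUF:Q8_0 --jinja -ngl 99 -fa \
    --temp 0.6 --presence-penalty 1.0 -c 40960 --port 8080
```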
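
The server speaks the OpenAI chat-completions protocol, so any OpenAI-compatible client can talk to it. For example, with plain `curl` (the prompt is again illustrative):

```shell
# Query llama-server through its OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "messages": [{"role": "user", "content": "Explain tool calling in one sentence."}],
      "temperature": 0.6
    }'
```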