---
library_name: transformers
license: mit
datasets:
- HuggingFaceFW/fineweb
language:
- en
base_model:
- sign/utf8-lm-tiny
---

# UTF32-LM-tiny

This model is a fine-tuned version of [sign/utf8-lm-tiny](https://huggingface.co/sign/utf8-lm-tiny) on the HuggingFaceFW/fineweb dataset.

It was trained with [this](https://github.com/sign/utf8-tokenizer/blob/main/experiments/language-modelling/run_clm.py) training script from [utf8-tokenizer](https://github.com/sign/utf8-tokenizer/tree/main).

Unlike the base model, which is trained directly on UTF-8 bytes, this model is trained on characters (UTF-32 blocks): each character is decomposed into a fixed group of four bytes, which are encoded independently and then concatenated.

| Character | UTF-8              | UTF-32       | UTF-32 Decomposed (bytes) |
| --------- | ------------------ | ------------ | ------------------------- |
| A         | `\x41`             | `U+00000041` | `[0, 0, 0, 65]`           |
| é         | `\xC3\xA9`         | `U+000000E9` | `[0, 0, 0, 233]`          |
| €         | `\xE2\x82\xAC`     | `U+000020AC` | `[0, 0, 32, 172]`         |
| 😀        | `\xF0\x9F\x98\x80` | `U+0001F600` | `[0, 1, 246, 0]`          |
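
A minimal sketch (illustrative only, not the library's implementation) of the decomposition shown in the table, using Python's built-in `int.to_bytes`:

```python
# Illustrative only: decompose each character into its four big-endian
# UTF-32 bytes, matching the table above.
def utf32_decompose(text: str) -> list[list[int]]:
    return [list(ord(ch).to_bytes(4, "big")) for ch in text]

print(utf32_decompose("Aé€😀"))
# [[0, 0, 0, 65], [0, 0, 0, 233], [0, 0, 32, 172], [0, 1, 246, 0]]
```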


This effectively switches from variable-width UTF-8 byte sequences to fixed-size four-byte groups (one position per character instead of up to four), making training/inference up to 4x more efficient for complex scripts.
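
As a rough, back-of-the-envelope illustration of that saving (counting sequence positions only; the actual gain depends on the data), compare the number of UTF-8 bytes to the number of character groups:

```python
# Illustrative only: UTF-8 needs 1-4 positions per character,
# while the UTF-32 grouping always uses exactly one.
samples = {"English": "My name is", "Emoji": "😀😃😄😁", "Devanagari": "नमस्ते"}
for name, text in samples.items():
    utf8_positions = len(text.encode("utf-8"))  # byte-level sequence length
    utf32_positions = len(text)                 # one group per character
    print(f"{name}: {utf8_positions} UTF-8 bytes -> {utf32_positions} groups")
```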


## Usage

```python
import torch
from transformers import AutoModelForCausalLM

from utf8_tokenizer import UTF8Tokenizer

model_id = "sign/utf32-lm-tiny"

tokenizer = UTF8Tokenizer()
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "My name is"

inputs = tokenizer([prompt], return_tensors="pt",
                   padding=True,
                   add_special_tokens=True)
# The tokenizer appends an EOS token; strip it so the model continues the prompt.
inputs["input_ids"] = inputs["input_ids"][:, :-1]
inputs["attention_mask"] = inputs["attention_mask"][:, :-1]


with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=256,
    )

print(tokenizer.decode(out[0], skip_special_tokens=False))

```

## Training procedure

```shell
python run_clm.py \
  --use_bit_embeddings True \
  --encoding utf32 \
  --output_dir ./output-tiny-lm-fineweb-groups \
  --dataset_name HuggingFaceFW/fineweb \
  --streaming True \
  --dataloader_num_workers 1 \
  --dataloader_prefetch_factor 4 \
  --dataloader_pin_memory True \
  --dataloader_persistent_workers True \
  --do_train True \
  --save_strategy steps \
  --max_steps 100000 \
  --save_steps 1000 \
  --save_total_limit 1 \
  --logging_steps 100 \
  --logging_strategy steps \
  --model_name_or_path sbintuitions/tiny-lm \
  --per_device_train_batch_size 256 \
  --block_size 256 \
  --optim adamw_torch_fused \
  --learning_rate 3e-4 \
  --lr_scheduler_type cosine \
  --warmup_ratio 0.01 \
  --weight_decay 0.1 \
  --adam_beta1 0.9 \
  --adam_beta2 0.95 \
  --max_grad_norm 1.0 \
  --gradient_checkpointing True \
  --bf16 True \
  --seed 42 \
  --report_to wandb \
  --include_num_input_tokens_seen True
```


### Framework versions

- Transformers 4.57.3
- Pytorch 2.9.1+cu130
- Datasets 4.4.1
- Tokenizers 0.22.1