---
language: rna
tags:
  - Biology
  - RNA
license: agpl-3.0
datasets:
  - multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
  - example_title: "HIV-1"
    text: "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU"
    output:
      - label: "<eos>"
        score: 0.19942431151866913
      - label: "I"
        score: 0.1465310901403427
      - label: "*"
        score: 0.1448192000389099
      - label: "<unk>"
        score: 0.14174020290374756
      - label: "<cls>"
        score: 0.13194777071475983
  - example_title: "microRNA-21"
    text: "UAGC<mask>UAUCAGACUGAUGUUG"
    output:
      - label: "<eos>"
        score: 0.19946657121181488
      - label: "I"
        score: 0.14641942083835602
      - label: "*"
        score: 0.14452320337295532
      - label: "<unk>"
        score: 0.14180712401866913
      - label: "<cls>"
        score: 0.13223469257354736
---

# ncRNABert

Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.

## Disclaimer

This is an UNOFFICIAL implementation of the [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al.

The OFFICIAL repository of ncRNABert is at [wangleiofficial/ncRNABert](https://github.com/wangleiofficial/ncRNABert).

> [!WARNING]
> The MultiMolecule team is aware of a potential risk in reproducing the results of ncRNABert.
>
> ncRNABert applies `softmax` over the `-2` dimension when computing the attention probabilities. This makes the output of `attention_probs @ value_layer` unreliable when the input sequences are not all of the same length (i.e., contain padding tokens).
> MultiMolecule applies a workaround to ensure that the attention masks are handled correctly, but this may lead to results that differ from the original implementation.
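
To see why this matters, here is a toy illustration (not the model's actual attention code): `softmax` over `dim=-2` normalizes the weights across queries rather than across keys, so padded key positions are not zeroed out and each output becomes sensitive to how much padding the batch contains.

```python
import torch
import torch.nn.functional as F

# toy attention scores with shape (batch, heads, query_len, key_len)
scores = torch.randn(1, 1, 4, 4)
scores[..., 3] = torch.finfo(scores.dtype).min  # mask the last key position (padding)

probs_keys = F.softmax(scores, dim=-1)     # standard: normalize over keys
probs_queries = F.softmax(scores, dim=-2)  # ncRNABert: normalize over queries

print(probs_keys[0, 0, :, 3])     # ~0 everywhere: the padded key is ignored
print(probs_queries[0, 0, :, 3])  # 0.25 everywhere: the padded key still receives weight
```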

> [!CAUTION]
> The MultiMolecule team is aware of a potential risk in reproducing the results of ncRNABert.
>
> The original implementation of ncRNABert does not prepend `<bos>` (`<cls>`) and append `<eos>` tokens to the input sequence.
> This should not affect the performance of the model in most cases, but it can lead to unexpected behavior in some cases.
>
> Please set `bos_token=None, eos_token=None` in the tokenizer and set `bos_token_id=None, eos_token_id=None` in the model configuration if you want the exact behavior of the original implementation.
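
A minimal sketch of these settings (assuming `NcRnaBertConfig` follows the library's usual naming convention):

```python
from multimolecule import NcRnaBertConfig, NcRnaBertModel, RnaTokenizer

# disable the <bos>/<cls> and <eos> tokens to match the original implementation
tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert", bos_token=None, eos_token=None)
config = NcRnaBertConfig.from_pretrained("multimolecule/ncrnabert", bos_token_id=None, eos_token_id=None)
model = NcRnaBertModel.from_pretrained("multimolecule/ncrnabert", config=config)
```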

> [!TIP]
> The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.

**The team releasing ncRNABert did not write this model card, so it has been written by the MultiMolecule team.**

## Model Details

ncRNABert is a [BERT](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means the model was trained on the raw nucleotides of RNA sequences only, with an automatic process generating inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

### Variants

- **[multimolecule/ncrnabert](https://huggingface.co/multimolecule/ncrnabert)**: The ncRNABert model pre-trained on single nucleotide data.
- **[multimolecule/ncrnabert-3mer](https://huggingface.co/multimolecule/ncrnabert-3mer)**: The ncRNABert model pre-trained on 3-mer data.

### Model Specification

<table>
<thead>
  <tr>
    <th>Variants</th>
    <th>Num Layers</th>
    <th>Hidden Size</th>
    <th>Num Heads</th>
    <th>Intermediate Size</th>
    <th>Num Parameters (M)</th>
    <th>FLOPs (G)</th>
    <th>MACs (G)</th>
    <th>Max Num Tokens</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>ncRNABert</td>
    <td rowspan="2">24</td>
    <td rowspan="2">1024</td>
    <td rowspan="2">16</td>
    <td rowspan="2">4096</td>
    <td rowspan="2">303.31</td>
    <td rowspan="2">78.96</td>
    <td rowspan="2">39.46</td>
    <td rowspan="2">512</td>
  </tr>
  <tr>
    <td>ncRNABert-3mer</td>
  </tr>
</tbody>
</table>

### Links

- **Code**: [multimolecule.ncrnabert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/ncrnabert)
- **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral)
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased)
- **Original Repository**: [wangleiofficial/ncRNABert](https://github.com/wangleiofficial/ncRNABert)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

#### Masked Language Modeling

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="multimolecule/ncrnabert")
>>> unmasker("gguc<mask>cucugguuagaccagaucugagccu")
[{'score': 0.19942431151866913,
  'token': 2,
  'token_str': '<eos>',
  'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.1465310901403427,
  'token': 25,
  'token_str': 'I',
  'sequence': 'G G U C I C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.1448192000389099,
  'token': 23,
  'token_str': '*',
  'sequence': 'G G U C * C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.14174020290374756,
  'token': 3,
  'token_str': '<unk>',
  'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.13194777071475983,
  'token': 1,
  'token_str': '<cls>',
  'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'}]
```

### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, NcRnaBertModel


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertModel.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")

output = model(**input)
```
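
If you need a single vector for the whole sequence, one common (library-agnostic) choice is to mean-pool the token representations in `output.last_hidden_state`:

```python
embedding = output.last_hidden_state.mean(dim=1)  # (batch_size, hidden_size)
```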

#### Sequence Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune it for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, NcRnaBertForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertForSequencePrediction.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])  # a single sequence-level label

output = model(**input, labels=label)
```

#### Token Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.

Here is how to use this model as a backbone to fine-tune it for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, NcRnaBertForTokenPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertForTokenPrediction.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))  # one binary label per nucleotide

output = model(**input, labels=label)
```

#### Contact Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune it for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, NcRnaBertForContactPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertForContactPrediction.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))  # one binary label per nucleotide pair

output = model(**input, labels=label)
```

## Training Details

ncRNABert used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

### Training Data

The ncRNABert model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral).
RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.

### Training Procedure

#### Preprocessing

ncRNABert used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (see the sketch after the list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.
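
A minimal sketch of this 80%/10%/10% scheme (the helper and its arguments are illustrative, not the actual pre-training code):

```python
import torch


def mask_tokens(input_ids: torch.Tensor, mask_token_id: int, vocab_size: int, mlm_probability: float = 0.15):
    """BERT-style masking; modifies input_ids in place and returns MLM labels."""
    labels = input_ids.clone()

    # select 15% of the tokens (special-token positions should be excluded in practice)
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # loss is computed only on the selected tokens

    # 80% of the selected tokens are replaced by <mask>
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # 10% are replaced by a random token (this sketch does not enforce
    # that the random token differs from the original)
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]

    # the remaining 10% keep their original token
    return input_ids, labels
```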

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```