tomaarsen (HF Staff) committed
Commit 936b930 · verified · 1 Parent(s): dde1628

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "pooling_mode_lmk": true,
+   "lmk_token_id": -1,
+   "include_prompt": true
+ }
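A note on reading this pooling config: the `pooling_mode_*` flags are mutually exclusive, and here only `pooling_mode_lmk` (presumably "landmark" pooling) is enabled; `lmk_token_id: -1` plausibly means the landmark token id is deferred to the transformer module, which declares `lmk_token_id: 2`. A minimal illustrative sketch of how such a config can be inspected (the dispatch helper `active_pooling_modes` is hypothetical, not part of `sentence_transformers.models.Pooling`):

```python
import json

# The raw pooling config from 1_Pooling/config.json above.
config = json.loads("""
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "pooling_mode_lmk": true,
  "lmk_token_id": -1,
  "include_prompt": true
}
""")

def active_pooling_modes(cfg):
    """List the pooling_mode_* flags that are enabled; exactly one should be."""
    return [key for key, enabled in cfg.items()
            if key.startswith("pooling_mode_") and enabled is True]

print(active_pooling_modes(config))  # ['pooling_mode_lmk']
```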
README.md ADDED
@@ -0,0 +1,542 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dense
+ - generated_from_trainer
+ - dataset_size:90000
+ - loss:CachedMultipleNegativesRankingLoss
+ base_model: microsoft/mpnet-base
+ widget:
+ - source_sentence: how do i change the password for the app store?
+   sentences:
+   - Tap Settings > [your name] > Password & Security. Tap Change Password. Enter your current password or device passcode, then enter a new password and confirm the new password. Tap Change or Change Password.
+   - Unless others attending the funeral know you for wearing bow ties, we recommend sticking to a necktie for funerals. We love bow ties, but they are statement pieces that can be a distraction at a funeral. ... Solid black bow ties, black tone-on-tone bow ties or dark solid color bow ties are the most appropriate choice.
+   - 'Convert fraction (ratio) 5 / 35 Answer: 14.285714285714%'
+ - source_sentence: how many molecules are present in one mole of a substance?
+   sentences:
+   - A year is divided into 12 months in the modern-day Gregorian calendar. The months are either 28, 29, 30, or 31 days long. Calendar with 12 months.
+   - Latex is considered one of the best materials for mattress construction due to its natural softness, breathability and resiliency. Latex is most commonly found in luxury mattresses with high price points, and may be used in both the comfort layer and support core.
+   - 'The mole allows scientists to calculate the number of elementary entities (usually atoms or molecules) in a certain mass of a given substance. Avogadro''s number is an absolute number: there are 6.022×10²³ elementary entities in 1 mole. This can also be written as 6.022×10²³ mol⁻¹.'
+ - source_sentence: how is baking powder and baking soda different?
+   sentences:
+   - While both products appear similar, they're certainly not the same. Baking soda is sodium bicarbonate, which requires an acid and a liquid to become activated and help baked goods rise. Conversely, baking powder includes sodium bicarbonate, as well as an acid. It only needs a liquid to become activated.
+   - Donna Sheridan is dead, long live her hotel on the made-up Greek island of Kalokairi. You can rest assured that Streep, who still appears in new scenes in the movie, was fine with the decision — and according to director Ol Parker, it wasn't because she had other scheduling obligations.
+   - Gyan (Sanskrit), a Sanskrit word that roughly translates to 'knowledge' in English.
+ - source_sentence: what are the differences between food webs and food chains?
+   sentences:
+   - FOOD WEBS show how plants and animals are connected in many ways to help them all survive. FOOD CHAINS follow just one path of energy as animals find food.
+   - 'The following instructions apply to making a PDF text-searchable in Adobe Acrobat Professional or Standard: Click on Tools > Text Recognition > In This File. The Recognize Text popup box opens. Select All pages, then click OK.'
+   - Most sets are worth less than the $5 retail, though some like Civil War can command over $10, and the Aliens set of three are on eBay stores for $50. The rare US version of the action fleet Aliens jump ship has been asking over $70.
+ - source_sentence: can you get period pains while pregnant?
+   sentences:
+   - The most confusing part about SACs is that your SAC score DOES NOT go towards your study score, at least not directly. In fact, your SAC score is merely used to rank you in your class.
+   - For an uncomplicated UTI that occurs when you're otherwise healthy, your doctor may recommend a shorter course of treatment, such as taking an antibiotic for one to three days. But whether this short course of treatment is enough to treat your infection depends on your particular symptoms and medical history.
+   - 'Some women will experience more cramping as they start to go into menopause. Pregnancy: Early in pregnancy, you may experience mild or light cramping. These cramps will probably feel like the light cramps you get during your period, but they''ll be in your lower stomach or lower back.'
+ datasets:
+ - sentence-transformers/gooaq
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: MPNet base with Landmark Pooling trained on GooAQ triplets using CachedMultipleNegativesRankingLoss with GradCache
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: gooaq dev
+       type: gooaq-dev
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.7581
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.896
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.9329
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.9641
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.7581
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.29866666666666664
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.18658
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.09641000000000001
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.7581
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.896
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.9329
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.9641
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.8654027716772075
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.8332606746031707
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.8349243908994848
+       name: Cosine Map@100
+ ---
+
+ # MPNet base with Landmark Pooling trained on GooAQ triplets using CachedMultipleNegativesRankingLoss with GradCache
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq)
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'splitter_type': 'variable', 'splitter_granularity': [32, 64, 128, 256], 'lmk_token_id': 2, 'architecture': 'MPNetModel'})
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'pooling_mode_lmk': True, 'lmk_token_id': -1, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-cmnrl-1024bs-lmk-gran-per-batch")
+ # Run inference
+ queries = [
+     "can you get period pains while pregnant?",
+ ]
+ documents = [
+     "Some women will experience more cramping as they start to go into menopause. Pregnancy: Early in pregnancy, you may experience mild or light cramping. These cramps will probably feel like the light cramps you get during your period, but they'll be in your lower stomach or lower back.",
+     "For an uncomplicated UTI that occurs when you're otherwise healthy, your doctor may recommend a shorter course of treatment, such as taking an antibiotic for one to three days. But whether this short course of treatment is enough to treat your infection depends on your particular symptoms and medical history.",
+     'The most confusing part about SACs is that your SAC score DOES NOT go towards your study score, at least not directly. In fact, your SAC score is merely used to rank you in your class.',
+ ]
+ query_embeddings = model.encode_query(queries)
+ document_embeddings = model.encode_document(documents)
+ print(query_embeddings.shape, document_embeddings.shape)
+ # [1, 768] [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(query_embeddings, document_embeddings)
+ print(similarities)
+ # tensor([[ 0.7430,  0.1732, -0.0864]])
+ ```
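Because the model's final module is `Normalize()`, the embeddings it returns are unit-length, so the cosine similarity reported by `model.similarity(...)` reduces to a plain dot product. A toy sketch with made-up 3-d vectors (not real model outputs) showing the equivalence:

```python
import math

def cosine(a, b):
    """Cosine similarity of two raw (unnormalized) vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def normalize(v):
    """Scale a vector to unit length, as the Normalize module does."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q = [0.2, 0.4, 0.1]  # toy "query embedding"
d = [0.1, 0.5, 0.0]  # toy "document embedding"

# After normalization, the dot product equals the cosine of the raw vectors.
qn, dn = normalize(q), normalize(d)
dot_of_normalized = sum(x * y for x, y in zip(qn, dn))
assert abs(dot_of_normalized - cosine(q, d)) < 1e-9
print(round(dot_of_normalized, 4))
```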
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Dataset: `gooaq-dev`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.7581     |
+ | cosine_accuracy@3   | 0.896      |
+ | cosine_accuracy@5   | 0.9329     |
+ | cosine_accuracy@10  | 0.9641     |
+ | cosine_precision@1  | 0.7581     |
+ | cosine_precision@3  | 0.2987     |
+ | cosine_precision@5  | 0.1866     |
+ | cosine_precision@10 | 0.0964     |
+ | cosine_recall@1     | 0.7581     |
+ | cosine_recall@3     | 0.896      |
+ | cosine_recall@5     | 0.9329     |
+ | cosine_recall@10    | 0.9641     |
+ | **cosine_ndcg@10**  | **0.8654** |
+ | cosine_mrr@10       | 0.8333     |
+ | cosine_map@100      | 0.8349     |
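For readers new to these metrics: each query's documents are ranked by cosine score, and accuracy@k / MRR@k ask where the gold answer lands. A toy sketch of the definitions (this mirrors what the metric names mean, not the actual `InformationRetrievalEvaluator` implementation):

```python
def accuracy_at_k(ranked_ids, gold_id, k):
    """1.0 if the gold document appears in the top-k results, else 0.0."""
    return 1.0 if gold_id in ranked_ids[:k] else 0.0

def mrr_at_k(ranked_ids, gold_id, k):
    """Reciprocal of the gold document's rank within the top-k, else 0.0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == gold_id:
            return 1.0 / rank
    return 0.0

# Toy ranking: the gold document "d3" lands in 2nd place out of 4.
ranked = ["d7", "d3", "d1", "d9"]
print(accuracy_at_k(ranked, "d3", 1))  # 0.0
print(accuracy_at_k(ranked, "d3", 3))  # 1.0
print(mrr_at_k(ranked, "d3", 10))      # 0.5
```

The table's reported values are these per-query scores averaged over the 10,000-query dev set.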
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### gooaq
+
+ * Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
+ * Size: 90,000 training samples
+ * Columns: <code>question</code> and <code>answer</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | question                                                                          | answer                                                                              |
+   |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+   | type    | string                                                                            | string                                                                              |
+   | details | <ul><li>min: 8 tokens</li><li>mean: 11.83 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 61.75 tokens</li><li>max: 185 tokens</li></ul> |
+ * Samples:
+   | question                                                            | answer                                                                                                                                                                                                                                                                                                                                        |
+   |:--------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+   | <code>how long does halifax take to transfer mortgage funds?</code> | <code>Bear in mind that the speed of application will vary depending on your own personal circumstances and the lender's present day-to-day performance. In some cases, applications can be approved by the lender within 24 hours, while some can take weeks or even months.</code> |
+   | <code>can you get a false pregnancy test?</code>                    | <code>In very rare cases, you can have a false-positive result. This means you're not pregnant but the test says you are. You could have a false-positive result if you have blood or protein in your pee. Certain drugs, such as tranquilizers, anticonvulsants, hypnotics, and fertility drugs, could cause false-positive results.</code> |
+   | <code>are ahead of its time?</code>                                 | <code>Definition of ahead of one's/its time : too advanced or modern to be understood or appreciated during the time when one lives or works As a director, he was ahead of his time.</code> |
+ * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim",
+       "mini_batch_size": 64,
+       "gather_across_devices": false
+   }
+   ```
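Conceptually, this loss treats each question's paired answer as the positive and every other in-batch answer as a negative, applying softmax cross-entropy over scaled cosine similarities; the "Cached" variant adds GradCache-style embedding caching so a 1024-sample batch fits in memory via 64-sample mini-batches. A minimal math sketch of the underlying objective, with the caching machinery omitted and a made-up 3×3 similarity matrix (`scale=20.0` taken from the config above):

```python
import math

def mnrl_loss(sim_matrix, scale=20.0):
    """Mean cross-entropy where sim_matrix[i][j] = cos(query_i, doc_j)
    and doc_i is the positive for query_i; other docs are in-batch negatives."""
    total = 0.0
    for i, row in enumerate(sim_matrix):
        logits = [scale * s for s in row]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))  # log-sum-exp
        total += log_z - logits[i]  # -log softmax probability of the positive
    return total / len(sim_matrix)

# Toy cosine-similarity matrix; diagonal entries are the true pairs.
sims = [
    [0.9, 0.7, 0.5],
    [0.6, 0.8, 0.4],
    [0.3, 0.5, 0.9],
]
loss = mnrl_loss(sims)  # small, since every positive outscores its negatives
print(round(loss, 4))
```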
+
+ ### Evaluation Dataset
+
+ #### gooaq
+
+ * Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
+ * Size: 10,000 evaluation samples
+ * Columns: <code>question</code> and <code>answer</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | question                                                                          | answer                                                                             |
+   |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+   | type    | string                                                                            | string                                                                             |
+   | details | <ul><li>min: 8 tokens</li><li>mean: 11.93 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 61.2 tokens</li><li>max: 128 tokens</li></ul> |
+ * Samples:
+   | question                                                         | answer                                                                                                                                                                                                                                                                                                                                                        |
+   |:-----------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+   | <code>should you take ibuprofen with high blood pressure?</code> | <code>In general, people with high blood pressure should use acetaminophen or possibly aspirin for over-the-counter pain relief. Unless your health care provider has said it's OK, you should not use ibuprofen, ketoprofen, or naproxen sodium. If aspirin or acetaminophen doesn't help with your pain, call your doctor.</code> |
+   | <code>how old do you have to be to work in sc?</code>            | <code>The general minimum age of employment for South Carolina youth is 14, although the state allows younger children who are performers to work in show business. If their families are agricultural workers, children younger than age 14 may also participate in farm labor.</code> |
+   | <code>how to write a topic proposal for a research paper?</code> | <code>['Write down the main topic of your paper. ... ', 'Write two or three short sentences under the main topic that explain why you chose that topic. ... ', 'Write a thesis sentence that states the angle and purpose of your research paper. ... ', 'List the items you will cover in the body of the paper that support your thesis statement.']</code> |
+ * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim",
+       "mini_batch_size": 64,
+       "gather_across_devices": false
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 1024
+ - `per_device_eval_batch_size`: 1024
+ - `learning_rate`: 8e-05
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0.1
+ - `bf16`: True
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 1024
+ - `per_device_eval_batch_size`: 1024
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 8e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: None
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0.1
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `enable_jit_checkpoint`: False
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `use_cpu`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `bf16`: True
+ - `fp16`: False
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: -1
+ - `ddp_backend`: None
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `parallelism_config`: None
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `project`: huggingface
+ - `trackio_space_id`: trackio
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_num_input_tokens_seen`: no
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: True
+ - `use_cache`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+ | Epoch  | Step | Training Loss | Validation Loss | gooaq-dev_cosine_ndcg@10 |
+ |:------:|:----:|:-------------:|:---------------:|:------------------------:|
+ | -1     | -1   | -             | -               | 0.2180                   |
+ | 0.0114 | 1    | 6.3995        | -               | -                        |
+ | 0.0568 | 5    | 5.8563        | -               | -                        |
+ | 0.1023 | 9    | -             | 1.1550          | 0.6291                   |
+ | 0.1136 | 10   | 2.4891        | -               | -                        |
+ | 0.1705 | 15   | 1.0676        | -               | -                        |
+ | 0.2045 | 18   | -             | 0.5168          | 0.7867                   |
+ | 0.2273 | 20   | 0.7828        | -               | -                        |
+ | 0.2841 | 25   | 0.6288        | -               | -                        |
+ | 0.3068 | 27   | -             | 0.3813          | 0.8246                   |
+ | 0.3409 | 30   | 0.5358        | -               | -                        |
+ | 0.3977 | 35   | 0.4802        | -               | -                        |
+ | 0.4091 | 36   | -             | 0.3268          | 0.8399                   |
+ | 0.4545 | 40   | 0.4491        | -               | -                        |
+ | 0.5114 | 45   | 0.4258        | 0.2941          | 0.8490                   |
+ | 0.5682 | 50   | 0.4255        | -               | -                        |
+ | 0.6136 | 54   | -             | 0.2762          | 0.8572                   |
+ | 0.6250 | 55   | 0.4056        | -               | -                        |
+ | 0.6818 | 60   | 0.3826        | -               | -                        |
+ | 0.7159 | 63   | -             | 0.2675          | 0.8580                   |
+ | 0.7386 | 65   | 0.3455        | -               | -                        |
+ | 0.7955 | 70   | 0.3722        | -               | -                        |
+ | 0.8182 | 72   | -             | 0.2540          | 0.8631                   |
+ | 0.8523 | 75   | 0.3543        | -               | -                        |
+ | 0.9091 | 80   | 0.3519        | -               | -                        |
+ | 0.9205 | 81   | -             | 0.2502          | 0.8633                   |
+ | 0.9659 | 85   | 0.3422        | -               | -                        |
+ | -1     | -1   | -             | -               | 0.8654                   |
+
+
+ ### Framework Versions
+ - Python: 3.11.6
+ - Sentence Transformers: 5.3.0.dev0
+ - Transformers: 5.0.1.dev0
+ - PyTorch: 2.10.0+cu126
+ - Accelerate: 1.12.0
+ - Datasets: 4.3.0
+ - Tokenizers: 0.22.2
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### CachedMultipleNegativesRankingLoss
+ ```bibtex
+ @misc{gao2021scaling,
+     title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
+     author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
+     year={2021},
+     eprint={2101.06983},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "tie_word_embeddings": true,
+   "transformers_version": "5.0.1.dev0",
+   "vocab_size": 30527
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SentenceTransformer",
+   "__version__": {
+     "sentence_transformers": "5.3.0.dev0",
+     "transformers": "5.0.1.dev0",
+     "pytorch": "2.10.0+cu126"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:121aaf57c238f0ea23bd124d70702137b9fc460a86cc4959990e6ba78ccb1707
+ size 437967648
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.LandmarkTransformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
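`modules.json` declares an ordered pipeline: LandmarkTransformer → Pooling → Normalize. A hedged sketch of that composition using stand-in stages (the real modules operate on token-id tensors; these toy functions only illustrate how the ordered module list is applied):

```python
import math

def transformer(text):
    """Stand-in for the transformer stage: fake per-token embeddings (1-d)."""
    return [[float(len(word))] for word in text.split()]

def pooling(token_embeddings):
    """Stand-in for the pooling stage: mean-pool tokens to one sentence vector."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(tok[d] for tok in token_embeddings) / n for d in range(dim)]

def normalize(vec):
    """Scale to unit length, mirroring the Normalize module."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

# Apply the stages in the order modules.json declares them.
pipeline = [transformer, pooling, normalize]
out = "hello landmark pooling"
for stage in pipeline:
    out = stage(out)
print(out)  # a unit-length "sentence embedding" (here 1-dimensional)
```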
sentence_bert_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false,
+   "splitter_type": "variable",
+   "splitter_granularity": [
+     32,
+     64,
+     128,
+     256
+   ],
+   "lmk_token_id": 2
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "backend": "tokenizers",
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "is_local": false,
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "unk_token": "[UNK]"
+ }