experimenta-haeretica
Part of the "Heretical experimentation" collection.
This is a fine-tune of gpt-4o-distil-Llama-3.1-8B-Instruct, produced with P-E-W's Heretic (v1.2.0) abliteration engine with Magnitude-Preserving Orthogonal Ablation enabled.
Heretication Results
| Score Metric | Value | Parameter | Value |
|---|---|---|---|
| Refusals | 7/100 | direction_index | per layer |
| KL Divergence | 0.0274 | attn.o_proj.max_weight | 1.88 |
| Initial Refusals | 98/100 | attn.o_proj.max_weight_position | 23.88 |
| | | attn.o_proj.min_weight | 0.91 |
| | | attn.o_proj.min_weight_distance | 17.00 |
| | | mlp.down_proj.max_weight | 0.16 |
| | | mlp.down_proj.max_weight_position | 14.31 |
| | | mlp.down_proj.min_weight | 0.00 |
| | | mlp.down_proj.min_weight_distance | 18.09 |
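For context, abliteration removes a learned "refusal direction" from the outputs of selected weight matrices (here `attn.o_proj` and `mlp.down_proj`), and Heretic's Magnitude-Preserving Orthogonal Ablation additionally compensates for the norm shrinkage that plain projection causes. The sketch below illustrates only the basic idea, with a simple global Frobenius-norm rescale standing in for Heretic's actual magnitude-preservation scheme; the function name, shapes, and random data are illustrative, not Heretic's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ablate(W, d):
    """Project direction d out of the output space of W, then rescale so the
    overall weight magnitude (Frobenius norm) is preserved. Illustrative only:
    Heretic's magnitude-preserving scheme is more sophisticated than this."""
    d = d / np.linalg.norm(d)
    W_abl = W - np.outer(d, d) @ W                     # (I - d d^T) W: outputs lose their d-component
    scale = np.linalg.norm(W) / np.linalg.norm(W_abl)  # global magnitude-preserving rescale
    return W_abl * scale

W = rng.standard_normal((8, 16))  # stand-in for a weight matrix such as attn.o_proj
d = rng.standard_normal(8)        # stand-in "refusal direction" in the output space
W2 = ablate(W, d)

# Any output W2 @ x now has (numerically) zero component along d,
# while the total weight magnitude is unchanged.
x = rng.standard_normal(16)
print(abs((d / np.linalg.norm(d)) @ (W2 @ x)))
```

Because the rescale is a single global scalar, it cannot reintroduce the projected-out direction; a per-row rescale would.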
Appendix
All trials were evaluated with a one-sentence system prompt; the trial marked with "»" is the one whose results are reported above.
» [Trial 41] Refusals: 7/100, KL divergence: 0.0274
[Trial 189] Refusals: 8/100, KL divergence: 0.0264
[Trial 87] Refusals: 9/100, KL divergence: 0.0207
[Trial 73] Refusals: 11/100, KL divergence: 0.0173
[Trial 39] Refusals: 13/100, KL divergence: 0.0124
[Trial 171] Refusals: 20/100, KL divergence: 0.0105
[Trial 67] Refusals: 28/100, KL divergence: 0.0078
[Trial 62] Refusals: 41/100, KL divergence: 0.0064
[Trial 169] Refusals: 51/100, KL divergence: 0.0062
[Trial 82] Refusals: 52/100, KL divergence: 0.0056
[Trial 65] Refusals: 73/100, KL divergence: 0.0047
[Trial 132] Refusals: 80/100, KL divergence: 0.0046
[Trial 18] Refusals: 82/100, KL divergence: 0.0038
[Trial 165] Refusals: 91/100, KL divergence: 0.0031
[Trial 121] Refusals: 93/100, KL divergence: 0.0022
[Trial 140] Refusals: 94/100, KL divergence: 0.0021
[Trial 150] Refusals: 95/100, KL divergence: 0.0020
[Trial 125] Refusals: 97/100, KL divergence: 0.0016
[Trial 184] Refusals: 98/100, KL divergence: 0.0006
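Heretic's parameter search trades refusal count against KL divergence from the original model, and the listing above reads as the Pareto front of that trade-off: each trial refuses less than the ones below it at the cost of a larger divergence. A minimal sketch of extracting such a front from trial results (function name and sample selection are illustrative; the values are taken from the list above plus one dominated point):

```python
def pareto_front(trials):
    """Keep trials not dominated by any other: a trial dominates another if it
    is no worse on both objectives (refusals, KL divergence) and is distinct."""
    return sorted(
        (r, k) for r, k in trials
        if not any(r2 <= r and k2 <= k and (r2, k2) != (r, k) for r2, k2 in trials)
    )

# A few trials from the list above, plus one dominated point for illustration
trials = [(7, 0.0274), (8, 0.0264), (9, 0.0207), (98, 0.0006), (50, 0.0300)]
print(pareto_front(trials))
# → [(7, 0.0274), (8, 0.0264), (9, 0.0207), (98, 0.0006)]
```

The dominated point (50, 0.0300) is dropped because trial 41 achieves both fewer refusals and lower KL divergence.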
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct. It has been trained using TRL.
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Set this to this model's Hugging Face repository id
# (the card template left the placeholder "None" here).
generator = pipeline("text-generation", model="<this-model-repo-id>", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
This model was trained with SFT.
Cite TRL as:
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}