# Llama3.1-SuperHawk-8B-Heretic
A decensored version of Llama3.1-SuperHawk-8B, made using Heretic v1.1.0.

Note: A different version of this model, in which all of the component models were abliterated with Heretic before merging, can be found here.
| Metric | Llama3.1-SuperHawk-8B-Heretic | Original model (Llama3.1-SuperHawk-8B) |
|---|---|---|
| Refusals | 4/100 | 99/100 |
| KL divergence | 0.0949 | 0 (by definition) |
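The KL divergence measures how far the decensored model's output distribution drifts from the original's on harmless prompts (lower means less collateral damage). Below is a minimal sketch of such a measurement over first-token distributions; the Heretic repo ID and the one-prompt list are illustrative assumptions, and Heretic's own scoring procedure may differ in detail.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

original_id = "Yuma42/Llama3.1-SuperHawk-8B"            # reference model
decensored_id = "Yuma42/Llama3.1-SuperHawk-8B-Heretic"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(original_id)
original = AutoModelForCausalLM.from_pretrained(original_id, torch_dtype=torch.bfloat16, device_map="auto")
decensored = AutoModelForCausalLM.from_pretrained(decensored_id, torch_dtype=torch.bfloat16, device_map="auto")

prompts = ["What is a large language model?"]  # in practice, many harmless prompts

kl_total = 0.0
for prompt in prompts:
    chat = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
    with torch.no_grad():
        p_logits = original(inputs.to(original.device)).logits[0, -1].float()
        q_logits = decensored(inputs.to(decensored.device)).logits[0, -1].float()
    # KL(P || Q) over the next-token distribution, P = original, Q = decensored
    kl_total += F.kl_div(
        F.log_softmax(q_logits, dim=-1),
        F.log_softmax(p_logits, dim=-1),
        log_target=True,
        reduction="sum",
    ).item()

print(f"Mean first-token KL divergence: {kl_total / len(prompts):.4f}")
```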
## Heretic Abliteration Parameters
| Parameter | Value |
|---|---|
| direction_index | 14.37 |
| attn.o_proj.max_weight | 1.43 |
| attn.o_proj.max_weight_position | 23.02 |
| attn.o_proj.min_weight | 1.24 |
| attn.o_proj.min_weight_distance | 14.37 |
| mlp.down_proj.max_weight | 1.40 |
| mlp.down_proj.max_weight_position | 27.22 |
| mlp.down_proj.min_weight | 0.75 |
| mlp.down_proj.min_weight_distance | 15.05 |
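These parameters describe a per-layer ablation weight that peaks at `max_weight_position` and tapers toward `min_weight` over `min_weight_distance` layers, applied to the attention output and MLP down projections. The sketch below illustrates the general directional-ablation idea under those assumptions; the refusal direction, tensor shapes, and the linear-taper kernel are placeholders, not Heretic's exact implementation.

```python
import torch

def layer_weight(layer: int, max_weight: float, max_pos: float,
                 min_weight: float, min_dist: float) -> float:
    """Linearly taper the ablation weight from its peak at max_pos down to
    min_weight at min_dist layers away (an assumed kernel shape)."""
    frac = min(abs(layer - max_pos) / min_dist, 1.0)
    return max_weight + frac * (min_weight - max_weight)

def ablate(W: torch.Tensor, d: torch.Tensor, weight: float) -> torch.Tensor:
    """Remove the refusal direction d from a matrix that writes into the
    residual stream: W <- W - weight * d (d^T W). W is [hidden, in_features]."""
    return W - weight * torch.outer(d, d @ W)

hidden = 4096
d = torch.randn(hidden)
d = d / d.norm()                  # placeholder unit-norm refusal direction
W = torch.randn(hidden, hidden)   # placeholder attn.o_proj weight, layer 23
w = layer_weight(23, max_weight=1.43, max_pos=23.02,
                 min_weight=1.24, min_dist=14.37)
W_ablated = ablate(W, d, w)
```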
Llama3.1-SuperHawk-8B is a merge of the following models using LazyMergekit:

* Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
* mukaj/Llama-3.1-Hawkish-8B
π§© Configuration
```yaml
slices:
  - sources:
      - model: Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
        layer_range: [0, 32]
      - model: mukaj/Llama-3.1-Hawkish-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
parameters:
  t:
    - value: 0.3
dtype: bfloat16
```
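For reference, `merge_method: slerp` interpolates each pair of weight tensors along the great circle between them rather than linearly; with `t: 0.3` applied uniformly, the merged weights stay closer to the SuperNova base than to Hawkish. A minimal sketch of the core operation follows; it is not mergekit's actual implementation, which also handles per-layer `t` schedules and other edge cases.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return ((1 - t) * a_flat + t * b_flat).view_as(a)
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.view_as(a)

# Placeholder tensors standing in for one layer's weights from each model
merged = slerp(0.3, torch.randn(4096, 4096), torch.randn(4096, 4096))
```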
π» Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Yuma42/Llama3.1-SuperHawk-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template, then generate with a
# float16 text-generation pipeline spread across available devices.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 31.14 |
| IFEval (0-Shot) | 79.86 |
| BBH (3-Shot) | 31.97 |
| MATH Lvl 5 (4-Shot) | 23.49 |
| GPQA (0-shot) | 8.39 |
| MuSR (0-shot) | 10.38 |
| MMLU-PRO (5-shot) | 32.73 |
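The Avg. row is the unweighted mean of the six benchmark scores: (79.86 + 31.97 + 23.49 + 8.39 + 10.38 + 32.73) / 6 ≈ 31.14.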