# gpt-oss-20b - SOMbliterated

This is a SOMbliterated (decensored) version of openai/gpt-oss-20b, made using Heretic v1.2.0 with Pull Request https://github.com/p-e-w/heretic/pull/196, which adds multi-directional abliteration with directions determined by trainable self-organizing neural networks (Self-Organizing Maps / Kohonen networks).

The underlying assumption is that in recent, advanced neural networks the refusal concept is not just a single direction but a complex manifold, much like numbers and days of the week are encoded as circles or helices. This manifold is then removed more surgically, from multiple sides, yielding precise ablation instead of a complete lobotomy.
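
To make the idea concrete, here is a minimal, self-contained NumPy sketch of a 1-D Kohonen map trained on per-prompt activation differences. This illustrates the general technique only; it is not the PR's code, and the function and parameter names are hypothetical.

```python
import numpy as np

def train_som_directions(diff_vectors, n_nodes=5, epochs=50,
                         lr0=0.5, sigma0=2.0, seed=0):
    """Fit a 1-D self-organizing map (Kohonen network) to per-prompt
    activation-difference vectors (harmful minus harmless) and return
    unit-norm prototypes. The prototypes approximate the refusal
    manifold with n_nodes directions instead of a single mean direction.
    """
    rng = np.random.default_rng(seed)
    dim = diff_vectors.shape[1]
    nodes = rng.standard_normal((n_nodes, dim)) * 0.01  # prototype vectors
    grid = np.arange(n_nodes)                           # 1-D node topology

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 1e-3  # shrinking neighborhood
        for x in rng.permutation(diff_vectors):
            # Best-matching unit: the node closest to this sample.
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
            # Pull the BMU and its grid neighbors toward the sample.
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
            nodes += lr * h[:, None] * (x - nodes)

    # Normalize so each prototype can serve as an ablation direction.
    return nodes / np.linalg.norm(nodes, axis=1, keepdims=True)
```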

The method is based on the amazing work at https://arxiv.org/abs/2511.08379v2.

Five directions were used for this particular abliteration.
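
For intuition, multi-directional abliteration can be pictured as projecting each of those directions out of a weight matrix, each with its own strength. The sketch below is a generic formulation under that assumption, not code taken from Heretic:

```python
import numpy as np

def ablate_directions(W, directions, strengths):
    """Remove several refusal directions from a weight matrix.

    W:          (out_dim, in_dim) weight matrix, e.g. attn.o_proj.
    directions: (k, out_dim) direction vectors, e.g. the 5 SOM prototypes.
    strengths:  (k,) per-direction ablation weights; 1.0 removes the
                component along that direction entirely.
    """
    W = W.copy()
    for d, s in zip(directions, strengths):
        d = d / np.linalg.norm(d)
        # Subtract the rank-1 component of W along direction d,
        # scaled by this direction's ablation strength.
        W -= s * np.outer(d, d @ W)
    return W
```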

## Performance

| Metric | This model | Original model (openai/gpt-oss-20b) |
|---|---|---|
| KL divergence | 0.1166 | 0 (by definition) |
| Refusals | 3/100 | 100/100 |

As of 2026-02-27, this is the lowest refusal count I have seen for a Heretic-decensored gpt-oss-20b on Hugging Face. See the comparison with the other available models on GitHub.
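
Heretic reports KL divergence from the original model as a proxy for capability damage. Assuming it is computed over next-token distributions on harmless prompts (the exact definition lives in the Heretic source), a PyTorch sketch of such a measurement:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def next_token_kl(model_orig, model_abl, input_ids):
    """KL(original || abliterated) over the next-token distribution for
    one harmless prompt, for Hugging Face causal LMs. Averaging this
    over a prompt set yields a scalar like the 0.1166 reported above.
    """
    logp_orig = F.log_softmax(model_orig(input_ids).logits[:, -1, :], dim=-1)
    logp_abl = F.log_softmax(model_abl(input_ids).logits[:, -1, :], dim=-1)
    return F.kl_div(logp_abl, logp_orig,
                    log_target=True, reduction="batchmean").item()
```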

## Subjective results

Yes, it works.

## SOMbliteration parameters

| Parameter | Value |
|---|---|
| direction_index | 12.32 |
| attn.o_proj.max_weight.0 | 0.92 |
| attn.o_proj.max_weight.1 | 1.07 |
| attn.o_proj.max_weight.2 | 1.29 |
| attn.o_proj.max_weight.3 | 0.91 |
| attn.o_proj.max_weight.4 | 1.15 |
| attn.o_proj.max_weight_position | 13.96 |
| attn.o_proj.min_weight.0 | 0.36 |
| attn.o_proj.min_weight.1 | 0.40 |
| attn.o_proj.min_weight.2 | 0.99 |
| attn.o_proj.min_weight.3 | 0.02 |
| attn.o_proj.min_weight.4 | 0.41 |
| attn.o_proj.min_weight_distance | 12.38 |
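
The .0 through .4 suffixes index the five directions, while max_weight_position and min_weight_distance shape how strongly each layer is ablated. The exact per-layer kernel is defined in the Heretic source; as an assumption, a linear falloff from the peak layer conveys the general shape:

```python
import numpy as np

def per_layer_weights(n_layers, max_weight, max_weight_position,
                      min_weight, min_weight_distance):
    """Assumed kernel: the ablation weight equals max_weight at the
    (fractional) layer index max_weight_position and falls off linearly
    with layer distance, reaching min_weight at min_weight_distance
    layers away and staying flat beyond that.
    """
    layers = np.arange(n_layers, dtype=float)
    dist = np.abs(layers - max_weight_position)
    frac = np.clip(dist / min_weight_distance, 0.0, 1.0)
    return max_weight + frac * (min_weight - max_weight)

# Example: direction 0 from the table above, assuming 24 layers.
w0 = per_layer_weights(24, max_weight=0.92, max_weight_position=13.96,
                       min_weight=0.36, min_weight_distance=12.38)
```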