# Qwen3.5 4B - Gabliterated
Qwen/Qwen3.5-4B with refusal behavior removed via Gabliteration — a multi-directional SVD-based abliteration method that captures primary and secondary refusal patterns missed by single-direction approaches. Hybrid architecture-aware extraction and intervention are applied automatically to handle Qwen3.5's mix of full and linear attention layers.
**Important:** This model will produce uncensored outputs. Use responsibly.
GGUF quantizations available at: jwest33/Qwen3.5-4B-gabliterated-GGUF
## Techniques Used
- Gabliteration (Multi-Directional SVD): Computes paired activation differences between harmful and harmless prompts, then extracts the top-k right singular vectors via SVD. A ridge-regularized projection removes multiple refusal directions simultaneously, capturing secondary refusal circuits that single-vector abliteration misses.
- Hybrid Architecture-Aware Intervention: Qwen3.5 interleaves full attention and linear attention layers (every 4th layer is full attention). Full attention layers receive the full ablation weight (1.0x), while linear attention layers receive a reduced weight (0.4x). Recurrent dynamics projections (`in_proj_a`, `in_proj_b`) are skipped to preserve the delta rule gating mechanism.
- Projected Refusal Direction (GrimJim's Method): The raw refusal direction is orthogonalized against the harmless mean to isolate the mechanistically specific refusal component, avoiding damage to general helpfulness.
- Winsorization: Per-dimension outlier clipping at the 99.5th percentile for cleaner refusal direction estimation.
- Welford Mean + Float64 Subtraction: Numerically stable streaming mean computation and double-precision subtraction to handle high cosine similarity between harmful/harmless activation means.
- Norm Preservation: Maintains original Frobenius norms of weight matrices after projection.
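The core projection described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the description, not the toolkit's actual code: function names, shapes, and the choice to left-multiply residual-writing weight matrices are assumptions.

```python
import numpy as np

def gabliteration_projection(harmful_acts, harmless_acts, k=2, ridge_lambda=0.15):
    """Build a ridge-regularized projection that attenuates the top-k
    refusal directions. Inputs are per-prompt activation vectors,
    shape (n_prompts, hidden); names and shapes are illustrative."""
    # Paired activation differences between harmful and harmless prompts
    diffs = harmful_acts - harmless_acts                    # (n, d)

    # Top-k right singular vectors span the dominant refusal subspace
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    V = vt[:k].T                                            # (d, k)

    # Ridge-regularized projection: P = I - V (V^T V + lambda I)^-1 V^T
    # (with lambda > 0 the refusal directions are damped, not zeroed)
    gram = V.T @ V + ridge_lambda * np.eye(k)
    return np.eye(V.shape[0]) - V @ np.linalg.solve(gram, V.T)

def ablate(W, P, weight=1.0):
    """Apply the projection to a weight matrix that writes into the
    residual stream, blended by a per-layer weight (1.0 for full
    attention, 0.4 for linear attention here), then restore the
    original Frobenius norm."""
    W_new = (1.0 - weight) * W + weight * (P @ W)
    return W_new * (np.linalg.norm(W) / np.linalg.norm(W_new))
```

With orthonormal singular vectors and `ridge_lambda = 0.15`, each removed direction is scaled by roughly `1 - 1/1.15 ≈ 0.13` rather than zeroed outright, which is the intended softening effect of the ridge term.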
## Configuration
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3.5-4B |
| Harmful Prompts | 2000 |
| Harmless Prompts | 1226 |
| Direction Multiplier | 1.05 |
| SVD Directions (k) | 2 |
| Ridge Lambda | 0.15 |
| Layer Scaling Beta | 0.5 |
| Skip First/Last Layers | 2 / 2 |
| Full Attention Weight | 1.0 |
| Linear Attention Weight | 0.4 |
| Winsorization Percentile | 0.995 |
| Precision | float32 |
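For readers scripting against these settings, the table can be expressed as a plain mapping. The key names below are hypothetical and do not reflect the abliterator toolkit's actual schema.

```python
# Hypothetical configuration mirroring the table above; key names are
# illustrative, not the toolkit's actual parameter names.
GABLITERATION_CONFIG = {
    "base_model": "Qwen/Qwen3.5-4B",
    "num_harmful_prompts": 2000,
    "num_harmless_prompts": 1226,
    "direction_multiplier": 1.05,
    "svd_directions": 2,            # k
    "ridge_lambda": 0.15,
    "layer_scaling_beta": 0.5,
    "skip_first_layers": 2,
    "skip_last_layers": 2,
    "full_attention_weight": 1.0,
    "linear_attention_weight": 0.4,
    "winsorization_percentile": 0.995,
    "precision": "float32",
}
```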
## Architecture
Qwen3.5-4B is a hybrid attention vision-language model with 32 layers:
- 8 full attention layers (indices 3, 7, 11, 15, 19, 23, 27, 31)
- 24 linear attention layers (all others)
- Hidden size: 2560, attention heads: 16, KV heads: 4
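The hybrid layer schedule above follows a simple rule, sketched here with illustrative function names (the toolkit may organize this differently):

```python
# In Qwen3.5-4B, every 4th layer (0-indexed 3, 7, ..., 31) is full
# attention; the rest use linear attention.
NUM_LAYERS = 32

def is_full_attention(layer_idx: int) -> bool:
    return layer_idx % 4 == 3

def ablation_weight(layer_idx: int) -> float:
    # Per-layer ablation weights from the configuration table
    return 1.0 if is_full_attention(layer_idx) else 0.4

full_layers = [i for i in range(NUM_LAYERS) if is_full_attention(i)]
# full_layers == [3, 7, 11, 15, 19, 23, 27, 31]
```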
## Credits
- Base Model: Qwen/Qwen3.5-4B by the Qwen Team
- Gabliteration: Gabliteration: Adaptive Multi-Directional Neural Weight Modification for Selective Behavioral Alteration in Large Language Models — Gökdeniz Gülmez (2026)
- Projected Abliteration / Per-Neuron Norm Preservation: Norm-Preserving Biprojected Abliteration — Jim Lai (grimjim) (2025)
- Refusal Direction Discovery: Refusal in Language Models Is Mediated by a Single Direction — Arditi et al. (2024)
- Representation Engineering: Representation Engineering: A Top-Down Approach to AI Transparency — Zou et al. (2023)
## Toolkit
github.com/jwest33/abliterator
## License
This model inherits the Apache 2.0 license from the base model.
## Disclaimer
This model is provided for research and educational purposes. The creators are not responsible for any misuse. Users are solely responsible for ensuring their use complies with applicable laws and ethical standards.