---
license: apache-2.0
base_model: unsloth/gemma-2-2b-it
tags:
- fine-tuned
- text-correction
- german
- asr-postprocessing
- gguf
- unsloth
- lora
---
German Audio Correction - Gemma-2-2B-IT Fine-Tune
Overview
This model is a fine-tuned version of unsloth/gemma-2-2b-it, specialized for correcting German text from audio transcriptions (STT/ASR outputs). It fixes phonetic errors (e.g., misheard words), awkward sentence structures, repetitions (e.g., "die die"), and punctuation issues while preserving the original meaning. Designed for chaining after an STT model like Whisper, it helps create clean, readable German text for applications like note-taking or web apps.
Background: Proprietary APIs (e.g., OpenAI) offer quick inference but limit control. This open-source fine-tune runs locally (e.g., on laptops with 32GB RAM via Ollama/GGUF) or via HF, allowing customization and offline use.
Base Model
- Original: unsloth/gemma-2-2b-it (2B parameters, instruct-tuned for tasks like generation/editing; loaded in 4-bit during training for efficiency).
- Architecture: Gemma-2 (Google's lightweight transformer variant, optimized for speed on consumer hardware).
Fine-Tuning Details
- Dataset: Custom JSONL (~200 pairs of noisy/clean German texts, user-generated from audio scenarios, e.g., {"prompt": "ich habe das Ticket erstellt; die die also Nummer ahm genau ist SUP-5555.", "response": "Ich habe das Ticket erstellt; die Nummer ist SUP-5555."}).
- Method: LoRA (Parameter-Efficient Fine-Tuning) with r=16, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"], dropout=0.05, via Unsloth for 2x faster training on quantized models.
- Training: 200 steps, effective batch size=16 (per_device=4, accumulation=4), learning_rate=2e-4, optimizer=adamw_8bit, on Colab T4 GPU. No eval set (small dataset); monitored via loss logs.
- Quantization: Exported as Q4_K_M GGUF (~1.5GB) for local inference (balances quality/speed; runs on CPU/GPU).
- Hardware: Tested on ASUS Zenbook (Ryzen 9, 32GB RAM) for local runs.
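The training pairs described above follow a simple one-record-per-line JSONL layout. A minimal sketch of loading and sanity-checking such a file before training (the `prompt`/`response` field names come from the example pair above; the file path and helper name are hypothetical):

```python
import json

def load_pairs(path):
    """Read a JSONL file of {"prompt": ..., "response": ...} training pairs,
    skipping blank lines and records with missing fields."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # Keep only complete noisy/clean pairs
            if record.get("prompt") and record.get("response"):
                pairs.append(record)
    return pairs
```

With only ~200 pairs, a check like this is cheap insurance: a single malformed line would otherwise surface as a cryptic error mid-training.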
Usage
Local Inference with Ollama
- Pull:

```shell
ollama pull vallehorn/german-audio-correction-gemma2:q4_k_m
```

- Run:

```shell
ollama run vallehorn/german-audio-correction-gemma2
```

- Prompt example: "ich habe das Ticket erstellt; die die also Nummer ahm genau ist SUP-5555."
  Expected output: "Ich habe das Ticket erstellt; die Nummer ist SUP-5555."
Use the Modelfile for custom params (e.g., temperature=0.7).
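A Modelfile for this setup might look as follows; a minimal sketch, where the GGUF filename and the SYSTEM prompt wording are assumptions, not the ones shipped with the model:

```
FROM ./german-audio-correction-gemma2.Q4_K_M.gguf
PARAMETER temperature 0.7
SYSTEM "Correct the following German transcript: fix misheard words, repetitions, and punctuation without changing the meaning."
```

Build it locally with `ollama create <name> -f Modelfile`.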
Python Inference (via Unsloth/HF)
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model and switch on Unsloth's fast inference mode
model, tokenizer = FastLanguageModel.from_pretrained("ValleHorn/german-audio-correction-gemma2")
model = FastLanguageModel.for_inference(model)

messages = [{"role": "user", "content": "Noisy German text here"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

# do_sample=True is required for the temperature setting to take effect
outputs = model.generate(input_ids=inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
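When the model is instead served through Ollama, the same correction can be requested over its local HTTP API (default port 11434) using only the standard library; a minimal sketch, where the helper names are hypothetical:

```python
import json
import urllib.request

def build_generate_payload(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False requests one complete JSON response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def correct_text(text, host="http://localhost:11434"):
    """Send noisy German text to the locally served model and return the correction."""
    payload = build_generate_payload("vallehorn/german-audio-correction-gemma2", text)
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the generated text under the "response" key
        return json.loads(resp.read())["response"]
```

This is the pattern for chaining after an STT step in a web app: pass the Whisper transcript to `correct_text` and store the cleaned result.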