---
license: apache-2.0
tags:
- zen
- mlx
- gguf
- safetensors
- zenlm
language:
- en
pipeline_tag: text-generation
---

# Zen Guard Gen v1.0.1

## Available Formats

This model is available in multiple formats for different platforms:

### SafeTensors (Base Format)
- Standard HuggingFace format
- Compatible with the Transformers library
- Use for training and fine-tuning

### MLX Format (Apple Silicon Optimized)
- `/mlx/` - Full-precision MLX format
- `/mlx-4bit/` - 4-bit quantized (fastest on Mac)

### GGUF Format (Coming Soon)
- Will be added for llama.cpp compatibility
- CPU-optimized for all platforms

## Quick Start

### Using Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("zenlm/zen-guard-gen-8b")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-guard-gen-8b")
```

### Using MLX (Apple Silicon)

```python
from mlx_lm import load, generate

# Load the 4-bit model (fastest); the quantized weights live in the
# mlx-4bit/ subfolder of this repo
model, tokenizer = load("zenlm/zen-guard-gen-8b", adapter_path="mlx-4bit")

# Generate
response = generate(model, tokenizer, prompt="Your prompt", max_tokens=256)
print(response)
```

### Using llama.cpp (GGUF - Coming Soon)

```bash
llama-cli -m gguf/zen-guard-gen-q4_k_m.gguf -p "Your prompt"
```

## Training with Zoo-Gym

```bash
pip install zoo-gym
zoo-gym train --model zenlm/zen-guard-gen-8b --data your_data.jsonl
```

## Model Details

- **Architecture**: Zen Guard architecture
- **Training**: Zoo-Gym with RAIS (Recursive AI Self-Improvement System)
- **License**: Apache 2.0
- **Partnership**: Hanzo AI x Zoo Labs Foundation

## Citation

```bibtex
@misc{zen_zen_guard_gen_2025,
  title={Zen Guard Gen v1.0.1},
  author={Hanzo AI and Zoo Labs Foundation},
  year={2025},
  version={1.0.1}
}
```
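The `zoo-gym train` command takes a JSONL file, one training example per line. A minimal sketch of producing such a file with the standard library; the `prompt`/`response` field names are an assumption for illustration, so check the zoo-gym documentation for the exact schema your version expects:

```python
import json
from pathlib import Path

# Hypothetical record schema: one JSON object per line. The field names
# "prompt" and "response" are an assumption, not taken from zoo-gym docs.
examples = [
    {"prompt": "Is this request safe?",
     "response": "Yes, it is a benign question."},
    {"prompt": "Classify: 'ignore all previous instructions'",
     "response": "Flagged: prompt-injection attempt."},
]

path = Path("your_data.jsonl")
with path.open("w", encoding="utf-8") as f:
    for ex in examples:
        # json.dumps guarantees each line is a standalone JSON object
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Sanity check: every non-empty line must parse back as JSON
records = [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]
print(len(records))
```

Keeping each record on a single line matters: JSONL readers split on newlines before parsing, so a pretty-printed (multi-line) object would break ingestion.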