Barcenas E2B

Based on Google's Gemma 4 E2B-it and trained on the claude-opus-4.6-10000x dataset created by Roman Yemelyanov.

This is my first fine-tuning of a Mixture-of-Experts (MoE) LLM.

The goal is a small LLM capable of running on a smartphone, with its capabilities boosted by fine-tuning on ultra-high-quality data from Claude Opus 4.6.
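A minimal usage sketch with the Hugging Face `transformers` library, assuming the model follows the standard causal-LM interface of its Gemma base and ships a chat template; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Danielbrdz/Barcenas-E2B"


def chat(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a single-turn reply from the model (downloads weights on first call)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Wrap the prompt in the model's chat template before generating.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)

    # Strip the prompt tokens and decode only the newly generated reply.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(chat("Hello!"))
```

On-device inference on a smartphone would typically go through a quantized export (e.g. a GGUF build) rather than this full-precision path.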

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽

Safetensors · Model size: 5B params · Tensor type: BF16