From Tifa-DeepsexV2-7b-MGRPO-GGUF-Q4: https://huggingface.co/ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-Q4
In my experience, Q4_K_S and Q4_K_M usually hit the balance point between model size, quantization quality, and speed.
In some benchmarks, a larger-parameter model at a lower-bit quantization tends to outperform a smaller-parameter model at a higher-bit quantization.
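To try one of these quants locally, you can fetch the file and load it with llama-cpp-python. Below is a minimal sketch: the repo ID matches this page, but the exact GGUF filename is an assumption, so check the repository's file list before running.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed:
#   pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant. The filename below is hypothetical;
# verify it against the actual files in the repository.
model_path = hf_hub_download(
    repo_id="btaskel/Tifa-DeepsexV2-7b-MGRPO-GGUF",
    filename="Tifa-DeepsexV2-7b-MGRPO-Q4_K_M.gguf",  # assumed name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; raise it if you have the RAM
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```

Q4_K_M keeps more of the attention and feed-forward weights at higher precision than Q4_K_S, so it is the safer default when the extra few hundred megabytes fit in memory.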
Available quantizations: 4-bit, 5-bit, 6-bit