GGUF Quants for: yulan-team/YuLan-Mini
Model by: RUC-GSAI-YuLan (thank you!)
Quants by: quantflex
Run with llama.cpp
No K-quants are included because the model's tensor column counts are not divisible by 256, which the K-quant super-block layout requires.
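A minimal example of running one of these quants with llama.cpp's CLI. The GGUF filename below is illustrative, not a file confirmed to be in this repo; substitute whichever quant you downloaded.

```shell
# Run an interactive prompt against a downloaded quant.
# -m: path to the GGUF file (filename here is an assumption)
# -p: prompt text, -n: max tokens to generate
./llama-cli -m YuLan-Mini-Q8_0.gguf -p "Hello" -n 64
```

The same `-m` path works with llama.cpp's server binary if you prefer an HTTP endpoint.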
Quantizations included:
- 4-bit
- 5-bit
- 8-bit
- 16-bit
- 32-bit
Model tree for quantflex/YuLan-Mini-GGUF
Base model: yulan-team/YuLan-Mini