Mostafa Maher
Devostafa
·
AI & ML interests
None yet
Recent Activity
reacted to danielhanchen's post with 🤗 about 9 hours ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗
Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
reacted to danielhanchen's post with 🔥 about 9 hours ago
liked a model 1 day ago
mradermacher/Think2SQL-4B-i1-GGUF
Organizations
None yet