# LFM2.5-1.2B-Thinking-math-safe

🎯 MATH-optimized | 📦 Safe pruning | ⚡ 1% weights pruned

This model is a conservatively pruned version of LiquidAI/LFM2.5-1.2B-Thinking.
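The card does not state which pruning criterion was used. As an illustration only, the sketch below shows plain magnitude pruning, which zeroes the fraction of weights with the smallest absolute values; the 1% fraction matches the card's stated weight reduction, while the function name and all other details are hypothetical:

```python
import random

def magnitude_prune(weights, fraction=0.01):
    """Zero out the `fraction` of entries with the smallest absolute value.

    Illustrative sketch only; real pruning operates per-tensor or globally
    on a model's state dict, not on a flat Python list.
    """
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    # Magnitude of the k-th smallest entry becomes the pruning threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(10_000)]
pruned = magnitude_prune(weights)
sparsity = sum(w == 0.0 for w in pruned) / len(pruned)
print(f"{sparsity:.1%} of weights pruned")  # → 1.0% of weights pruned
```

A "safe" or conservative mode would additionally cap the fraction low enough (here, 1%) that downstream task scores are unlikely to regress.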

## Performance Comparison

| Category | Original | Pruned | Change |
|-----------|---------:|-------:|--------|
| Python | 0.0% | 5.0% | ↑ 5.0% |
| HTML | 25.0% | 25.0% | → |
| Trivia | 97.5% | 97.5% | → |
| Math | 100.0% | 100.0% | ⭐ → |
| Reasoning | 82.5% | 82.5% | → |
| Medical | 95.0% | 95.0% | → |
| Linux | 35.0% | 40.0% | ↑ 5.0% |
| Writing | 80.0% | 82.5% | ↑ 2.5% |

**Average:** 64.4% → 65.9% (+1.6%)

**Math retention:** 100.0%
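The headline numbers are simple unweighted means of the eight per-category scores; recomputing them from the table above confirms the rounding:

```python
# Scores copied from the performance comparison table (percent)
original = {"Python": 0.0, "HTML": 25.0, "Trivia": 97.5, "Math": 100.0,
            "Reasoning": 82.5, "Medical": 95.0, "Linux": 35.0, "Writing": 80.0}
pruned = {"Python": 5.0, "HTML": 25.0, "Trivia": 97.5, "Math": 100.0,
          "Reasoning": 82.5, "Medical": 95.0, "Linux": 40.0, "Writing": 82.5}

avg_orig = sum(original.values()) / len(original)  # 64.375
avg_pruned = sum(pruned.values()) / len(pruned)    # 65.9375
print(f"{avg_orig:.1f}% -> {avg_pruned:.1f}% (+{avg_pruned - avg_orig:.1f}%)")
# prints: 64.4% -> 65.9% (+1.6%)
```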


## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pruned checkpoint and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("CompactAI/LFM2.5-1.2B-Thinking-math-safe")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/LFM2.5-1.2B-Thinking-math-safe")

# Tokenize a prompt and generate up to 100 new tokens
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Technical Details

| Property | Value |
|----------|-------|
| Base Model | LiquidAI/LFM2.5-1.2B-Thinking |
| Specialization | Math |
| Prune Mode | Safe |
| Weight Reduction | 1% of weights pruned |

## License

This model inherits the license from the base model.

**Model size:** 1B params (Safetensors) · **Tensor type:** F16

