Ministral-3-14B-Instruct-2512-FP8-dynamic

Model Overview

  • Model Architecture: MistralForCausalLM
    • Input: Text
    • Output: Text
  • Model Optimizations:
    • Weight quantization: FP8
    • Activation quantization: FP8
  • Intended Use Cases:
    • Reasoning.
    • Function calling.
    • Subject matter experts via fine-tuning.
    • Multilingual instruction following.
    • Translation.
  • Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws).
  • Release Date: 05/05/2025
  • Version: 1.0
  • Model Developers: RedHat (Neural Magic)

Model Optimizations

This model was obtained by quantizing the weights and activations of mistralai/Ministral-3-14B-Instruct-2512-BF16 to the FP8 data type, ready for inference with vLLM. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
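
As a rough back-of-the-envelope illustration (not an official sizing guide; it ignores the KV cache, activations, and any unquantized layers), the weight footprint can be estimated from the parameter count and bytes per parameter:

num_params = 14e9                  # ~14B parameters
bf16_gb = num_params * 2 / 1e9     # BF16: 2 bytes per parameter -> ~28 GB
fp8_gb = num_params * 1 / 1e9      # FP8: 1 byte per parameter  -> ~14 GB
print(f"BF16 weights: ~{bf16_gb:.0f} GB, FP8 weights: ~{fp8_gb:.0f} GB")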

Deployment

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/Ministral-3-14B-Instruct-2512-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.15, top_p=1.0, top_k=20, min_p=0, max_tokens=65536)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
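
For example, once an OpenAI-compatible server is running (e.g. started with vllm serve RedHatAI/Ministral-3-14B-Instruct-2512-FP8-dynamic), it can be queried with the standard OpenAI Python client. This is a minimal sketch; the port and sampling parameters below are assumptions, not recommendations.

from openai import OpenAI

# Assumes a local server started with:
#   vllm serve RedHatAI/Ministral-3-14B-Instruct-2512-FP8-dynamic
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/Ministral-3-14B-Instruct-2512-FP8-dynamic",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.15,
    max_tokens=512,
)
print(response.choices[0].message.content)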

Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

from transformers import Mistral3ForConditionalGeneration, MistralCommonBackend
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation

MODEL_ID = "mistralai/Ministral-3-14B-Instruct-2512-BF16"

model = Mistral3ForConditionalGeneration.from_pretrained(MODEL_ID, device_map="auto")
tokenizer = MistralCommonBackend.from_pretrained(MODEL_ID)

recipe = """
    quant_stage:
      quant_modifiers:
        QuantizationModifier:
          ignore: ["re:.*lm_head", "re:.*vision_tower.*", "re:.*multi_modal_projector.*"]
          config_groups:
            group_0:
              targets: [Linear]
              weights:
                num_bits: 8
                type: float
                strategy: channel
                symmetric: true
                dynamic: false
                observer: mse
              input_activations:
                num_bits: 8
                type: float
                strategy: token
                symmetric: true
                dynamic: true
                observer: minmax
"""

# Apply quantization.
oneshot(model=model, recipe=recipe)

# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(
    model.device
)
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")


# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-DYNAMIC-OBSERVER"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
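
As a quick sanity check after the run (a minimal sketch, assuming the compressed-tensors export records its settings under a quantization_config entry in the checkpoint's config.json; the exact key layout may vary across llm-compressor versions), the saved quantization scheme can be inspected:

import json, os

# SAVE_DIR is the output directory produced by the snippet above.
with open(os.path.join(SAVE_DIR, "config.json")) as f:
    config = json.load(f)

# Expect an FP8 weight/activation scheme targeting Linear layers.
print(json.dumps(config.get("quantization_config", {}), indent=2))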

Evaluation

The model was evaluated on the IFEval and MMMU benchmarks using lm-evaluation-harness, and on reasoning tasks using lighteval. vLLM was used for all evaluations.

Evaluation details

lm-evaluation-harness

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Ministral-3-14B-Instruct-2512-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.7,max_model_len=262144,enable_chunked_prefill=True,tensor_parallel_size=1 \
  --tasks ifeval,mmmu_val \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

lighteval

litellm_config.yaml

model_parameters:
  provider: "hosted_vllm"
  model_name: "hosted_vllm/RedHatAI/Ministral-3-14B-Instruct-2512-FP8-dynamic"
  base_url: "http://0.0.0.0:8000/v1"
  api_key: ""
  timeout: 1200
  concurrent_requests: 16
  generation_parameters:
    temperature: 0.15
    max_new_tokens: 65536
    top_p: 0.95
    seed: 0
lighteval endpoint litellm litellm_config.yaml "aime25"
lighteval endpoint litellm litellm_config.yaml "math_500"
lighteval endpoint litellm litellm_config.yaml "gpqa:diamond"

Accuracy

| Category | Benchmark | Ministral-3-14B-Instruct-2512-BF16 | Ministral-3-14B-Instruct-2512-FP8-dynamic (this model) | Recovery |
|---|---|---|---|---|
| Vision | MMMU | 55.33 | 54.44 | 98.4% |
| OpenLLM v2 | IFEval (0-shot) | 77.34 | 76.86 | 99.4% |
| Reasoning (generation) | AIME 2025 | 36.67 | 30.0 | 81.81% |
| Reasoning (generation) | GPQA diamond | 58.59 | 66.16 | 112.9% |
| Reasoning (generation) | Math-lvl-5 | 88.6 | 89.4 | 100.9% |
| Reasoning (generation) | Average | 61.29 | 61.85 | 100.9% |