Original Model Link: https://huggingface.co/LGAI-EXAONE/EXAONE-4.0.1-32B

name: EXAONE-4.0.1-32B-MLX-Q4
base_model: LGAI-EXAONE/EXAONE-4.0-32B
license: other
pipeline_tag: text-generation
tasks:
 - text-generation
 - text2text-generation
language:
 - en
 - ko
 - es
license_link: LICENSE
get_started_code: uvx --from mlx-lm mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --prompt 'Test Prompt'

EXAONE-4.0.1-32B-MLX-Q4

EXAONE-4.0.1-32B-MLX-Q4 is a hybrid thinking/non-thinking model with tool use from LG AI Research. A multilingual model, it supports English, Korean, and Spanish. It works best with temperature < 0.6 in all modes and top_p = 0.95 for thinking mode. presence_penalty = 1.5 is recommended if the model degenerates into repetition, and temperature = 0.1 helps force answers into a single language.
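The recommended settings above can be passed directly on the mlx-lm command line. A minimal sketch for thinking mode (flag names follow the mlx_lm.generate CLI; verify them against your installed mlx-lm version, and note that running this downloads the full model and requires an Apple silicon Mac):

```shell
# Thinking mode: temperature below 0.6 and top_p = 0.95, per the recommendations above
uvx --from mlx-lm mlx_lm.generate \
  --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 \
  --prompt 'Test Prompt' \
  --temp 0.5 \
  --top-p 0.95 \
  --max-tokens 512
```

If output degenerates into repetition, drop the temperature and consider the presence_penalty = 1.5 recommendation above (penalty support varies by mlx-lm version).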

MLX is a machine-learning framework for Apple computers with ARM-based M-series processors (M1/M2/M3/M4), built on the Metal graphics API.

Generation using uv (https://docs.astral.sh/uv/):

uvx --from mlx-lm mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --prompt 'Test Prompt'

Generation using pip:

pip install mlx-lm
mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --prompt 'Test Prompt'