Original Model Link: https://huggingface.co/LGAI-EXAONE/EXAONE-4.0.1-32B
name: EXAONE-4.0.1-32B-MLX-Q4
base_model: LGAI-EXAONE/EXAONE-4.0-32B
license: other
pipeline_tag: text-generation
tasks:
- text-generation
- text2text-generation
language:
- en
- ko
- es
license_link: LICENSE
get_started_code: uvx --from mlx-lm mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --prompt 'Test Prompt'
EXAONE-4.0.1-32B-MLX-Q4
EXAONE-4.0.1-32B-MLX-Q4 is a 4-bit MLX quantization of LGAI's EXAONE 4.0.1 32B, a hybrid thinking/non-thinking model with tool use. The model is multilingual, supporting English, Korean, and Spanish, and works best with temperature < 0.6 in all modes and top_p = 0.95 for thinking mode. presence_penalty = 1.5 is recommended if output degenerates, and temperature = 0.1 forces answers into a single language (a tuned example command follows the basic ones below).
MLX is a machine-learning framework that uses Apple's Metal graphics API and is supported on Apple computers with ARM M-series processors (M1/M2/M3/M4).
Generation using uv (https://docs.astral.sh/uv/):
uvx --from mlx-lm mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --prompt 'Test Prompt'
Generation using pip:
pip install mlx-lm
mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --prompt 'Test Prompt'
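To apply the recommended sampling settings from above (temperature below 0.6, top_p = 0.95 for thinking), pass sampling flags to the same command. A minimal sketch, assuming a recent mlx-lm release where the CLI exposes --temp, --top-p, and --max-tokens (flag names may differ in other versions):
uvx --from mlx-lm mlx_lm.generate --model exdysa/EXAONE-4.0.1-32B-MLX-Q4 --temp 0.5 --top-p 0.95 --max-tokens 512 --prompt 'Test Prompt'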