🦥 Unsloth Training Scripts for HF Jobs
UV scripts for fine-tuning LLMs and VLMs using Unsloth on HF Jobs (on-demand cloud GPUs). UV handles dependency installation automatically, so you can run these scripts directly without any local setup.
These scripts can also be used or adapted by agents to train models for you.
Prerequisites
- A Hugging Face account
- The HF CLI installed and authenticated (`hf auth login`; see the quick check below)
- A dataset on the Hub in the appropriate format (see the format requirements below). A strong LLM agent can often convert your data into the right format if needed.
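If you are unsure whether the CLI is set up, a quick check (a sketch assuming the current `hf` CLI; subcommand names may differ in older versions):

```bash
hf auth login    # paste a token with write access so jobs can push to the Hub
hf auth whoami   # confirm which account is authenticated
```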
Data Formats
LLM Fine-tuning (SFT)
Requires conversation data in ShareGPT or similar format:
```json
{
  "messages": [
    {"from": "human", "value": "What is the capital of France?"},
    {"from": "gpt", "value": "The capital of France is Paris."}
  ]
}
```
The script auto-converts common formats (ShareGPT, Alpaca, etc.) via `standardize_data_formats`. See mlabonne/FineTome-100k for a working dataset example.
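If your data is in some other shape, a small `datasets` script is usually enough to reshape it. A minimal sketch, assuming an Alpaca-style dataset with `instruction` and `output` columns (the source and target repo names are placeholders):

```python
from datasets import load_dataset

def to_messages(example):
    # Map instruction/output pairs onto the ShareGPT-style "messages" structure
    return {
        "messages": [
            {"from": "human", "value": example["instruction"]},
            {"from": "gpt", "value": example["output"]},
        ]
    }

ds = load_dataset("your-username/alpaca-style-data", split="train")
ds = ds.map(to_messages, remove_columns=ds.column_names)
ds.push_to_hub("your-username/sft-ready-data")
```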
VLM Fine-tuning
Requires `images` and `messages` columns:

```json
{
  "images": [<PIL.Image>],  # List of images
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "image"},
        {"type": "text", "text": "What's in this image?"}
      ]
    },
    {
      "role": "assistant",
      "content": [
        {"type": "text", "text": "A golden retriever playing fetch in a park."}
      ]
    }
  ]
}
```
See davanstrien/iconclass-vlm-sft for a working dataset example, and davanstrien/iconclass-vlm-qwen3-best for a model trained with these scripts.
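To produce a dataset in this shape from local files, something like the following works (a sketch; the file paths, prompt text, and target repo are placeholders):

```python
from datasets import Dataset, Image, Sequence
from PIL import Image as PILImage

records = [
    {"file": "images/dog.jpg", "caption": "A golden retriever playing fetch in a park."},
    # ... one entry per training example
]

def to_example(rec):
    return {
        "images": [PILImage.open(rec["file"])],
        "messages": [
            {"role": "user", "content": [
                {"type": "image"},
                {"type": "text", "text": "What's in this image?"},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": rec["caption"]},
            ]},
        ],
    }

ds = Dataset.from_list([to_example(r) for r in records])
ds = ds.cast_column("images", Sequence(Image()))  # store images with the HF Image feature
ds.push_to_hub("your-username/my-vlm-dataset")
```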
Continued Pretraining
Any dataset with a `text` column:

```json
{"text": "Your domain-specific text here..."}
```

Use `--text-column` if your column has a different name.
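If your corpus lives in local files, one way to get it into this shape (a sketch; the folder path and repo name are placeholders):

```python
from pathlib import Path
from datasets import Dataset

# Read every .txt file in the folder into a single "text" column
texts = [p.read_text(encoding="utf-8") for p in sorted(Path("corpus").glob("*.txt"))]
ds = Dataset.from_dict({"text": texts})
ds.push_to_hub("your-username/domain-corpus")
```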
Usage
View available options for any script:
```bash
uv run https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py --help
```
LLM fine-tuning
Fine-tune LFM2.5-1.2B-Instruct, a compact and efficient text model from Liquid AI:
```bash
hf jobs uv run \
    https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py \
    --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
    -- --dataset mlabonne/FineTome-100k \
    --num-epochs 1 \
    --eval-split 0.2 \
    --output-repo your-username/lfm-finetuned
```
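Once the job finishes, the model in `--output-repo` can be loaded for local inference. A sketch using `peft` and `transformers`, assuming the default adapter-only upload (with `--merge-model` you would instead load the repo directly with `AutoModelForCausalLM`; the LFM2.5 architecture may also require a recent transformers release):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_repo = "your-username/lfm-finetuned"  # the --output-repo used above

# The adapter config records which base model it was trained on
cfg = PeftConfig.from_pretrained(adapter_repo)
base = AutoModelForCausalLM.from_pretrained(cfg.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_repo)
tokenizer = AutoTokenizer.from_pretrained(cfg.base_model_name_or_path)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is the capital of France?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```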
VLM fine-tuning
```bash
hf jobs uv run \
    https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py \
    --flavor a100-large --secrets HF_TOKEN \
    -- --dataset your-username/dataset \
    --trackio-space your-username/trackio \
    --output-repo your-username/my-model
```
Continued pretraining
```bash
hf jobs uv run \
    https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/continued-pretraining.py \
    --flavor a100-large --secrets HF_TOKEN \
    -- --dataset your-username/domain-corpus \
    --text-column content \
    --max-steps 1000 \
    --output-repo your-username/domain-llm
```
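After submitting any of these, the job runs remotely; you can check on it from the terminal (a sketch, assuming the standard `hf jobs` subcommands; the job ID is printed when the job is submitted):

```bash
hf jobs ps               # list your jobs and their status
hf jobs logs <job-id>    # stream logs for a specific job
```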
With Trackio monitoring
```bash
hf jobs uv run \
    https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py \
    --flavor a10g-small --secrets HF_TOKEN \
    -- --dataset mlabonne/FineTome-100k \
    --trackio-space your-username/trackio \
    --output-repo your-username/lfm-finetuned
```
Scripts
| Script | Base Model | Task |
|---|---|---|
| `sft-lfm2.5.py` | LFM2.5-1.2B-Instruct | LLM fine-tuning (recommended) |
| `sft-qwen3-vl.py` | Qwen3-VL-8B | VLM fine-tuning |
| `sft-gemma3-vlm.py` | Gemma 3 4B | VLM fine-tuning (smaller) |
| `continued-pretraining.py` | Qwen3-0.6B | Domain adaptation |
Common Options
| Option | Description | Default |
|---|---|---|
| `--dataset` | HF dataset ID | required |
| `--output-repo` | Where to save the trained model | required |
| `--max-steps` | Number of training steps | 500 |
| `--num-epochs` | Train for N epochs instead of steps | - |
| `--eval-split` | Fraction held out for evaluation (e.g., 0.2) | 0 (disabled) |
| `--batch-size` | Per-device batch size | 2 |
| `--gradient-accumulation` | Gradient accumulation steps | 4 |
| `--lora-r` | LoRA rank | 16 |
| `--learning-rate` | Learning rate | 2e-4 |
| `--merge-model` | Upload merged model (not just the adapter) | false |
| `--trackio-space` | HF Space for live monitoring | - |
| `--run-name` | Custom name for the Trackio run | auto |
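As an illustration of how these compose (the values here are arbitrary, and `--merge-model` is assumed to be a boolean flag), a fuller invocation of the LLM script might look like:

```bash
hf jobs uv run \
    https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py \
    --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
    -- --dataset mlabonne/FineTome-100k \
    --num-epochs 1 \
    --eval-split 0.1 \
    --batch-size 4 \
    --gradient-accumulation 2 \
    --lora-r 32 \
    --learning-rate 1e-4 \
    --merge-model \
    --output-repo your-username/lfm-finetuned
```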
Tips
- Use `--max-steps 10` to verify everything works before a full run
- `--eval-split 0.1` helps detect overfitting
- Run `hf jobs hardware` to see GPU pricing (A100-large ~$2.50/hr, L40S ~$1.80/hr)
- Add `--streaming` for very large datasets
- The first training step may take a few minutes (CUDA kernel compilation)