Fine-Tuning

The process of further training a pre-trained model on task-specific or domain-specific data to improve its performance on that task.

Fine-tuning takes a general pre-trained model (like GPT-3.5) and trains it further on a smaller dataset targeted at your use case. Instead of training from scratch (which costs millions), fine-tuning leverages existing knowledge and adapts it for your task.

Fine-tuning works well for style adaptation ("respond like a pirate"), domain specialization (medical terminology), or enforcing a specific output format ("always answer in JSON"). OpenAI and other major providers offer fine-tuning APIs: you supply input-output example pairs, and the model learns to reproduce that pattern.
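As a concrete sketch of what "supplying input-output pairs" looks like: OpenAI's chat fine-tuning API accepts a JSONL file where each line is one example conversation. The company name, file name, and example messages below are hypothetical; only the `{"messages": [...]}` structure reflects the documented format.

```python
import json

# Hypothetical training examples in OpenAI's chat fine-tuning JSONL format:
# each line of the uploaded file is one conversation to imitate.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a cheerful support agent for Acme Co."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Happy to help! Go to Settings > Security and click 'Reset password'."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a cheerful support agent for Acme Co."},
        {"role": "user", "content": "Can I change my billing date?"},
        {"role": "assistant", "content": "Absolutely! You can pick a new billing date under Account > Billing."},
    ]},
]

# Write one JSON object per line (JSONL), as the fine-tuning endpoint expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Read the file back to confirm every line parses as a standalone example.
with open("train.jsonl") as f:
    parsed = [json.loads(line) for line in f]
print(len(parsed))
```

The resulting file would then be uploaded to the provider and referenced when creating a fine-tuning job; the system prompt is repeated in every example so the tuned model associates it with the desired tone.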

Fine-tuning is far cheaper than training from scratch, but it requires careful data curation—garbage in, garbage out. A few hundred high-quality examples often beat a large dataset of low-quality ones.

Example

Fine-tune GPT-3.5 on 500 examples of customer support conversations so it responds in your company's tone.