# Common Fine-Tuning Methods
| Method Name | Description | Use Case / Notes |
| --- | --- | --- |
| Full Fine-Tuning | Train all weights of the pretrained model on your dataset | Best for large datasets; very GPU-intensive (sketch below) |
| Feature Extraction | Freeze the backbone (encoder) and train only the decoder/head | Good for small datasets and low GPU budgets (sketch below) |
| LoRA (Low-Rank Adaptation) | Adds small trainable low-rank adapter matrices to the pretrained attention layers | Extremely memory-efficient; works on small datasets (sketch below) |
| DreamBooth | Fine-tune Stable Diffusion to generate custom subjects/styles | Specialized for image personalization |
| Adapter Tuning | Insert small adapter modules into transformer layers | Similar to LoRA but more modular (sketch below) |
| Prompt Tuning / Prefix Tuning | Train soft prompt embeddings/tokens without changing the main model weights | Works well for text and multimodal models (sketch below) |
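
**Full fine-tuning:** a minimal sketch of updating every weight. The base model (`distilbert-base-uncased`), optimizer, learning rate, and the two-example batch are illustrative assumptions, not part of the table above.

```python
# Full fine-tuning: every parameter of the pretrained model receives gradients.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased"  # assumed base model for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# All weights are trainable, so the optimizer sees the full parameter list.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["great movie", "awful movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

model.train()
loss = model(**batch, labels=labels).loss
loss.backward()        # gradients flow through the entire network
optimizer.step()
optimizer.zero_grad()
```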
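
**Feature extraction:** a sketch of freezing the backbone and training only the head, assuming the same illustrative model as above. `model.base_model` is the generic Transformers accessor for the underlying encoder.

```python
# Feature extraction: the pretrained encoder is frozen; only the new head trains.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # assumed base model
)

# Freeze every backbone parameter so no gradients or optimizer state are kept for it.
for param in model.base_model.parameters():
    param.requires_grad = False

head_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(head_params, lr=1e-3)  # a higher LR suits the small head
print(sum(p.numel() for p in head_params), "trainable parameters")
```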
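
**LoRA:** a sketch using the PEFT library (assumed installed via `pip install peft`). LoRA injects low-rank update matrices into the named attention projections; here GPT-2's fused `c_attn` projection is targeted as an example.

```python
# LoRA: freeze the base model and train small low-rank adapters inside attention.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the update
    target_modules=["c_attn"],  # GPT-2's fused query/key/value projection
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```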
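
**Adapter tuning:** a minimal Houlsby-style bottleneck adapter in plain PyTorch, assuming a hidden size of 768. In practice one such module is inserted after each transformer sub-layer and only the adapters are trained, which is what makes the approach modular.

```python
# Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)
        nn.init.zeros_(self.up.weight)  # start as a near-identity mapping,
        nn.init.zeros_(self.up.bias)    # so pretrained behavior is preserved

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

adapter = Adapter(hidden_size=768)      # assumed hidden size
out = adapter(torch.randn(2, 16, 768))  # (batch, seq, hidden) shape is kept
```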
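
**Prompt tuning:** a PEFT sketch in which only a small set of soft prompt vectors is learned while the base model stays frozen; the base model and the token count are assumptions of this example.

```python
# Prompt tuning: learn `num_virtual_tokens` soft embeddings prepended to inputs.
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # 20 trainable prompt vectors; all other weights frozen
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the prompt embeddings are trainable
```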