| Method Name | Description | Use Case / Notes |
| --------------------------------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------ |
| **Full Fine-Tuning** | Train all weights of the pretrained model on your dataset | Best for large datasets, very GPU-intensive |
| **Feature Extraction** | Freeze the backbone (encoder) and train only the decoder / head | Good for small datasets, low GPU needs (first sketch below) |
| **LoRA (Low-Rank Adaptation)** | Add small trainable low-rank matrices alongside the frozen pretrained weights, typically in the attention layers | Extremely memory-efficient, works even on small datasets (second sketch below) |
| **DreamBooth** | Fine-tune Stable Diffusion to generate **custom subjects / styles** | Specialized for image personalization |
| **Adapter Tuning** | Insert small adapter modules inside transformer layers | Similar to LoRA but more modular |
| **Prompt Tuning / Prefix Tuning** | Train soft prompt embeddings / prefix tokens without changing the main model weights | Works well for text & multimodal models (third sketch below) |
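
As a concrete illustration of feature extraction, here is a minimal PyTorch sketch that freezes a pretrained backbone and trains only a newly attached head. The ResNet-50 backbone and the 10-class output layer are illustrative assumptions, not something the table prescribes.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone (ResNet-50 chosen only as an example).
model = models.resnet50(weights="IMAGENET1K_V2")

# Freeze every pretrained weight.
for param in model.parameters():
    param.requires_grad = False

# Replace the head; freshly created modules have requires_grad=True
# by default, so only this layer is trained. 10 classes is a placeholder.
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the trainable head parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Because gradients flow only through the new head, this trains quickly and fits comfortably on a modest GPU.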
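For LoRA, a minimal sketch using Hugging Face's `peft` library. The checkpoint name and the `target_modules` list are assumptions: `q_proj` / `v_proj` match OPT- and LLaMA-style attention layers, and you would adjust them for other architectures.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint; substitute your own base model.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-specific)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)  # wraps the frozen base with LoRA adapters
model.print_trainable_parameters()    # typically well under 1% of all weights
```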
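Finally, a sketch of prompt tuning with the same `peft` library; the `gpt2` checkpoint and the choice of 8 virtual tokens are placeholders for illustration.

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder checkpoint

config = PromptTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=8,                        # length of the learned soft prompt
    prompt_tuning_init=PromptTuningInit.RANDOM,  # or TEXT, to initialize from a string
)

# Base weights stay frozen; only the soft prompt embeddings are trained.
model = get_peft_model(base, config)
model.print_trainable_parameters()
```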