Update README.md
tags:
- mistral-instruct
- instruct
- bnb
base_model: mistralai/mistral-7b-instruct-v0.2
---

# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!

We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing

For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and …

| Unsloth supports | Free Notebooks | Performance | Memory use |
|------------------|----------------|-------------|------------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)

- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
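The conversational notebooks expect ShareGPT-style data, i.e. a list of turns like `{"from": "human"/"gpt", "value": ...}`. As a rough illustration of what a chat template does with that data, here is a minimal sketch that flattens such a conversation into Mistral-Instruct's `[INST] ... [/INST]` prompt shape. The function name is hypothetical and the template is simplified; it is not Unsloth's actual chat-template code, which handles tokenization and edge cases for you.

```python
# Hypothetical helper: flatten a ShareGPT-style conversation into a
# simplified Mistral-Instruct prompt string. Illustrative only -- use the
# tokenizer's own chat template for real training or inference.
def sharegpt_to_mistral(conversation):
    """conversation: list of {"from": "human" | "gpt", "value": str} turns."""
    prompt = "<s>"
    for turn in conversation:
        if turn["from"] == "human":
            # User turns are wrapped in [INST] ... [/INST]
            prompt += f"[INST] {turn['value']} [/INST]"
        else:
            # Assistant turns follow the instruction and end the sequence
            prompt += f" {turn['value']}</s>"
    return prompt

convo = [
    {"from": "human", "value": "Hello!"},
    {"from": "gpt", "value": "Hi there."},
    {"from": "human", "value": "What is 2+2?"},
]
print(sharegpt_to_mistral(convo))
# → <s>[INST] Hello! [/INST] Hi there.</s>[INST] What is 2+2? [/INST]
```

The same conversation list can be fed to the conversational notebook above; the notebook's data-prep cell performs an equivalent mapping before training.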