Update README.md #58
by Criztov - opened

README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - moe
 ---
 # Model Card for Mixtral-8x7B
-The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The
+The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
 
 For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
 
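For context on the model this card describes, a minimal usage sketch, assuming the checkpoint is published under the repo id `mistralai/Mixtral-8x7B-v0.1` and follows the standard `transformers` causal-LM API (not part of this diff):

```python
# Minimal sketch: load the Mixtral-8x7B checkpoint and generate a short completion.
# Assumes the repo id "mistralai/Mixtral-8x7B-v0.1" and enough GPU/CPU memory;
# device_map="auto" additionally requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Mixtral-8x7B is a sparse mixture-of-experts model that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```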