davanstrien (HF Staff) committed
Commit a69b995 (verified) · 1 parent: 125c431

Add MOE (mixture of experts) tag

Files changed (1):
README.md +2 -0
README.md CHANGED
@@ -7,6 +7,8 @@ language:
 - es
 - en
 inference: false
+tags:
+- moe
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
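For reference, a minimal sketch (an assumption, not part of this commit) of how the added moe tag can be used to discover tagged models through the huggingface_hub Python client:

# Sketch: list a few Hub models carrying the "moe" tag added by this commit.
from huggingface_hub import HfApi

api = HfApi()
# filter accepts tag strings; "moe" is the tag introduced in this change
for model in api.list_models(filter="moe", limit=5):
    print(model.id)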