Adding Evaluation Results
This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
README.md CHANGED
@@ -1,13 +1,116 @@
 ---
-license: apache-2.0
 language:
 - fr
 - it
 - de
 - es
 - en
+license: apache-2.0
 tags:
 - moe
+model-index:
+- name: Mixtral-8x7B-v0.1
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 66.38
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistralai/Mixtral-8x7B-v0.1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 86.46
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistralai/Mixtral-8x7B-v0.1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 71.88
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistralai/Mixtral-8x7B-v0.1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 46.81
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistralai/Mixtral-8x7B-v0.1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 81.69
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistralai/Mixtral-8x7B-v0.1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 57.62
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistralai/Mixtral-8x7B-v0.1
+      name: Open LLM Leaderboard
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mistral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
@@ -109,4 +212,17 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
 
 # The Mistral AI Team
-Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
+Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-v0.1)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |68.47|
+|AI2 Reasoning Challenge (25-Shot)|66.38|
+|HellaSwag (10-Shot)              |86.46|
+|MMLU (5-Shot)                    |71.88|
+|TruthfulQA (0-shot)              |46.81|
+|Winogrande (5-shot)              |81.69|
+|GSM8k (5-shot)                   |57.62|
+
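The `model-index` block added by this diff is machine-readable metadata. Below is a minimal sketch of reading those results back once the PR is merged, using `huggingface_hub`'s `ModelCard` utilities; it assumes a recent library version that parses `model-index` into `eval_results`, and is illustrative rather than part of this PR.

```python
# Minimal sketch: read the evaluation results that this PR adds to the card's
# model-index metadata. Assumes the PR is merged and that the installed
# huggingface_hub version exposes the parsed metadata as `card.data.eval_results`.
from huggingface_hub import ModelCard

card = ModelCard.load("mistralai/Mixtral-8x7B-v0.1")

for result in card.data.eval_results or []:
    # Each entry corresponds to one metric block in the YAML above,
    # e.g. "AI2 Reasoning Challenge (25-Shot): acc_norm = 66.38".
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```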