---
inference: false
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
- text-generation-inference
- mistral
language:
- code
---
Converted from the original Mistral weights to Hugging Face safetensors using the [Hugging Face transformers Mistral conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py):
```shell
pip install protobuf sentencepiece torch transformers accelerate
python3 ~/convert_mistral_weights_to_hf-22B.py --input_dir ~/Codestral-22B-v0.1/ --model_size 22B --output_dir ~/models/Codestral-22B-v0.1-hf/ --is_v3 --safe_serialization
```
Then `measurement.json` was created using [exllamav2](https://github.com/turboderp/exllamav2/blob/master/doc/convert.md):
```shell
python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr -om ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json
```
Finally, each quant was produced (e.g. 4.0bpw):
```shell
python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr -m ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json -cf ~/models/Machinez_Codestral-22B-v0.1-exl2_4.0bpw/ -b 4.0
```
## Quantization
- [3.0bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/3_0) 8.75 GB
- [4.0bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/4_0) 11.5 GB
- [5.0bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/5_0) 14.0 GB
- [5.5bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/5_5) 15.6 GB
- [6.0bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/6_0) 17.0 GB
- [7.0bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/7_0) 19.7 GB
- [8.0bpw](https://huggingface.co/machinez/Codestral-22B-v0.1-exl2/tree/8_0) 21.0 GB
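
These sizes track the arithmetic you would expect: an exl2 quant of a model with P weights at b bits per weight occupies roughly P × b / 8 bytes, plus some overhead for the higher-precision output head and per-tensor metadata. A rough sanity check (the ~22.2B parameter count is an assumption, and the real files deviate by a few percent):
```py
# Back-of-the-envelope size check for the exl2 quants listed above.
# Assumes ~22.2B parameters (approximate; not an official figure).
PARAMS = 22.2e9

def est_gb(bpw: float) -> float:
    """Estimated file size in decimal GB: params * bits-per-weight / 8 bytes."""
    return PARAMS * bpw / 8 / 1e9

for bpw in (3.0, 4.0, 5.0, 5.5, 6.0, 7.0, 8.0):
    print(f"{bpw} bpw ~= {est_gb(bpw):.1f} GB")
# e.g. 4.0 bpw ~= 11.1 GB, close to the 11.5 GB listed above
```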
# Model Card for Codestral-22B-v0.1
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As an instruct model, for instance to answer any question about a code snippet (write documentation, explain it, refactor it) or to generate code following specific instructions
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons such as those in VS Code)
## Installation
It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).
```shell
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
```shell
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```
This will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:
```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.
fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let n = 10;
    println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}
This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```
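
The same instruct path is also available from Python. A minimal sketch using `mistral_inference` and `mistral_common` (class names as of `mistral_common` 1.x, mirroring the FIM snippet below; the model path is the download folder from above):
```py
# Programmatic equivalent of the mistral-chat CLI call above.
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/mistral_models/Codestral-22B-v0.1")

request = ChatCompletionRequest(
    messages=[UserMessage(content="Write me a function that computes fibonacci in Rust")]
)
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens], model, max_tokens=256, temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.decode(out_tokens[0]))
```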
### Fill-in-the-middle (FIM)
After installing `mistral_inference`, run `pip install --upgrade mistral_common` to make sure you have `mistral_common >= 1.2` installed:
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")  # path to the downloaded weights

prefix = """def add("""
suffix = """ return sum"""

request = FIMRequest(prompt=prefix, suffix=suffix)
tokens = tokenizer.encode_fim(request).tokens

# Greedy decoding (temperature 0.0) until EOS or 256 tokens
out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

# Strip the echoed suffix, keeping only the generated middle
middle = result.split(suffix)[0].strip()
print(middle)
```
This should give something along the following lines:
```
num1, num2):
    # Add two numbers
    sum = num1 + num2
    # return the sum
```
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/machinez/Codestral-22B-v0.1-exl2
```
With the Hugging Face hub CLI:
```shell
pip3 install -U "huggingface_hub[cli]"
```
Optionally, cache your credentials and log in:
```shell
git config --global credential.helper 'store --file ~/.my-credentials'
huggingface-cli login
```
To download the `main` branch (only useful if you only care about `measurement.json`) to a folder called `machinez_Codestral-22B-v0.1-exl2`:
```shell
mkdir machinez_Codestral-22B-v0.1-exl2
huggingface-cli download machinez/Codestral-22B-v0.1-exl2 --local-dir machinez_Codestral-22B-v0.1-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir machinez_Codestral-22B-v0.1-exl2_6.0bpw
huggingface-cli download machinez/Codestral-22B-v0.1-exl2 --revision 6_0 --local-dir machinez_Codestral-22B-v0.1-exl2_6.0bpw --local-dir-use-symlinks False
```
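Once a quant is downloaded, it can be loaded with exllamav2's Python API (or any exl2-capable frontend). A minimal sketch modeled on exllamav2's bundled inference example; class and method names may shift between versions:
```py
# Minimal exllamav2 inference sketch, modeled on the project's example scripts.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "machinez_Codestral-22B-v0.1-exl2_6.0bpw"  # folder from the step above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("def fibonacci(n):", settings, 128))
```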
## Limitations
Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Codestral-22B-v0.1 is released under the `MNPL-0.1` license.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall