Update README.md
updates to model card with download instructions

README.md CHANGED
@@ -18,14 +18,16 @@ python3 ~/convert_mistral_weights_to_hf-22B.py --input_dir ~/Codestral-22B-v0.1/
 
 Then measurement.json was created using [exllamav2](https://github.com/turboderp/exllamav2/blob/master/doc/convert.md)
 ```
-
+python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr -om ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json
 ```
 
 Finally quantized (e.g. 4.0bpw)
 ```
-
+python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr -m ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json -cf ~/models/Machinez_Codestral-22B-v0.1-exl2_4.0bpw/ -b 4.0
 ```
 
+
+
 # Model Card for Codestral-22B-v0.1
 
 Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
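The two convert.py invocations above share one calibration pass: `-om` writes measurement.json once, and `-m` reuses it for any target bitrate. A minimal sketch of that reuse, keeping the same paths and flags as the commands above (4.0 and 6.0 are the bitrates this README references elsewhere; any other `-b` value would work the same way):

```shell
# Reuse a single measurement.json for several quantization targets
for BPW in 4.0 6.0; do
    python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr \
        -m ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json \
        -cf ~/models/Machinez_Codestral-22B-v0.1-exl2_${BPW}bpw/ -b $BPW
done
```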
@@ -121,6 +123,39 @@ num1, num2):
 # return the sum
 ```
 
+## Download instructions
+
+With git:
+
+```shell
+git clone --single-branch --branch 4_0 https://huggingface.co/machinez/Codestral-22B-v0.1-exl2
+```
+
+With huggingface hub:
+
+```shell
+pip3 install -U "huggingface_hub[cli]"
+```
+
+## Login (optional)
+
+```shell
+git config --global credential.helper 'store --file ~/.my-credentials'
+huggingface-cli login
+```
+
+To download the `main` branch (only useful if you only care about measurement.json) to a folder called `machinez_Codestral-22B-v0.1-exl2`:
+
+```shell
+mkdir machinez_Codestral-22B-v0.1-exl2
+huggingface-cli download machinez/Codestral-22B-v0.1-exl2 --local-dir machinez_Codestral-22B-v0.1-exl2 --local-dir-use-symlinks False
+```
+
+To download from a different branch, add the `--revision` parameter:
+
+```shell
+mkdir machinez_Codestral-22B-v0.1-exl2_6.0bpw
+huggingface-cli download machinez/Codestral-22B-v0.1-exl2 --revision 6_0 --local-dir machinez_Codestral-22B-v0.1-exl2_6.0bpw --local-dir-use-symlinks False
+```
+
 ## Limitations
 
 The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
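Since the `main` branch is mainly useful for measurement.json, that single file can also be fetched without downloading a whole branch. A minimal sketch, assuming a huggingface_hub version whose `huggingface-cli download` accepts filename arguments:

```shell
# Pull only measurement.json from the main branch
# (requires a reasonably recent huggingface_hub)
huggingface-cli download machinez/Codestral-22B-v0.1-exl2 measurement.json --local-dir machinez_Codestral-22B-v0.1-exl2
```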