Update README.md

**Base model:** [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2965](https://github.com/ggerganov/llama.cpp/releases/tag/b2965)<br>
## Model Summary:
Mistral 7B Instruct is a high-quality model tuned for instruction following, and release v0.3 is no different.<br>
This iteration adds function calling support, which should broaden its use cases and allow for a more capable assistant.<br>
## Prompt template:
Under the hood, the model will see a prompt that's formatted like so:
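A minimal single-turn example of that format (approximate; the exact spacing and special-token handling come from the chat template bundled with the GGUF, so most frontends apply it automatically):

```
<s>[INST] {prompt} [/INST]
```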
## Technical Details
Version 0.3 has a few changes over release 0.2, including:
- An extended vocabulary (32000 -> 32768)
- A new tokenizer
- Support for function calling
Function calling support is made possible through the new extended vocabulary, which includes the special tokens `TOOL_CALLS`, `AVAILABLE_TOOLS`, and `TOOL_RESULTS`.
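As a rough sketch of how those tokens frame a function-calling request (the helper and the `get_weather` tool below are made up for illustration; the real serialization, including the BOS token, whitespace, and JSON layout, is handled by the model's tokenizer and chat template):

```python
import json

def build_tool_prompt(user_message, tools):
    # Schematic only: wrap the tool schemas in [AVAILABLE_TOOLS] ... [/AVAILABLE_TOOLS]
    # and the user turn in [INST] ... [/INST]. These bracketed strings stand in for
    # the special tokens added to the extended vocabulary.
    tools_json = json.dumps(tools)
    return (
        f"[AVAILABLE_TOOLS] {tools_json} [/AVAILABLE_TOOLS]"
        f"[INST] {user_message} [/INST]"
    )

# Hypothetical tool definition, for illustration only
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

prompt = build_tool_prompt("What is the weather in Paris?", tools)
print(prompt)
```

The model would then respond with a `TOOL_CALLS` payload naming the function and its arguments, and the caller can feed the function's output back wrapped in `TOOL_RESULTS` tokens for the final answer.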
This model maintains the v0.2 context length of 32,768 tokens.
## Special thanks