---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

# mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX

This model was converted to MLX format from [`codellama/CodeLlama-70b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf).
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download and load the 4-bit quantized weights from the Hugging Face Hub.
model, tokenizer = load("mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX")

# CodeLlama-70b-Instruct expects a turn-based prompt format using
# "<step> Source: ..." markers; "Destination: user" marks where to reply.
prompt = "<step>Source: user Fibonacci series in Python<step> Source: assistant Destination: user"
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
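The prompt string in the example above follows a turn-based pattern: each turn opens with a `<step> Source: <role>` marker, and a final `Source: assistant Destination: user` segment tells the model to produce the assistant's reply. A minimal sketch of a helper that assembles a single-turn prompt in that pattern (the `build_prompt` name is illustrative, not part of mlx-lm):

```python
def build_prompt(user_message: str) -> str:
    # Mirror the turn format from the example above: the user turn,
    # then an empty assistant turn addressed back to the user.
    return (
        f"<step>Source: user {user_message}"
        "<step> Source: assistant Destination: user"
    )

# Reproduces the exact prompt used in the generate() call above.
print(build_prompt("Fibonacci series in Python"))
```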