---
license: cc-by-nc-4.0
language:
- zh
- en
- de
- fr
- ja
- ko
- nl
- es
- it
- pt
- pl
base_model: HKUSTAudio/Llasa-1B-Multilingual
tags:
- Text-to-Speech
- mlx
- mlx-my-repo
pipeline_tag: text-to-speech
---

# nhe-ai/Llasa-1B-Multilingual-mlx-8Bit

The Model [nhe-ai/Llasa-1B-Multilingual-mlx-8Bit](https://huggingface.co/nhe-ai/Llasa-1B-Multilingual-mlx-8Bit) was converted to MLX format from [HKUSTAudio/Llasa-1B-Multilingual](https://huggingface.co/HKUSTAudio/Llasa-1B-Multilingual) using mlx-lm version **0.22.3**.


⚠️ Important: This model was automatically converted for experimentation. The guide below was not written for this model and may not work as expected; do not expect it to function out of the box. Use at your own risk.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("nhe-ai/Llasa-1B-Multilingual-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template, if one is defined
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
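As an alternative to the Python API, mlx-lm also installs a command-line entry point. A minimal sketch (assuming mlx-lm is installed; the model weights are downloaded from the Hub on first use):

```shell
# Generate from the quantized model via the mlx-lm CLI
mlx_lm.generate --model nhe-ai/Llasa-1B-Multilingual-mlx-8Bit --prompt "hello"
```

Note that since Llasa is a text-to-speech model, the raw generated tokens may not be meaningful as plain text; decoding them to audio requires the model's speech codec, which this generic guide does not cover.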