---
license: cc-by-nc-4.0
language:
  - zh
  - en
  - de
  - fr
  - ja
  - ko
  - nl
  - es
  - it
  - pt
  - pl
base_model: HKUSTAudio/Llasa-1B-Multilingual
tags:
  - Text-to-Speech
  - mlx
  - mlx-my-repo
pipeline_tag: text-to-speech
---

# nhe-ai/Llasa-1B-Multilingual-mlx-8Bit

The model nhe-ai/Llasa-1B-Multilingual-mlx-8Bit was converted to MLX format from [HKUSTAudio/Llasa-1B-Multilingual](https://huggingface.co/HKUSTAudio/Llasa-1B-Multilingual) using mlx-lm version 0.22.3.

⚠️ **Important:** This model was automatically converted for experimentation. The guide below was not written for this model and may not work as expected; do not expect it to function out of the box. Use at your own risk.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("nhe-ai/Llasa-1B-Multilingual-mlx-8Bit")

prompt = "hello"

# Apply the chat template if the tokenizer provides one
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
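Note that Llasa is a text-to-speech model: rather than plain text, it emits discrete speech tokens that must be decoded to audio with a codec decoder (XCodec2 in the upstream Llasa repositories). A minimal sketch of the token-extraction step, assuming the generated string contains speech tokens of the form `<|s_1234|>` as in the upstream HKUSTAudio/Llasa models:

```python
import re

def extract_speech_ids(text: str) -> list[int]:
    """Pull integer codec IDs out of Llasa-style speech tokens.

    Assumes tokens of the form <|s_1234|>, as produced by the
    upstream HKUSTAudio/Llasa models; the resulting IDs would
    then be fed to an XCodec2 decoder to synthesize audio.
    """
    return [int(m) for m in re.findall(r"<\|s_(\d+)\|>", text)]

# Example on a hypothetical generated string:
print(extract_speech_ids("<|s_12|><|s_345|> filler <|s_6789|>"))
# → [12, 345, 6789]
```

Whether this applies to the MLX-converted model depends on the conversion preserving the speech-token vocabulary, which has not been verified here.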