|
--- |
|
license: apache-2.0 |
|
language: |
|
- zh |
|
- en |
|
base_model: |
|
- Qwen/Qwen3-1.7B-Base |
|
pipeline_tag: text-to-speech |
|
--- |
|
|
|
# MOSS-TTSD 🪐 |
|
|
|
## Overview |
|
|
|
MOSS-TTSD (Text to Spoken Dialogue) is an open-source bilingual spoken dialogue synthesis model that supports both Chinese and English.
|
It can transform dialogue scripts between two speakers into natural, expressive conversational speech. |
|
MOSS-TTSD supports voice cloning and single-session speech generation of up to 960 seconds, making it ideal for AI podcast production. |
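
For reference, dialogue scripts mark speaker turns inline with `[S1]` and `[S2]` tags, as in this illustrative (made-up) example:

```python
# Illustrative only: the [S1]/[S2] turn tags match the script format used in
# the quick-start snippet below; the dialogue content itself is invented.
script = (
    "[S1]Welcome back to the show! Today we're trying an open-source TTS model."
    "[S2]Thanks for having me. There's a lot to cover, so let's dive in."
)
```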
|
|
|
## Highlights |
|
|
|
- **Highly Expressive Dialogue Speech**: Built on a unified semantic-acoustic neural audio codec, a pre-trained large language model, millions of hours of TTS data, and 400k hours of synthetic and real conversational speech, MOSS-TTSD generates highly expressive, human-like dialogue speech with natural conversational prosody.
|
- **Two-Speaker Voice Cloning**: MOSS-TTSD supports zero-shot two-speaker voice cloning and can generate conversational speech with accurate speaker switching based on dialogue scripts.
|
- **Chinese-English Bilingual Support**: MOSS-TTSD enables highly expressive speech generation in both Chinese and English. |
|
- **Long-Form Speech Generation (up to 960 seconds)**: Thanks to a low-bitrate codec and training-framework optimizations, MOSS-TTSD has been trained for long-form speech generation, enabling single-session synthesis of up to 960 seconds.
|
- **Fully Open Source & Commercial-Ready**: MOSS-TTSD and its future updates will be fully open-source and support free commercial use. |
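
## Quick Start

A minimal inference sketch using the Hugging Face `transformers` API; the file paths and dialogue text below are placeholders to replace with your own.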
|
|
|
|
|
```python |
|
import os |
|
import torchaudio |
|
from transformers import AutoModel, AutoProcessor |
|
|
|
# Load the processor (which also wraps the XY Tokenizer audio codec) and the model.
processor = AutoProcessor.from_pretrained("fnlp/MOSS-TTSD-v0.5", codec_path="fnlp/XY_Tokenizer_TTSD_V0_hf", trust_remote_code=True)
model = AutoModel.from_pretrained("fnlp/MOSS-TTSD-v0.5", trust_remote_code=True, device_map="auto").eval()
|
|
|
# Each item describes one dialogue to synthesize:
# - "text": the dialogue script, with speaker turns marked by [S1]/[S2] tags
# - "prompt_audio": reference audio containing both speakers, used for voice cloning
# - "prompt_text": transcript of the reference audio, with matching speaker tags
data = [{
    "base_path": "/path/to/audio/files",
    "text": "[S1]Speaker 1 dialogue content[S2]Speaker 2 dialogue content[S1]...",
    "prompt_audio": "path/to/shared_reference_audio.wav",
    "prompt_text": "[S1]Reference text for speaker 1[S2]Reference text for speaker 2"
}]
|
|
|
# Tokenize the scripts and reference audio, generate speech tokens,
# then decode them back into text and waveforms.
inputs = processor(data)
token_ids = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
text, audios = processor.batch_decode(token_ids)
|
|
|
# Save each generated fragment as a 24 kHz WAV file.
os.makedirs("outputs", exist_ok=True)
for i, fragments in enumerate(audios):
    for j, fragment in enumerate(fragments):
        torchaudio.save(f"outputs/audio_{i}_{j}.wav", fragment.cpu(), 24000)
|
``` |
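
`batch_decode` may return several audio fragments per batch item. Here is an optional sketch (assuming each fragment is a `(channels, samples)` tensor at the 24 kHz rate used above) that stitches them into a single file:

```python
import torch

# Optional: concatenate the fragments of each batch item into one WAV.
# Assumes every fragment is a (channels, samples) tensor at 24 kHz.
for i, fragments in enumerate(audios):
    joined = torch.cat([f.cpu() for f in fragments], dim=-1)
    torchaudio.save(f"outputs/audio_{i}_full.wav", joined, 24000)
```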