Serayuki-1B-v1.1-pre2-step-3k-1B

Developer

  • Developed by Vinoie.

Tokenizer

The tokenizer is based on meta-llama/Llama-3.2-3B-Instruct.

Example in Python

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Vinoie/Serayuki-1B-v1.1-pre2-step-3k-1B")
# Move the model to the GPU so it matches the device the inputs are sent to below.
model = AutoModelForCausalLM.from_pretrained("Vinoie/Serayuki-1B-v1.1-pre2-step-3k-1B").to("cuda")

prompt = "Once upon a time, my friends and I wanted to"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # limit generated tokens; max_length would also count the prompt
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
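The example above uses the default decoding settings. A minimal sketch of how sampling options could be bundled and passed to model.generate — the helper name generation_kwargs and the specific temperature/top_p values are illustrative assumptions, not part of this model's documented configuration:

```python
# Hypothetical helper: bundle generation settings for model.generate().
# The values below are illustrative defaults, not tuned for this model.
def generation_kwargs(creative: bool = True) -> dict:
    if creative:
        # Sampling: more varied continuations.
        return {"max_new_tokens": 256, "do_sample": True, "temperature": 0.8, "top_p": 0.9}
    # Greedy decoding: deterministic output.
    return {"max_new_tokens": 256, "do_sample": False}

# Usage with the model and inputs loaded as above:
# outputs = model.generate(**inputs, **generation_kwargs(creative=True))
```

Keeping the settings in one place makes it easy to switch between deterministic and sampled output when comparing continuations.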

License

This project is licensed under the MIT License.

Model details

  • Format: Safetensors
  • Model size: 1.07B params
  • Tensor types: F32, F16, U8

Datasets used to train Vinoie/Serayuki-1B-v1.1-pre2-step-3k-1B