# 🔥 PyroNet
PyroNet is a fine-tuned, customized open-source large language model with a unique system identity. Originally based on gpt-oss-20b, it has been further trained and specialized to embody the PyroNet persona.

Created and maintained by IceL1ghtning from Ukraine 🇺🇦.
## ✨ Features
- 🧠 Fine-tuned on custom datasets to define the PyroNet identity
- 🎭 Optimized for chat, reasoning, coding, and explanation tasks
- 🔗 Fully compatible with the Hugging Face `transformers` ecosystem
- 📦 Includes a custom chat template and structured system prompt
## 🚀 Usage

### Install requirements

```bash
pip install transformers accelerate bitsandbytes
```

### Quick inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "Kenan023214/PyroNet"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Hello, PyroNet! Can you introduce yourself?"
result = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```
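Because the model ships with a custom chat template (see Features), multi-turn prompts can be built with the tokenizer's `apply_chat_template`. A minimal sketch — the system-prompt wording below is hypothetical and stands in for the model's actual structured system prompt:

```python
# OpenAI-style message list; the model's bundled chat template converts
# this into the exact prompt format the fine-tune expects.
messages = [
    {"role": "system", "content": "You are PyroNet."},  # hypothetical wording
    {"role": "user", "content": "Hello, PyroNet! Can you introduce yourself?"},
]

# With the tokenizer loaded as in the snippet above:
# prompt_text = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True
# )
# result = pipe(prompt_text, max_new_tokens=200)
```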
## 💡 Recommendations
Runs best on a GPU with ≥24 GB VRAM (e.g. RTX 3090, A100). For smaller GPUs, load the model in 8-bit (requires `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
```
Adjust `temperature` and `top_p` to trade off between more creative and more deterministic outputs.
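As a starting point, two sampling presets along those lines (the values are illustrative, not tuned for this model):

```python
# More creative: sample with a higher temperature plus nucleus sampling.
creative = {"do_sample": True, "temperature": 0.9, "top_p": 0.95, "max_new_tokens": 200}

# More deterministic: greedy decoding (temperature/top_p are then ignored).
deterministic = {"do_sample": False, "max_new_tokens": 200}

# With the pipeline from the Usage section:
# result = pipe(prompt, **creative)
```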
📧 Contact: [email protected]
## 📜 License & Disclaimer
- License: Apache 2.0
- Based on gpt-oss-20b
- For research purposes only. Not intended for production use without further alignment and safety checks.
- Responsibility for usage lies with the end user.
🔥 PyroNet — Where logic meets creativity.