---
datasets:
- Delta-Vector/Orion-Misc-Data-Sharegpt-Prefixed
- Delta-Vector/Orion-Basket-Weaving-Filtered
- Delta-Vector/Orion-vanilla-backrooms-claude-sharegpt
- Delta-Vector/Orion-Roleplay-Logs-Sharegpt-Ngram-cleaned
- Delta-Vector/Orion-BlueSky-10K-Complexity
base_model:
- zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
tags:
- roleplay
- chat
- creative-writing
---
Shimamura 70B

Created by Delta-Vector
Model Information
Shimamura-70B
This is a finetune of zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B, intended to be a strong chat model at a larger parameter size.
The model was trained on 100M tokens of human chat logs from Bluesky, 4chan and, most of all, ShoujoAI.
Support me on Ko-Fi: https://ko-fi.com/deltavector
Quantized Versions
Available Downloads
- GGUF Format: for use with llama.cpp & forks (Coming Soon!)
- EXL2 Format: for use with TabbyAPI (Coming Soon!)
- EXL3 Format: for use with TabbyAPI (slower on Ampere)
Prompting
The model has been tuned with the Llama-3 Instruct prompt format.
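As a minimal sketch, the standard Llama-3 Instruct template can be applied via the tokenizer's chat template; the repo id `Delta-Vector/Shimamura-70B` is an assumption here, so substitute the actual upload path.

```python
from transformers import AutoTokenizer

# Hypothetical repo id for illustration; point this at the actual model upload.
MODEL_ID = "Delta-Vector/Shimamura-70B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [
    {"role": "system", "content": "You are a friendly, casual chat partner."},
    {"role": "user", "content": "Hey, how was your day?"},
]

# apply_chat_template renders the Llama-3 Instruct wrapping:
# <|start_header_id|>role<|end_header_id|> ... <|eot_id|> for each turn,
# plus the trailing assistant header when add_generation_prompt=True.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```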
Samplers
For testing this model, I used Temp = 1 and Min-P = 0.1.
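A rough sketch of those sampler settings against an OpenAI-compatible backend (such as TabbyAPI or a llama.cpp server); the endpoint, API key, model name, and the `min_p` pass-through via `extra_body` are assumptions about your local setup, not part of this card.

```python
from openai import OpenAI

# Hypothetical local endpoint and model name; adjust to your backend.
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="sk-local")

response = client.chat.completions.create(
    model="Shimamura-70B",
    messages=[
        {"role": "system", "content": "You are a friendly, casual chat partner."},
        {"role": "user", "content": "Hey, how was your day?"},
    ],
    temperature=1.0,
    # min_p is not part of the OpenAI schema; many local backends
    # (e.g. TabbyAPI, llama.cpp server) accept it as an extra body field.
    extra_body={"min_p": 0.1},
)
print(response.choices[0].message.content)
```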
See Axolotl Config
https://wandb.ai/new-eden/austral/artifacts/axolotl-config/config-c61un0ze/v0/files/axolotl_config_cu4t7u4q.yml
Credits
Thank you to Lucy Knada, Zerofata, Auri, Intervitens, Cgato, Kubernetes Bad and the rest of Anthracite.