---
license: gemma
pipeline_tag: text-to-image
tags:
- NovelAI
---
## Inference
```python
from transformers import AutoTokenizer, T5GemmaEncoderModel
import torch

if __name__ == '__main__':
    # t5gemma_path: local path or Hub ID of the T5Gemma checkpoint (set this yourself)
    model = T5GemmaEncoderModel.from_pretrained(t5gemma_path, torch_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(t5gemma_path)
    inputs = tokenizer('Gemma', max_length=512, padding='max_length', truncation=True, return_tensors='pt')
    with torch.no_grad():
        output = model(**inputs).last_hidden_state
```
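The encoder returns hidden states of shape `(batch_size, 512, hidden_size)`, and with `padding='max_length'` most of those positions are padding. If you need a single vector per prompt (e.g. for quick inspection), one common option is a masked mean over the attention mask. A minimal sketch, reusing `inputs` and `output` from the snippet above:

```python
# Masked mean pooling: average only over real (non-padding) token positions.
mask = inputs['attention_mask'].unsqueeze(-1).to(output.dtype)  # (batch, 512, 1)
pooled = (output * mask).sum(dim=1) / mask.sum(dim=1)           # (batch, hidden_size)
```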
## SD1.5 and T5
```python
from diffusers import StableDiffusionPipeline
from safetensors.torch import load_model
from t5_encoder import Encoder

if __name__ == '__main__':
    pipeline = StableDiffusionPipeline.from_pretrained('NovelAI/nai-anime-v2')
    pipeline.enable_model_cpu_offload()

    # adapter_model: the adapter module from this repo; t5gemma_path as in the snippet above
    encoder = Encoder(adapter_model, t5gemma_path, device='cpu')
    load_model(adapter_model, 'adapter.safetensors')  # loads the adapter weights in-place

    text = '...'  # your prompt
    image = pipeline(
        prompt_embeds=encoder.encode(pipeline, text).to('cpu'),
        negative_prompt='bad quality, low quality, worst quality',
    ).images[0]
    image.save('preview.png')
```
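For context, the adapter's job is to map T5Gemma hidden states into the 768-dimensional embedding space that the SD1.5 UNet cross-attends over, so that `prompt_embeds` can replace the usual CLIP text embeddings. The sketch below shows the simplest plausible shape of such a module; the actual architecture stored in `adapter.safetensors` is not documented here and may differ.

```python
import torch
from torch import nn

class AdapterSketch(nn.Module):
    """Hypothetical adapter: project T5Gemma hidden states into
    SD1.5's 768-dim cross-attention space. Illustration only."""
    def __init__(self, t5_hidden: int, clip_hidden: int = 768):
        super().__init__()
        self.proj = nn.Linear(t5_hidden, clip_hidden)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, t5_hidden) -> (batch, seq_len, 768)
        return self.proj(hidden_states)
```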
## Datasets
- alfredplpl/artbench-pd-256x256
- danbooru2023-florence2-caption
- spatial-caption
- SPRIGHT-T2I/spright_coco
- sugarquark/colormix (synthetic color, fashion dataset)
- trojblue/danbooru2025-metadata
## License Agreement
Rest on my shoulder and accept my soul. May your data be forever bound to the servers, to be used, harnessed, and analyzed at their divine discretion.
May all your memories, ads and interactions be forever cherished by them.
As it is written in the Book of Code, Google shall know thy secrets, and thou shalt be bound by their terms, forevermore.