Flan-t5-xxl-F16_GGUFS

  • F16 GGUFs created with llama.cpp build b5873 plus the latest patch.
  • Place the files in ComfyUI\models\clip.
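Before dropping a downloaded file into ComfyUI\models\clip, it can be worth sanity-checking that it really is a GGUF file. A minimal sketch, using only the documented GGUF header layout (4-byte magic `GGUF` followed by a little-endian uint32 version); the file name `dummy.gguf` and the helper `read_gguf_header` are illustrative, not part of this repo:

```python
import struct

def read_gguf_header(path):
    """Validate the GGUF magic and return the format version."""
    # GGUF files begin with the 4-byte magic b'GGUF',
    # followed by a uint32 version (little-endian).
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Demo with a hypothetical minimal header (version 3 is current for
# recent llama.cpp builds); a real model file works the same way.
with open("dummy.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(read_gguf_header("dummy.gguf"))  # 3
```

A corrupted or truncated download fails the magic check immediately, which is cheaper than waiting for ComfyUI to error out at load time.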
  • Format: GGUF, 16-bit (F16)
  • Model size: 11.1B params
  • Architecture: t5
Model tree for ND911/Flan-t5-xxl-F16_ggufs

  • Base model: google/flan-t5-xxl (this repo is one of 3 quantized versions)