Uploaded model

  • Developed by: gerasmark
  • License: apache-2.0
  • Finetuned from model: mistralai/Ministral-8B-Instruct-2410

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
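The card does not publish the actual training recipe, so the snippet below is only a sketch of the usual Unsloth + TRL supervised fine-tuning pattern. The dataset path, LoRA settings, and hyperparameters are placeholders, and some SFTTrainer argument names vary between TRL versions; treat it as an illustration, not the author's setup.

```python
# Rough sketch of an Unsloth + TRL fine-tune of the base model.
# Dataset, LoRA settings, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the base model through Unsloth's patched loader (4-bit loading assumed).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Ministral-8B-Instruct-2410",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are typical defaults,
# not the configuration actually used for this upload.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: each record is expected to carry a single "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```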

Model details

  • Format: GGUF
  • Quantization: 8-bit (Q8)
  • Model size: 8.02B params
  • Architecture: llama
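Because the repository ships an 8-bit GGUF file, the usual way to run it locally is through llama.cpp bindings such as llama-cpp-python. The filename glob and context length below are assumptions; adjust them to the file actually present in the repo.

```python
# Minimal sketch: run the Q8 GGUF locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gerasmark/ministral-8b-gguf-q8",
    filename="*q8_0.gguf",   # assumed naming pattern for the 8-bit quant
    n_ctx=4096,              # context length is a placeholder
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

At 8 bits per weight the file is roughly the parameter count in bytes (around 8–9 GB), so plan RAM or VRAM accordingly.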

Model tree for gerasmark/ministral-8b-gguf-q8

  • Base model: mistralai/Ministral-8B-Instruct-2410 (this model is one of its 60 quantized variants)