
MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF

Tags: Text Generation · Transformers · GGUF · Safetensors · mistral · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · conversational · text-generation-inference
Community — 10 discussions
Interview request: genAI evaluation & documentation

#10 · opened 12 months ago by meggymuggy

Add memory usage for each quantization method

#9 · opened about 1 year ago by ar08
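The request above can be answered with a back-of-the-envelope estimate: a GGUF file's size is roughly the parameter count times the effective bits per weight, divided by 8. A minimal sketch follows; the bits-per-weight figures are approximations I am assuming for the common llama.cpp k-quant methods, not values published in this repo.

```python
# Rough GGUF file-size estimate per quantization method.
# Effective bits-per-weight values below are ASSUMED approximations
# (k-quants store scales alongside weights, so they exceed the nominal bits).
BITS_PER_WEIGHT = {
    "Q2_K": 2.6,
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def estimate_gib(n_params: float, quant: str) -> float:
    """Approximate GGUF file size in GiB: params * bits / 8, in bytes -> GiB."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / (1024 ** 3)

# Mistral-7B-Instruct-v0.3 has roughly 7.25 billion parameters.
for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimate_gib(7.25e9, quant):.1f} GiB")
```

Actual runtime memory is somewhat higher than the file size, since the KV cache and compute buffers are allocated on top of the weights.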

Can't deploy on inference endpoint

3 replies · #8 · opened about 1 year ago by goporo

Support for Function Calling?

1 reply · #6 · opened about 1 year ago by darniss

OSError: MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

1 reply · #5 · opened about 1 year ago by orby

v3 tokenizer

❤️ 1 · 4 replies · #4 · opened over 1 year ago by ayyylol

Can you please also quantize Phi-3-SMALL!!!

❤️ 1 · 5 replies · #2 · opened over 1 year ago by alexcardo