Efficient-ML/LLaMA-3-8B-GPTQ-4bit-b128
This is the official collection of quantized models from "How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study".
Totally Free + Zero Barriers + No Login Required
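The "b128" suffix in the repo name denotes the GPTQ quantization group size: weights are quantized in groups of 128 columns, each with its own scale and zero point. As a minimal sketch of the idea, here is plain round-to-nearest 4-bit group quantization in NumPy — this is an assumption-level illustration of group quantization only, not the actual GPTQ algorithm, which additionally applies Hessian-based error compensation:

```python
import numpy as np

def quantize_4bit(w_flat, group_size=128):
    """Round-to-nearest 4-bit quantization with per-group scale/zero-point.

    Each group of `group_size` weights is mapped to integers in [0, 15].
    This illustrates group quantization only; real GPTQ also corrects
    rounding error column-by-column using second-order information.
    """
    w = w_flat.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / 15.0          # 4 bits -> 16 levels
    scale[scale == 0] = 1.0               # guard against constant groups
    q = np.clip(np.round((w - wmin) / scale), 0, 15).astype(np.uint8)
    return q, scale, wmin

def dequantize_4bit(q, scale, wmin):
    """Reconstruct approximate float weights from 4-bit codes."""
    return q * scale + wmin

# Quantize a random weight matrix and measure reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 128)).astype(np.float32)
q, scale, wmin = quantize_4bit(w.flatten(), group_size=128)
w_hat = dequantize_4bit(q, scale, wmin).reshape(w.shape)
max_err = float(np.abs(w - w_hat).max())  # bounded by half a quantization step
```

Smaller group sizes (e.g. 64) reduce reconstruction error at the cost of storing more scales and zero points; b128 is the common middle ground used in this collection.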