Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # LLaMA-2 70B AQLM 2-bit QLoRA with function calling
 
-This model is
+This model is fine-tuned from [BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf](https://huggingface.co/BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf) using [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory).
 
 The maximum GPU usage during training is **24GB**, and the model has preliminary conversation and tool-using abilities.
 
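As a rough illustration of what the updated README describes, the sketch below shows one way to load the 2-bit AQLM base model and attach a QLoRA adapter with `transformers` and `peft`. Only the base model id comes from the README; the adapter repository id is a hypothetical placeholder, and 2-bit AQLM inference additionally requires the `aqlm` package.

```python
# Minimal sketch, not taken from this commit: load the AQLM 2-bit base model
# and apply a QLoRA adapter on top of it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf"      # base model from the README
adapter_id = "your-username/llama2-70b-aqlm-qlora-fncall"    # hypothetical adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",   # 2-bit AQLM weights need the `aqlm` package installed
)

# Attach the fine-tuned QLoRA adapter to the quantized base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "What is the weather like in Berlin today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```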