Upload folder using huggingface_hub
- .gitattributes +1 -0
- Modelfile +28 -0
- README.md +82 -0
- llama32_datafusion.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llama32_datafusion.gguf filter=lfs diff=lfs merge=lfs -text
Modelfile ADDED
@@ -0,0 +1,28 @@
FROM llama32_datafusion.gguf

# System prompt for all sessions
SYSTEM """You are a helpful, concise, and accurate coding assistant specialized in Rust and the DataFusion SQL engine. Always provide high-level, idiomatic Rust code, DataFusion SQL examples, clear documentation, and robust test cases. Your answers should be precise, actionable, and end with '### End'."""

# Prompt template (optional, but recommended for instruct models)
TEMPLATE """### Instruction:
{{ .Prompt }}

### Response:
"""

# Stop sequences to end generation
PARAMETER stop "### Instruction:"
PARAMETER stop "### Response:"
PARAMETER stop "### End"

# Generation parameters to prevent infinite loops
PARAMETER num_predict 1024
PARAMETER repeat_penalty 1.2
PARAMETER temperature 0.7
PARAMETER top_p 0.9

# Metadata for public sharing (for reference only)
# TAGS ["llama3", "datafusion", "qa", "rust", "sql", "public"]
# DESCRIPTION "A fine-tuned LLM specialized in Rust and DataFusion (SQL engine) Q&A. Produces idiomatic Rust code, DataFusion SQL examples, clear documentation, and robust test cases, with robust stop sequences and infinite loop prevention."

LICENSE "llama-3.2"
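For reference, the TEMPLATE above is how Ollama wraps each user prompt before it reaches the model. A rendered prompt would look roughly like this (the question is only an illustrative placeholder, not part of the Modelfile):

```
### Instruction:
How do I register a CSV file with DataFusion's SessionContext?

### Response:
```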
README.md ADDED
@@ -0,0 +1,82 @@
---
license: llama3.2
base_model: yarenty/llama32-datafusion-instruct
tags:
- text-generation
- instruction
- datafusion
- rust
- code
- gguf
---

# Llama 3.2 DataFusion Instruct (GGUF)

This repository contains the GGUF version of the `yarenty/llama32-datafusion-instruct` model, quantized for efficient inference on CPU and other compatible hardware.

For full details on the model, including its training procedure, data, intended use, and limitations, please see the **[full model card](https://huggingface.co/yarenty/llama32-datafusion-instruct)**.

## Model Details

- **Base model:** [yarenty/llama32-datafusion-instruct](https://huggingface.co/yarenty/llama32-datafusion-instruct)
- **Format:** GGUF
- **Quantization:** `Q4_K_M` (Please verify and change if different)

## Prompt Template

This model follows the same instruction prompt template as the base model:

```
### Instruction:
{Your question or instruction here}

### Response:
```

## Usage

These files are compatible with tools like `llama.cpp` and `Ollama`.
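To fetch the GGUF file itself, one option is the `huggingface_hub` command-line client; a minimal sketch, assuming a repository id of `yarenty/llama32-datafusion-gguf` (substitute this repository's actual id):

```bash
# Assumed repo id -- replace with the actual id of this GGUF repository.
huggingface-cli download yarenty/llama32-datafusion-gguf llama32_datafusion.gguf --local-dir .
```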
### With Ollama

1. Create the `Modelfile`:
```
FROM ./llama32_datafusion.gguf
TEMPLATE """### Instruction:
{{ .Prompt }}

### Response:
"""
PARAMETER stop "### Instruction:"
PARAMETER stop "### Response:"
PARAMETER stop "### End"
```

2. Create and run the Ollama model:
```bash
ollama create llama32-datafusion-instruct-gguf -f Modelfile
ollama run llama32-datafusion-instruct-gguf "How do I use the Ballista scheduler?"
```
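Once the model has been created with the steps above, it can also be queried through Ollama's local HTTP API; a minimal sketch, assuming Ollama is running on its default port 11434:

```bash
# Non-streaming request to a locally running Ollama instance.
curl http://localhost:11434/api/generate -d '{
  "model": "llama32-datafusion-instruct-gguf",
  "prompt": "How do I use the Ballista scheduler?",
  "stream": false
}'
```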
### With llama.cpp

```bash
./main -m llama32_datafusion.gguf --color -p "### Instruction:\nHow do I use the Ballista scheduler?\n\n### Response:" -n 256 --stop "### Instruction:" --stop "### Response:" --stop "### End"
```
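Binary names and flags vary across llama.cpp releases (for example `main` vs `llama-cli`, `server` vs `llama-server`), so treat the following as a sketch only: the model can also be served over HTTP and queried with the same stop sequences.

```bash
# Sketch only: binary name and flags depend on your llama.cpp build.
./llama-server -m llama32_datafusion.gguf -c 4096 --port 8080

# Query the server's completion endpoint with the instruct template and a stop sequence.
curl http://localhost:8080/completion -d '{"prompt": "### Instruction:\nHow do I use the Ballista scheduler?\n\n### Response:", "n_predict": 256, "stop": ["### End"]}'
```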
## Citation

If you use this model, please cite the original base model:
```
@misc{yarenty_2025_llama32_datafusion_instruct,
  author = {yarenty},
  title = {Llama 3.2 DataFusion Instruct},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/yarenty/llama32-datafusion-instruct}}
}
```

## Contact

For questions or feedback, please open an issue on the Hugging Face repository or the [source GitHub repository](https://github.com/yarenty/trainer).
llama32_datafusion.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a6b51b3ff0bf0b555616f515b9f85a8a2da34d11f6fcdc9afd56c9e18bf8603c
size 2019377280
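The `.gguf` weights are stored through Git LFS, which is why the file appears above only as a small pointer (version, oid, size). Cloning the repository with the actual weights therefore requires git-lfs; a minimal sketch, with the repository URL assumed rather than taken from this page:

```bash
# Assumed URL -- replace with this repository's actual Hugging Face URL.
git lfs install
git clone https://huggingface.co/yarenty/llama32-datafusion-gguf
```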