
Introduction

This model is the GGUF version of OneSQL-v0.2-Qwen-1.5B.

Performance

Below are the self-evaluation results (EX score) for each quantization, alongside the corresponding scores of OneSQL-v0.1-Qwen-1.5B-GGUF.

| Quantization | EX score (v0.2) | EX score (v0.1) |
|--------------|-----------------|-----------------|
| Q2_K         | 7.76            | 2.50            |
| Q3_K_S       | 9.13            | 9.85            |
| Q3_K_M       | 17.41           | 11.80           |
| Q3_K_L       | 16.69           | 11.80           |
| Q4_0         | 18.77           | 13.77           |
| Q4_1         | 22.69           | 12.74           |
| Q4_K_S       | 24.33           | 13.32           |
| Q4_K_M       | 22.64           | 12.39           |
| Q5_0         | 22.23           | 13.95           |
| Q5_1         | 22.69           | 13.05           |
| Q5_K_S       | 23.27           | 14.36           |
| Q5_K_M       | 23.92           | 14.10           |
| Q6_K         | 23.72           | 13.95           |
| Q8_0         | 23.79           | 13.24           |
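
To try one of these quantizations locally, download the corresponding GGUF file first. A minimal sketch using huggingface-cli, assuming the Q4_K_M file referenced in the Quick start below:

# Download the Q4_K_M quantization into the current directory
huggingface-cli download onekq-ai/OneSQL-v0.2-Qwen-1.5B-GGUF \
    OneSQL-v0.2-Qwen-1.5B-Q4_K_M.gguf --local-dir .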

Quick start

To use this model, craft your prompt so that it starts with your database schema in the form of CREATE TABLE statements, followed by your natural language query preceded by --. Make sure the prompt ends with SELECT so that the model can finish the query for you. There is no need to set other parameters such as temperature or a max token limit.

PROMPT="CREATE TABLE students (
    id INTEGER PRIMARY KEY,
    name TEXT,
    age INTEGER,
    grade TEXT
);

-- Find the three youngest students
SELECT "

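# Wrap the prompt in the ChatML template used by Qwen models, with a system instruction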
PROMPT=$(printf "<|im_start|>system\nYou are a SQL expert. Return code only.<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n" "$PROMPT")

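# Complete the query with the quantized model via llama.cpp's llama-run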
llama.cpp/build/bin/llama-run file://OneSQL-v0.2-Qwen-1.5B-Q4_K_M.gguf "$PROMPT"

The model response is the rest of the SQL query, without the leading SELECT:

* FROM students ORDER BY age ASC LIMIT 3
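
To execute the result, reattach the SELECT keyword that the prompt ended with. A minimal sketch, assuming the response is captured in a shell variable and the students table lives in a hypothetical SQLite database named school.db:

RESPONSE="* FROM students ORDER BY age ASC LIMIT 3"
# Prepend the leading SELECT before running the query
sqlite3 school.db "SELECT $RESPONSE"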

Caveats

The performance drop from the original model is due to quantization itself and to the lack of beam search support in the llama.cpp framework. Use at your own discretion.
