---
datasets:
- spider
language:
- en
---

# T5 large LM Adapt for Text to SQL

This model is fine-tuned from the [t5-large-LM-adapt](https://huggingface.co/google/t5-large-lm-adapt) checkpoint.

## Spider and Spider-Syn dataset

The model was fine-tuned on the training splits of the [Spider](https://yale-lily.github.io/spider) and [Spider-Syn](https://github.com/ygan/Spider-Syn/tree/main/Spider-Syn) datasets. Instead of using only the questions, we append the database schema to each question, since we want the model to generate a query over the given database.

_input_:

```
Question: What is the average, minimum, and maximum age for all French musicians?
Schema: "stadium" "Stadium_ID" int , "Location" text , "Name" text , "Capacity" int , "Highest" int , "Lowest" int , "Average" int , foreign_key: primary key: "Stadium_ID" [SEP] "singer" "Singer_ID" int , "Name" text , "Country" text , "Song_Name" text , "Song_release_year" text , "Age" int , "Is_male" bool , foreign_key: primary key: "Singer_ID" [SEP] "concert" "concert_ID" int , "concert_Name" text , "Theme" text , "Year" text , foreign_key: "Stadium_ID" text from "stadium" "Stadium_ID" , primary key: "concert_ID" [SEP] "singer_in_concert" foreign_key: "concert_ID" int from "concert" "concert_ID" , "Singer_ID" text from "singer" "Singer_ID" , primary key: "concert_ID" "Singer_ID"
```

=> _target_:

```
SELECT avg(age), min(age), max(age) FROM singer WHERE country = 'France'
```

When evaluating, we execute the generated query against the corresponding SQLite database.

=> _query result_:

```
[[34.5, 25, 43]]
```
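
A minimal sketch of this evaluation step, assuming a local copy of the Spider databases (the database path below is hypothetical), could look like:

```
import sqlite3

# Hypothetical path to the "concert_singer" database from a local Spider download.
db_path = "spider/database/concert_singer/concert_singer.sqlite"
predicted_sql = "SELECT avg(age), min(age), max(age) FROM singer WHERE country = 'France'"

# Execute the generated query and fetch all result rows.
with sqlite3.connect(db_path) as conn:
    rows = conn.execute(predicted_sql).fetchall()

print(rows)  # e.g. [(34.5, 25, 43)]
```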

## Format of the database schema

The database schema is serialized into the following standardized format the model was trained on:

```
table_name column1_name column1_type column2_name column2_type ... foreign_key: FK_name FK_type from table_name column_name primary key: column_name [SEP]
table_name2 ...
```
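
The original preprocessing code is not included in this card; purely as an illustration, a serializer producing this format from a hypothetical per-table description (the table/column structures below are assumptions, not the training code) might look like:

```
def serialize_table(table):
    # "col_name" col_type , ...
    cols = " , ".join(f'"{name}" {ctype}' for name, ctype in table["columns"])
    # "fk_col" fk_type from "ref_table" "ref_col" , ...
    fks = " , ".join(
        f'"{col}" {ctype} from "{ref_table}" "{ref_col}"'
        for col, ctype, ref_table, ref_col in table["foreign_keys"]
    )
    fk_section = f"foreign_key: {fks} , " if fks else "foreign_key: "
    pk = " ".join(f'"{c}"' for c in table["primary_key"])
    return f'"{table["name"]}" {cols} , {fk_section}primary key: {pk}'


def serialize_schema(tables):
    # Tables are joined with the [SEP] token.
    return " [SEP] ".join(serialize_table(t) for t in tables)


print(serialize_schema([{
    "name": "singer",
    "columns": [("Singer_ID", "int"), ("Name", "text"), ("Country", "text"), ("Age", "int")],
    "foreign_keys": [],
    "primary_key": ["Singer_ID"],
}]))
# "singer" "Singer_ID" int , "Name" text , "Country" text , "Age" int , foreign_key: primary key: "Singer_ID"
```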

## Usage

Here is how to use this model to generate a SQL query for a question over a given database schema, using 🤗 Transformers in PyTorch:

```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_path = 'gaussalgo/T5-LM-Large-text2sql-spider'
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

question = "What is the average, minimum, and maximum age for all French musicians?"
# The serialized schema contains double quotes, so it is wrapped in single quotes here.
schema = '"stadium" "Stadium_ID" int , "Location" text , "Name" text , "Capacity" int , "Highest" int , "Lowest" int , "Average" int , foreign_key: primary key: "Stadium_ID" [SEP] "singer" "Singer_ID" int , "Name" text , "Country" text , "Song_Name" text , "Song_release_year" text , "Age" int , "Is_male" bool , foreign_key: primary key: "Singer_ID" [SEP] "concert" "concert_ID" int , "concert_Name" text , "Theme" text , "Year" text , foreign_key: "Stadium_ID" text from "stadium" "Stadium_ID" , primary key: "concert_ID" [SEP] "singer_in_concert" foreign_key: "concert_ID" int from "concert" "concert_ID" , "Singer_ID" text from "singer" "Singer_ID" , primary key: "concert_ID" "Singer_ID"'

input_text = " ".join(["Question:", question, "Schema:", schema])

model_inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**model_inputs, max_length=512)

# generate() returns a batch of token ids; decode the first (and only) sequence.
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

print("SQL Query:")
print(output_text)
```
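
For the question and schema above, the generated query should come out close to the target shown earlier, e.g. `SELECT avg(age), min(age), max(age) FROM singer WHERE country = 'France'`.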

## Training

The model was trained using the [Adaptor library](https://github.com/gaussalgo/adaptor) 0.2.1 on the training splits of the Spider and Spider-Syn datasets, with the following parameters:

```
# AdaptationArguments and StoppingStrategy come from the Adaptor library
# (import path may differ across versions).
from adaptor.utils import AdaptationArguments, StoppingStrategy

training_arguments = AdaptationArguments(output_dir="train_dir",
                                         learning_rate=5e-5,
                                         stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED,
                                         stopping_patience=8,
                                         save_total_limit=8,
                                         do_train=True,
                                         do_eval=True,
                                         bf16=True,
                                         warmup_steps=1000,
                                         gradient_accumulation_steps=8,
                                         logging_steps=10,
                                         eval_steps=200,
                                         save_steps=1000,
                                         num_train_epochs=10,
                                         evaluation_strategy="steps")
```