BART-LARGE-CNN fine-tuned on SYNTHETIC_TEXT_TO_SQL
Generates a SQL query from a natural-language question and its SQL context.
Model Details
Model Description
BART (facebook/bart-large-cnn) fine-tuned on the gretelai/synthetic_text_to_sql dataset to generate SQL from a natural-language question and its SQL context.
- Model type: BART
- Language(s) (NLP): English
- License: openrail
- Finetuned from model: facebook/bart-large-cnn
- Dataset: gretelai/synthetic_text_to_sql
 
Intended uses & limitations
Demonstrates the capability of a fine-tuned LLM on a downstream text-to-SQL task. Implemented as a personal project.
How to use
query_question_with_context = """sql_prompt: Which economic diversification efforts in
the 'diversification' table have a higher budget than the average budget for all economic diversification efforts in the 'budget' table?
sql_context: CREATE TABLE diversification (id INT, effort VARCHAR(50), budget FLOAT); CREATE TABLE
budget (diversification_id INT, diversification_effort VARCHAR(50), amount FLOAT);"""
Use a pipeline as a high-level helper
from transformers import pipeline
sql_generator = pipeline("text2text-generation", model="SwastikM/bart-large-nl2sql")
sql = sql_generator(query_question_with_context)[0]['generated_text']
print(sql)
Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("SwastikM/bart-large-nl2sql")
model = AutoModelForSeq2SeqLM.from_pretrained("SwastikM/bart-large-nl2sql")
inputs = tokenizer(query_question_with_context, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
sql = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(sql)
Training Details
Training Data
gretelai/synthetic_text_to_sql
Training Procedure
Custom training loop using Hugging Face Accelerate.
Preprocessing
- Encoder Input: "sql_prompt: " + data['sql_prompt'] + " sql_context: " + data['sql_context']
- Decoder Input: data['sql']
 
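The preprocessing above is plain string concatenation and can be sketched as follows (field names are taken from the dataset; the example record below is illustrative, not an actual dataset row):

```python
def format_example(data: dict) -> tuple:
    """Build the (encoder input, decoder target) text pair used for fine-tuning.

    `data` is one record from gretelai/synthetic_text_to_sql, which contains
    the fields 'sql_prompt', 'sql_context', and 'sql' (among others).
    """
    encoder_input = (
        "sql_prompt: " + data["sql_prompt"]
        + " sql_context: " + data["sql_context"]
    )
    decoder_target = data["sql"]
    return encoder_input, decoder_target

# Illustrative record (not an actual dataset row):
record = {
    "sql_prompt": "How many users signed up in 2023?",
    "sql_context": "CREATE TABLE users (id INT, signup_year INT);",
    "sql": "SELECT COUNT(*) FROM users WHERE signup_year = 2023;",
}
src, tgt = format_example(record)
print(src)
```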
Training Hyperparameters
- Optimizer: AdamW
- lr: 2e-5
- decay: linear
- num_warmup_steps: 0
- batch_size: 8
- num_training_steps: 12500
 
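Given these hyperparameters, the learning-rate schedule (linear decay to zero, no warmup) can be sketched as a plain function. This mirrors the behaviour of `transformers.get_linear_schedule_with_warmup` but is an illustrative re-implementation, not the actual training code:

```python
def linear_lr(step: int,
              base_lr: float = 2e-5,
              num_warmup_steps: int = 0,
              num_training_steps: int = 12500) -> float:
    """Learning rate at a given optimizer step: linear warmup (none here,
    since num_warmup_steps=0), then linear decay from base_lr to zero."""
    if step < num_warmup_steps:
        # Ramp up linearly from 0 to base_lr over the warmup phase.
        return base_lr * step / max(1, num_warmup_steps)
    # Decay linearly from base_lr at the end of warmup to 0 at the last step.
    remaining = num_training_steps - step
    return base_lr * max(0.0, remaining / max(1, num_training_steps - num_warmup_steps))

print(linear_lr(0))      # full base learning rate at the start
print(linear_lr(12500))  # decayed to zero at the final step
```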
Hardware
- GPU: P100
 
Citing Dataset and BaseModel
@software{gretel-synthetic-text-to-sql-2024,
  author = {Meyer, Yev and Emadi, Marjan and Nathawani, Dhruv and Ramaswamy, Lipika and Boyd, Kendrick and Van Segbroeck, Maarten and Grossman, Matthew and Mlocek, Piotr and Newberry, Drew},
  title = {{Synthetic-Text-To-SQL}: A synthetic dataset for training language models to generate SQL queries from natural language prompts},
  month = {April},
  year = {2024},
  url = {https://huggingface.co/datasets/gretelai/synthetic-text-to-sql}
}
@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and
               Yinhan Liu and
               Naman Goyal and
               Marjan Ghazvininejad and
               Abdelrahman Mohamed and
               Omer Levy and
               Veselin Stoyanov and
               Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
               Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
Additional Information
- Github: Repository
 
Acknowledgment
Thanks to @AI at Meta for the pre-trained model and to @Gretel.ai for the dataset.
Model Card Authors
Swastik Maiti
Evaluation results
- ROUGE-1 on gretelai/synthetic_text_to_sql (self-reported): 55.690
- ROUGE-2 on gretelai/synthetic_text_to_sql (self-reported): 42.990
- ROUGE-L on gretelai/synthetic_text_to_sql (self-reported): 51.430
- ROUGE-LSUM on gretelai/synthetic_text_to_sql (self-reported): 51.400
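The scores above are self-reported. Full ROUGE implementations (e.g. the rouge_score package used by Hugging Face `evaluate`) apply stemming and more careful tokenisation, but the core ROUGE-1 idea reduces to unigram-overlap F1, sketched here for illustration only:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between prediction and
    reference, with plain whitespace tokenisation and no stemming."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each unigram counts at most min(pred, ref) times.
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("SELECT * FROM users", "SELECT id FROM users"))  # 0.75
```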