---
language:
- en
pipeline_tag: text2text-generation
metrics:
- f1
tags:
- grammatical error correction
- GEC
- english
---

This is a fine-tuned version of LLaMA 2 (7B) trained on the Spider and sql-create-context datasets.

To initialize the model:

    
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

    model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-english-grammar-corrector")
    
    
Load the tokenizer and prepare an input/target pair:

    
    tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-english-grammar-corrector", src_lang="en_XX", tgt_lang="en_XX")

    input = tokenizer("I was here yesterday to studying",
                      text_target="I was here yesterday to study", return_tensors='pt')

To generate text using the model:

    output = model.generate(input["input_ids"], attention_mask=input["attention_mask"],
                            forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
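
To turn the generated token ids back into text, a decoding step can be added. This is a minimal sketch, assuming the `model`, `tokenizer`, and `output` objects from the snippets above; it is not part of the original card:

    # decode the generated ids into the corrected sentence
    corrected = tokenizer.batch_decode(output, skip_special_tokens=True)
    print(corrected[0])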


Training of the model is performed with the following forward pass, where the model output h provides both the logits and the loss:

    h = model(input_ids=input["input_ids"],
              attention_mask=input["attention_mask"],
              labels=input["labels"])
    loss, logits = h.loss, h.logits
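
A minimal training step built on this forward pass might look as follows. This is a sketch under assumed defaults (standard PyTorch AdamW with an illustrative learning rate), not the original training configuration:

    import torch

    # illustrative optimizer and learning rate, not taken from the original training run
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    model.train()
    h = model(input_ids=input["input_ids"],
              attention_mask=input["attention_mask"],
              labels=input["labels"])
    h.loss.backward()        # backpropagate the sequence-to-sequence loss
    optimizer.step()
    optimizer.zero_grad()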