MohamedShakhsak committed · verified
Commit 3cd60cb · 1 Parent(s): 83c19fe

Update README.md

Files changed (1):
  1. README.md +21 -20
README.md CHANGED
@@ -8,25 +8,6 @@ tags:
 - peft
 - lora
 ---
-
-# BERT-QA-SQuAD2 LoRA Adapter
-
-A **Parameter-Efficient (LoRA)** adapter for `bert-base-uncased`, fine-tuned on **SQuAD 2.0** for extractive question answering. Optimized for low-rank adaptation (LoRA) to reduce memory usage while preserving performance.
-
-## Model Details
-
-### Model Description
-- **Developed by:** [Your Name/Organization]
-- **Model type:** PEFT (LoRA) adapter for BERT
-- **Language(s):** English
-- **License:** Apache 2.0
-- **Finetuned from:** `bert-base-uncased`
-- **Adapter Size:** ~3MB (vs. ~440MB for full BERT)
-
-### Model Sources
-- **Repository:** [GitHub link if applicable]
-- **Demo:** [Hugging Face Spaces link if available]
-
 ## Uses
 
 ### Direct Use
@@ -48,4 +29,24 @@ qa_pipeline = pipeline("question-answering", model=merged_model, tokenizer=token
 
 context = "Hugging Face is based in New York City."
 question = "Where is Hugging Face located?"
-result = qa_pipeline(question=question, context=context) # Output: {'answer': 'New York City', ...}
+result = qa_pipeline(question=question, context=context) # Output: {'answer': 'New York City', ...}
+
+```
+# BERT-QA-SQuAD2 LoRA Adapter
+
+A **Parameter-Efficient (LoRA)** adapter for `bert-base-uncased`, fine-tuned on **SQuAD 2.0** for extractive question answering. Optimized for low-rank adaptation (LoRA) to reduce memory usage while preserving performance.
+
+## Model Details
+
+### Model Description
+- **Developed by:** [Your Name/Organization]
+- **Model type:** PEFT (LoRA) adapter for BERT
+- **Language(s):** English
+- **License:** Apache 2.0
+- **Finetuned from:** `bert-base-uncased`
+- **Adapter Size:** ~3MB (vs. ~440MB for full BERT)
+
+### Model Sources
+- **Repository:** [GitHub link if applicable]
+- **Demo:** [Hugging Face Spaces link if available]
+
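
Note: the second hunk's context references a `merged_model` that the visible lines never construct. A minimal sketch of how that object is typically built with `peft`, assuming the adapter is published under a hypothetical repo id (`MohamedShakhsak/bert-qa-squad2-lora` is a placeholder, not confirmed by this diff):

```python
# Sketch: load the LoRA adapter onto bert-base-uncased, merge it, and run
# the QA pipeline from the README's example. The adapter repo id below is
# a hypothetical placeholder.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
from peft import PeftModel

base_model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Attach the low-rank adapter weights, then fold them into the base weights
# so the result behaves like a plain BERT QA checkpoint.
peft_model = PeftModel.from_pretrained(base_model, "MohamedShakhsak/bert-qa-squad2-lora")
merged_model = peft_model.merge_and_unload()

qa_pipeline = pipeline("question-answering", model=merged_model, tokenizer=tokenizer)

context = "Hugging Face is based in New York City."
question = "Where is Hugging Face located?"
result = qa_pipeline(question=question, context=context)
print(result["answer"])  # expected: "New York City"
```

Since SQuAD 2.0 includes unanswerable questions, passing `handle_impossible_answer=True` in the pipeline call lets it return an empty answer when the context contains none.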
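
For readers wondering how a ~3 MB adapter pairs with a ~440 MB base model, here is a sketch of a typical LoRA setup for extractive QA with `peft`; the rank, alpha, dropout, and target modules are assumptions, since the card does not state the training hyperparameters:

```python
# Sketch of a plausible LoRA configuration for BERT extractive QA.
# r, lora_alpha, lora_dropout, and target_modules are assumed values,
# not taken from this model card.
from transformers import AutoModelForQuestionAnswering
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,    # extractive question answering
    r=8,                                # assumed low rank
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # common choice for BERT attention
)
peft_model = get_peft_model(base, config)
peft_model.print_trainable_parameters()  # prints the small trainable fraction
```

With r=8 on the query and value projections of `bert-base-uncased` (12 layers, hidden size 768), LoRA trains about 12 × 2 × 2 × 768 × 8 ≈ 0.3M parameters, roughly 1.2 MB in fp32; the card's ~3 MB figure would correspond to a somewhat larger rank or more target modules.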