cguna committed · verified
Commit 93c8acb · Parent(s): a190a64

Update answerability_prediction_lora/README.md

answerability_prediction_lora/README.md CHANGED

@@ -33,7 +33,8 @@ To prompt the LoRA adapter to determine answerability, a special answerability r
 
 ## Quickstart Example
 
-While you can invoke the adapter directly, as outlined below, we highly recommend calling it through [granite-io](https://github.com/ibm-granite/granite-io), which wraps the model with a tailored I/O processor.
+While you can invoke the adapter directly, as outlined below, we highly recommend calling it through [granite-io](https://github.com/ibm-granite/granite-io), which wraps the model with a tailored I/O processor.
+Before running the script, set the `LORA_NAME` parameter to the path of the directory that you downloaded the LoRA adapter. The download process is explained [here](https://huggingface.co/ibm-granite/granite-3.3-8b-rag-agent-lib#quickstart-example).
 
 ```
 import torch
@@ -45,7 +46,7 @@ device=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
 
 ANSWERABILITY_PROMPT = "<|start_of_role|>answerability<|end_of_role|>"
 BASE_NAME = "ibm-granite/granite-3.3-8b-instruct"
-LORA_NAME = "ibm-granite/granite-3.3-8b-lora-rag-answerability-prediction"
+LORA_NAME = "PATH_TO_DOWNLOADED_DIRECTORY"
 
 tokenizer = AutoTokenizer.from_pretrained(BASE_NAME, padding_side='left',trust_remote_code=True)
 model_base = AutoModelForCausalLM.from_pretrained(BASE_NAME,device_map="auto")
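For context on what the edited Quickstart goes on to do with `ANSWERABILITY_PROMPT`: the special answerability role is appended to the rendered conversation so the adapter emits an answerability judgment rather than a normal reply. Below is a minimal, hedged sketch of that prompt-assembly step; the `build_answerability_input` helper and the hand-rolled role rendering are illustrative assumptions standing in for `tokenizer.apply_chat_template()`, not code from this commit.

```python
# Sketch (assumption, not from the commit): appending the special
# answerability role prompt to a conversation rendered in Granite's
# role-marker format, before it would be tokenized and passed to
# model.generate().
ANSWERABILITY_PROMPT = "<|start_of_role|>answerability<|end_of_role|>"

def build_answerability_input(turns):
    """Render (role, text) turns with Granite-style role markers and
    append the answerability role, prompting the LoRA adapter to
    classify whether the documents can answer the last user turn."""
    rendered = "".join(
        f"<|start_of_role|>{role}<|end_of_role|>{text}<|end_of_text|>\n"
        for role, text in turns
    )
    return rendered + ANSWERABILITY_PROMPT

# Example: one retrieved document plus the user question.
prompt = build_answerability_input([
    ("document", "Granite is a family of open LLMs from IBM."),
    ("user", "Who created Granite?"),
])
```

In the real script, `prompt` would be tokenized and fed to the base model with the LoRA adapter attached (e.g. via `peft.PeftModel.from_pretrained(model_base, LORA_NAME)`), and the generated text carries the answerability label.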