Reubencf committed
Commit 268d7e0 · verified · Parent: abe70e4

Update README.md

Files changed (1)
  1. README.md +14 -7
README.md CHANGED
@@ -20,18 +20,25 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 ## Quick start
 
 ```python
+# Use a pipeline as a high-level helper
 from transformers import pipeline
 
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="None", device="cuda")
-output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
-print(output["generated_text"])
+pipe = pipeline("text-generation", model="Reubencf/gemma3-konkani")
+messages = [
+{"role": "user", "content": "Who are you?"},
+]
+pipe(messages)
 ```
+Using PEFT
+```python
+from peft import PeftModel
+from transformers import AutoModelForCausalLM
 
-## Training procedure
-
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/reubencf/huggingface/runs/x8nvaeig)
+base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
+model = PeftModel.from_pretrained(base_model, "Reubencf/gemma3-konkani")
+```
 
+## Training procedure
 
 This model was trained with SFT.
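`pipe(messages)` in the updated quick start returns the generated conversation rather than printing it; as in the removed example, one option is to pass `return_full_text=False` and print `output[0]["generated_text"]`. The PEFT snippet, in turn, only loads the adapter onto the base model. Below is a minimal sketch of generating with it, assuming the `google/gemma-3-4b-it` tokenizer and its chat template; the generation settings are illustrative, not taken from the model card.

```python
# Sketch: generate with the adapter loaded via PEFT.
# Assumptions: tokenizer/chat template of google/gemma-3-4b-it; device placement
# and decoding settings are illustrative, not from the model card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "Reubencf/gemma3-konkani")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```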
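The training procedure section only records that the model was fine-tuned with SFT using TRL; the data and hyperparameters are not part of this commit. Purely as an illustration, a hypothetical TRL `SFTTrainer` run with a LoRA adapter could be set up as below; the dataset file, LoRA settings, and training arguments are placeholders, not the actual configuration.

```python
# Hypothetical SFT-with-LoRA sketch using TRL; dataset path, LoRA settings, and
# training arguments are placeholders, not the configuration used for this model.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; SFTTrainer expects e.g. a "messages" (chat) or "text" column.
dataset = load_dataset("json", data_files="konkani_sft.jsonl", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",  # same base model as in the PEFT example
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3-konkani-sft"),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```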