longsiyu committed
Commit 4d1eef5 · 1 Parent(s): 0764688

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -15,6 +15,8 @@ license: apache-2.0
 
15
16  This model `gpt-oss-120b-hallu-miti` is a LoRA adapter based on `gpt-oss-120b` that mitigates hallucinations by fine-tuning with a single data point.
17
18 + This is NOT SFT or RL. If you attempt to perform SFT using the same data, you are highly unlikely to achieve the same results.
19 +
20  This model is designed solely to demonstrate fine-tuning techniques with a small amount of data. You should not use this model for production purposes.
21
22  ## Evaluation