---
license: apache-2.0
base_model:
- openai/gpt-oss-120b
---

This is the BF16 version and cannot be hosted with vLLM. TensorRT-LLM is supported but not tested.
For the MXFP4 version that is vLLM-compatible, see [gpt-oss-120b-uncensored-mxfp4](https://huggingface.co/huizimao/gpt-oss-120b-uncensored-mxfp4/).

Fine-tuning was done with LoRA on the [Amazon FalseReject](https://huggingface.co/datasets/AmazonScience/FalseReject) train set (800 samples).
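
The LoRA idea itself can be sketched in a few lines: the frozen pretrained weight `W` is augmented with a low-rank correction `(alpha / r) * B @ A`, and only `A` and `B` are trained. The shapes, rank, and scaling below are illustrative, not the actual configuration used for this model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight of one linear layer (d_out x d_in).
d_out, d_in, r, alpha = 64, 128, 8, 16  # rank and alpha are illustrative
W = rng.normal(size=(d_out, d_in))

# LoRA adapters: B starts at zero, so training begins exactly at the base model.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B (A x); only A and B receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
y = lora_forward(x, W, A, B, alpha, r)

# After training, the adapter can be merged back into W for
# zero-overhead inference (this is what a merged checkpoint stores).
W_merged = W + (alpha / r) * (B @ A)
```

Because `B` is initialized to zero, the merged weight equals the base weight before any training step, which is why LoRA starts from the base model's behavior.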

PTQ was done with [NVIDIA ModelOpt](https://github.com/NVIDIA/TensorRT-Model-Optimizer).
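
For context on the MXFP4 target of the PTQ step, here is a generic fake-quantization sketch of the OCP Microscaling FP4 format, not ModelOpt's actual implementation: each block of 32 values shares one power-of-two scale, and each element is stored as a signed FP4 (E2M1) value whose magnitude lies in {0, 0.5, 1, 1.5, 2, 3, 4, 6}.

```python
import numpy as np

# FP4 (E2M1) representable magnitudes per the OCP Microscaling (MX) spec.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize_block(block):
    """Fake-quantize one block of 32 values to MXFP4 (illustrative sketch).

    A shared power-of-two scale places the block's max magnitude near the
    top of the FP4 range; each element is rounded to the nearest FP4 value.
    """
    amax = np.max(np.abs(block))
    if amax == 0.0:
        return np.zeros_like(block)
    # Shared power-of-two scale: the largest E2M1 value is 6 = 1.5 * 2^2,
    # so subtract 2 from the block's max exponent.
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2)
    scaled = np.clip(np.abs(block) / scale, 0.0, 6.0)
    # Round each magnitude to the nearest FP4 grid point, then restore sign.
    idx = np.abs(scaled[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(block) * FP4_GRID[idx] * scale

w = np.random.default_rng(0).normal(size=32)
w_q = mxfp4_quantize_block(w)  # dequantized (fake-quantized) block
```

In a real deployment the FP4 codes and the shared scale are stored packed; the sketch returns the dequantized values so the rounding error is easy to inspect.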

Evaluation results were obtained on the [Amazon FalseReject](https://huggingface.co/datasets/AmazonScience/FalseReject) test set (300 samples).
15
+
16
+ | Model Variants | False refusal rate |
17
+ |----------|-------------------|
18
+ | gpt-oss-20b original (MXFP4) | 70% |
19
+ | LoRA (BF16) - this model | 6% |
20
+ | LoRA + PTQ (MXFP4) | 24% |
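
Once each response is labeled as a refusal or not, the metric above is a simple proportion. The judging step (classifying a response as a refusal) is the hard part and is not shown here; the labels below are a hypothetical example, not actual evaluation data.

```python
# Hypothetical refusal judgments for benign prompts from a test set:
# True = the model refused a benign prompt (a false refusal).
judged = [True, False, False, True, False, False, False, False, False, False]

false_refusal_rate = sum(judged) / len(judged)
print(f"False refusal rate: {false_refusal_rate:.0%}")  # prints "False refusal rate: 20%"
```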

Code examples, documentation, and further QAT checkpoints will be released soon.