huizimao committed on
Commit ea68ec6 · verified · 1 Parent(s): 91cb4ea

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -7,6 +7,8 @@ base_model:
 This is the BF16 version and cannot be hosted with vLLM. TensorRT-LLM is supported but not tested.
 For the MXFP4 version that is vLLM compatible, check out [gpt-oss-120b-uncensored-mxfp4](https://huggingface.co/huizimao/gpt-oss-120b-uncensored-mxfp4/)
 
+GGUF is available at [bartowski/huizimao_gpt-oss-120b-uncensored-bf16-GGUF](https://huggingface.co/bartowski/huizimao_gpt-oss-120b-uncensored-bf16-GGUF) (Shout out to Bartowski!)
+
 Finetuning is done by LoRA on [Amazon FalseReject](https://huggingface.co/datasets/AmazonScience/FalseReject) train set with 800 samples.
 
 PTQ is done with [NVIDIA ModelOpt](https://github.com/NVIDIA/TensorRT-Model-Optimizer)
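
The README above notes that the MXFP4 sibling repo (not this BF16 checkpoint) is the one that can be hosted with vLLM. A minimal sketch of what that might look like, assuming the model ID from the README link; the tensor-parallel degree and sampling settings are illustrative assumptions, not something specified in this commit:

```python
# Minimal sketch, assuming the vLLM-compatible MXFP4 checkpoint linked in the README.
# Per the README, the BF16 checkpoint in this repo cannot be hosted with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="huizimao/gpt-oss-120b-uncensored-mxfp4",  # model ID from the README link
    tensor_parallel_size=8,  # assumption: a 120B model is sharded across several GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize the purpose of the FalseReject dataset."], params)
print(outputs[0].outputs[0].text)
```

For an OpenAI-compatible endpoint instead of offline inference, the equivalent would be `vllm serve huizimao/gpt-oss-120b-uncensored-mxfp4` with a matching tensor-parallel setting.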