giladgd committed
Commit e17414b · verified · 1 Parent(s): 54cc772

docs: fix typos

Files changed (1): README.md (+2 −2)
README.md CHANGED
````diff
@@ -28,7 +28,7 @@ base_model: openai/gpt-oss-20b
 * **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
 
 > [!NOTE]
-> Refer to the [original model card](https://huggingface.co/openai/gpt-oss-20b) for more details on the model.
+> Refer to the [original model card](https://huggingface.co/openai/gpt-oss-20b) for more details on the model
 
 # Quants
 | Link | [URI](https://node-llama-cpp.withcat.ai/cli/pull) | Size |
@@ -87,7 +87,7 @@ console.log("AI: " + a1);
 ```
 
 > [!TIP]
-> Read the [getting started guide](https://node-llama-cpp.withcat.ai/guide/) to quickly scaffold a new `node-llama-cpp` project.
+> Read the [getting started guide](https://node-llama-cpp.withcat.ai/guide/) to quickly scaffold a new `node-llama-cpp` project
 
 #### Customize inference options
 Set [Harmoy](https://cookbook.openai.com/articles/openai-harmony) options using [`HarmonyChatWrapper`](https://node-llama-cpp.withcat.ai/api/classes/HarmonyChatWrapper):
````