huihui-ai committed (verified)
Commit 77c4a1f · Parent(s): d0b5d50

Update README.md

Files changed (1): README.md (+3 −3)

README.md CHANGED
@@ -30,12 +30,12 @@ cd deepseek-ai/DeepSeek-V3/inference
 python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-0528/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16
 ```
 ## BF16 to f16.gguf
-1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script to convert DeepSeek-R1-0528-bf16 to GGUF format; this requires approximately 1.3 TB of additional space.
+1. Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) conversion script to convert DeepSeek-R1-0528-bf16 to GGUF format; this requires approximately 1.3 TB of additional space.
 ```
 python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-f16.gguf --outtype f16
 ```
-2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) quantization tool to quantize the model (llama-quantize must be compiled first);
-other [quant options](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp) are available.
+2. Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) quantization tool to quantize the model (llama-quantize must be compiled first);
+other [quant options](https://github.com/ggml-org/llama.cpp/blob/master/tools/quantize/quantize.cpp) are available.
 Convert to Q2_K first; this requires approximately 227 GB of additional space.
 ```
 llama-quantize /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-Q2_K.gguf Q2_K
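Taken together, the two GGUF steps in this README need roughly 1.3 TB plus 227 GB of free space on top of the BF16 weights. A minimal pre-flight check could look like the sketch below; the size figures come from the README, and the `/home/admin/models` path is only an assumption carried over from its example commands.

```shell
#!/bin/sh
# Rough pre-flight disk check before conversion (sizes from the README:
# ~1300 GB for the f16 GGUF, ~227 GB for the Q2_K quant).
NEED_GB=$((1300 + 227))
echo "Extra space needed: ~${NEED_GB} GB"

# Compare against free space on the target volume (path is an assumption):
FREE_GB=$(df -k /home/admin/models 2>/dev/null | awk 'NR==2 {print int($4/1048576)}')
if [ -n "$FREE_GB" ] && [ "$FREE_GB" -lt "$NEED_GB" ]; then
  echo "Warning: only ${FREE_GB} GB free"
fi
```

Intermediate files (the f16 GGUF in particular) can be deleted once the quantized model is produced, so the peak usage, not the sum of all outputs, is what matters.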