How to build:

```shell
sudo apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first
```
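A note on the bare `-j` above: it allows unbounded build parallelism, which can exhaust RAM while the CUDA kernels compile. A small sketch capping jobs at the core count (the explicit cmake line is left commented so the snippet runs standalone, assuming a build tree already configured as above):

```shell
# Cap build parallelism at the number of CPU cores; an unbounded -j can
# trigger out-of-memory failures when compiling the CUDA kernels.
JOBS=$(nproc)
echo "building with $JOBS parallel jobs"
# cmake --build llama.cpp/build --config Release -j "$JOBS" --clean-first
```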
How to run:

```shell
./llama.cpp/build/bin/llama-server -hf yarikdevcom/Seed-OSS-36B-Instruct-GGUF:Q3_K_M \
  --ctx-size 4096 --n-gpu-layers 99 --temp 1.1 --top-p 0.95 \
  --port 8999 --host 0.0.0.0 --flash-attn \
  --cache-type-k q8_0 --cache-type-v q8_0
```
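Once the server is running, `llama-server` exposes an OpenAI-compatible HTTP API. A minimal smoke test with curl (assuming the server is up locally on port 8999 as configured above; the prompt text is just an example):

```shell
# Send a chat completion request to the running llama-server instance.
curl -s http://localhost:8999/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Say hello in one sentence."}
        ],
        "max_tokens": 64
      }'
```

The response comes back as JSON in the usual chat-completions shape, so existing OpenAI client libraries can also be pointed at this endpoint.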
All credit goes to https://github.com/ggml-org/llama.cpp/pull/15490; I just applied the changes from one of its comments.
Downloads last month: 1,619
Hardware compatibility

Available quantizations: 2-bit, 3-bit, 4-bit, 6-bit, 8-bit
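The size estimation widget requires login, but a rough back-of-the-envelope sketch is easy: file size ≈ parameter count × bits per weight ÷ 8. The bits-per-weight values below are approximate averages per quant type (my assumption, not official figures):

```shell
# Rough GGUF file-size estimate in GB: params_in_billions * bits_per_weight / 8.
# Bits-per-weight values are approximate averages per quant type (assumption).
estimate_gb() { awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 }'; }

estimate_gb 36 2.6   # ~2-bit (Q2_K-class)
estimate_gb 36 3.9   # ~3-bit (Q3_K_M, the quant used above)
estimate_gb 36 4.8   # ~4-bit (Q4_K_M-class)
estimate_gb 36 6.6   # ~6-bit (Q6_K-class)
estimate_gb 36 8.5   # ~8-bit (Q8_0-class)
```

For VRAM planning, add KV-cache and runtime overhead on top of the file size; the q8_0 cache types in the run command above roughly halve the KV-cache cost versus f16.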
Model tree for yarikdevcom/Seed-OSS-36B-Instruct-GGUF

Base model: ByteDance-Seed/Seed-OSS-36B-Instruct