---
license: apache-2.0
base_model:
- ByteDance-Seed/Seed-OSS-36B-Instruct
pipeline_tag: text-generation
---

## How to build

```bash
sudo apt-get update
sudo apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first
```

## How to run

```bash
./llama.cpp/build/bin/llama-server \
  -hf yarikdevcom/Seed-OSS-36B-Instruct-GGUF:Q3_K_M \
  --ctx-size 4096 \
  --n-gpu-layers 99 \
  --temp 1.1 \
  --top-p 0.95 \
  --port 8999 \
  --host 0.0.0.0 \
  --flash-attn \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```

All credit goes to the PR below; I only applied the changes from one of its comments. Based on this PR: https://github.com/ggml-org/llama.cpp/pull/15490
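
Once `llama-server` is running, it exposes llama.cpp's OpenAI-compatible HTTP API on the configured host and port. As a minimal sketch (assuming the server above is reachable at `localhost:8999`; the prompt text is just an example), a chat completion request can be built and sent with only the Python standard library:

```python
import json
import urllib.request

# OpenAI-style chat completion payload for llama-server's
# /v1/chat/completions endpoint (port 8999 as configured above).
payload = {
    "messages": [
        {"role": "user", "content": "Summarize flash attention in one sentence."}
    ],
    # Match the sampling settings used at launch.
    "temperature": 1.1,
    "top_p": 0.95,
    "max_tokens": 256,
}

req = urllib.request.Request(
    "http://localhost:8999/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request against a running server:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client (for example the official `openai` Python package pointed at `base_url="http://localhost:8999/v1"`) works the same way.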