AI & ML interests
Run open source LLMs on CPU and GPU, locally, in Rust and Wasm, without changing the binary!

Collections
LlamaEdge compatible quants for SmolVLM2 models.
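Any of the GGUF quants below can be served with LlamaEdge's `llama-api-server.wasm` on the WasmEdge runtime; the same Wasm binary runs unchanged on CPU or GPU. A minimal sketch, using the Qwen2.5-7B-Instruct repo as the example — the exact quant file name (`Q5_K_M`) is an assumption, so check the repo's file list before downloading:

```shell
# Fetch a quantized model from the second-state repo
# (file name is an assumption; check the repo's "Files" tab for the exact quant)
curl -LO https://huggingface.co/second-state/Qwen2.5-7B-Instruct-GGUF/resolve/main/Qwen2.5-7B-Instruct-Q5_K_M.gguf

# Fetch the LlamaEdge API server Wasm binary
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm

# Serve the model; --nn-preload hands the GGUF file to the WASI-NN GGML backend
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Qwen2.5-7B-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template chatml \
  --model-name Qwen2.5-7B-Instruct
```

Each model family below needs the matching `--prompt-template` value (e.g. `chatml` for Qwen, `llama-3-chat` for Llama 3 models); the model card in each repo states which one to use.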
LlamaEdge compatible quants for Qwen3 models.
LlamaEdge compatible quants for EXAONE-3.5 models.
LlamaEdge compatible quants for Gemma-3-it models.
- second-state/gemma-3-27b-it-GGUF • Image-Text-to-Text • 27B • Updated • 1.23k downloads
- second-state/gemma-3-12b-it-GGUF • Image-Text-to-Text • 12B • Updated • 1.27k downloads • 1 like
- second-state/gemma-3-4b-it-GGUF • Image-Text-to-Text • 4B • Updated • 1.48k downloads
- second-state/gemma-3-1b-it-GGUF • Text Generation • 1.0B • Updated • 1.31k downloads
- second-state/stable-diffusion-v1-5-GGUF • Text-to-Image • 1B • Updated • 16k downloads • 11 likes
- second-state/stable-diffusion-v-1-4-GGUF • Text-to-Image • 1B • Updated • 317 downloads • 3 likes
- second-state/stable-diffusion-3.5-medium-GGUF • Text-to-Image • 0.7B • Updated • 3.75k downloads • 9 likes
- second-state/stable-diffusion-3.5-large-GGUF • Text-to-Image • 0.7B • Updated • 8.17k downloads • 8 likes
LlamaEdge compatible quants for Qwen2-VL models.
LlamaEdge compatible quants for tool-use models.
- second-state/Llama-3-Groq-8B-Tool-Use-GGUF • Text Generation • 8B • Updated • 1.21k downloads • 2 likes
- second-state/Llama-3-Groq-70B-Tool-Use-GGUF • Text Generation • 71B • Updated • 81 downloads • 2 likes
- second-state/Hermes-2-Pro-Llama-3-8B-GGUF • Text Generation • 8B • Updated • 1.13k downloads • 2 likes
- second-state/Nemotron-Mini-4B-Instruct-GGUF • 4B • Updated • 54 downloads
LlamaEdge compatible quants for Llama 3.2 3B and 1B Instruct models.
LlamaEdge compatible quants for Yi-1.5 chat models.
- second-state/Yi-1.5-9B-Chat-16K-GGUF • Text Generation • 9B • Updated • 162 downloads • 5 likes
- second-state/Yi-1.5-34B-Chat-16K-GGUF • Text Generation • 34B • Updated • 54 downloads • 4 likes
- second-state/Yi-1.5-9B-Chat-GGUF • Text Generation • 9B • Updated • 1.13k downloads • 8 likes
- second-state/Yi-1.5-6B-Chat-GGUF • Text Generation • 6B • Updated • 1.17k downloads • 4 likes
LlamaEdge compatible quants for Qwen2.5-VL models.
LlamaEdge compatible quants for Tessa-T1 models.
LlamaEdge compatible quants for EXAONE-Deep models.
LlamaEdge compatible quants for DeepSeek-R1 distilled models.
- second-state/DeepSeek-R1-Distill-Qwen-1.5B-GGUF • Text Generation • 2B • Updated • 1.19k downloads
- second-state/DeepSeek-R1-Distill-Qwen-7B-GGUF • Text Generation • 8B • Updated • 1.09k downloads • 1 like
- second-state/DeepSeek-R1-Distill-Qwen-14B-GGUF • Text Generation • 15B • Updated • 69 downloads
- second-state/DeepSeek-R1-Distill-Qwen-32B-GGUF • Text Generation • 33B • Updated • 59 downloads
LlamaEdge compatible quants for Falcon3-Instruct models.
- second-state/Falcon3-10B-Instruct-GGUF • Text Generation • 10B • Updated • 93 downloads • 1 like
- second-state/Falcon3-7B-Instruct-GGUF • Text Generation • 7B • Updated • 63 downloads • 2 likes
- second-state/Falcon3-3B-Instruct-GGUF • Text Generation • 3B • Updated • 65 downloads
- second-state/Falcon3-1B-Instruct-GGUF • Text Generation • 2B • Updated • 251 downloads
LlamaEdge compatible quants for Qwen2.5-Coder models.
- second-state/Qwen2.5-Coder-0.5B-Instruct-GGUF • Text Generation • 0.5B • Updated • 135 downloads
- second-state/Qwen2.5-Coder-3B-Instruct-GGUF • Text Generation • 3B • Updated • 1.09k downloads
- second-state/Qwen2.5-Coder-14B-Instruct-GGUF • Text Generation • 15B • Updated • 87 downloads
- second-state/Qwen2.5-Coder-32B-Instruct-GGUF • Text Generation • 33B • Updated • 1.16k downloads
LlamaEdge compatible quants for InternLM-2.5 models.
LlamaEdge compatible quants for Qwen 2.5 instruct and coder models.
- second-state/Qwen2.5-72B-Instruct-GGUF • Text Generation • 73B • Updated • 1.19k downloads • 2 likes
- second-state/Qwen2.5-32B-Instruct-GGUF • Text Generation • 33B • Updated • 1.08k downloads • 1 like
- second-state/Qwen2.5-14B-Instruct-GGUF • Text Generation • 15B • Updated • 1.12k downloads • 1 like
- second-state/Qwen2.5-7B-Instruct-GGUF • Text Generation • 8B • Updated • 1.11k downloads
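Once one of these chat quants is loaded into LlamaEdge's `llama-api-server.wasm`, it exposes an OpenAI-compatible HTTP API. A sketch of a request against a locally running server — the port (8080) is LlamaEdge's default, and the model name is whatever was passed as `--model-name` at startup:

```shell
# Query the local LlamaEdge server via its OpenAI-compatible endpoint
# (assumes the server was started with --model-name Qwen2.5-7B-Instruct)
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Qwen2.5-7B-Instruct",
        "messages": [
          {"role": "user", "content": "Say hello in one sentence."}
        ]
      }'
```

Because the API is OpenAI-compatible, existing OpenAI client libraries can be pointed at the local server by overriding the base URL.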
LlamaEdge compatible quants for FLUX.1 models.
- second-state/FLUX.1-schnell-GGUF • Text-to-Image • 0.1B • Updated • 1.09k downloads • 12 likes
- second-state/FLUX.1-dev-GGUF • Text-to-Image • 0.1B • Updated • 1.07k downloads • 11 likes
- second-state/FLUX.1-Redux-dev-GGUF • Text-to-Image • 0.1B • Updated • 383 downloads • 11 likes
- second-state/FLUX.1-Canny-dev-GGUF • Text-to-Image • 12B • Updated • 304 downloads • 13 likes