Active filters: vllm
| Model | Task | Size | Downloads | Likes |
|---|---|---|---|---|
| RedHatAI/DeepSeek-Coder-V2-Lite-Instruct-FP8 | Text Generation | 16B | 8.39k | 7 |
| RedHatAI/DeepSeek-Coder-V2-Lite-Base-FP8 | Text Generation | 16B | 20 | |
| mistralai/Mistral-Nemo-Base-2407 | | 12B | 27.6k | 320 |
| mgoin/Mistral-Nemo-Instruct-2407-FP8-Dynamic | Text Generation | 12B | 173 | |
| mgoin/Mistral-Nemo-Instruct-2407-FP8-KV | Text Generation | 12B | 3 | |
| RedHatAI/Mistral-Nemo-Instruct-2407-FP8 | Text Generation | 12B | 123k | 18 |
| FlorianJc/Mistral-Nemo-Instruct-2407-vllm-fp8 | Text Generation | 12B | 15 | 8 |
| RedHatAI/DeepSeek-Coder-V2-Base-FP8 | Text Generation | 236B | 9 | |
| RedHatAI/DeepSeek-Coder-V2-Instruct-FP8 | Text Generation | 236B | 60 | 7 |
| mgoin/Minitron-4B-Base-FP8 | Text Generation | 4B | 3 | 3 |
| mgoin/Minitron-8B-Base-FP8 | Text Generation | 8B | 3 | 3 |
| mgoin/nemotron-3-8b-chat-4k-sft-hf | Text Generation | 9B | 10 | |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 | Text Generation | 8B | 93.6k | 42 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic | Text Generation | 8B | 43.7k | 5 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8-dynamic | Text Generation | 71B | 1.3k | 7 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8 | Text Generation | 71B | 5.25k | 50 |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8 | Text Generation | 406B | 2.18k | 31 |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8-dynamic | Text Generation | 406B | 3.52k | 15 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a16 | Text Generation | 3B | 2k | 10 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 | Text Generation | 8B | 58.5k | 17 |
| mgoin/Nemotron-4-340B-Base-hf | Text Generation | 341B | 4 | 1 |
| mgoin/Nemotron-4-340B-Base-hf-FP8 | Text Generation | 341B | 90 | 2 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a16 | Text Generation | 19B | 873 | 5 |
| mgoin/Nemotron-4-340B-Instruct-hf | Text Generation | 341B | 7 | 4 |
| mgoin/Nemotron-4-340B-Instruct-hf-FP8 | Text Generation | 341B | 33 | 3 |
| FlorianJc/ghost-8b-beta-vllm-fp8 | Text Generation | 8B | 5 | |
| FlorianJc/Meta-Llama-3.1-8B-Instruct-vllm-fp8 | Text Generation | 8B | 8 | |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 | Text Generation | 2B | 40.5k | 29 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a8 | Text Generation | 71B | 16.4k | 21 |
| RedHatAI/Meta-Llama-3.1-8B-FP8 | Text Generation | 8B | 5.22k | 8 |
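The listing above is filtered by the vllm library tag, so these checkpoints are published to be loaded directly with vLLM. The snippet below is a minimal sketch of that workflow using vLLM's offline `LLM` API, with RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 from the list as the example; the prompt and sampling settings are illustrative, and it assumes a recent vLLM release on FP8-capable hardware.

```python
# Minimal sketch: loading one of the FP8 checkpoints listed above with vLLM's
# offline API. Assumes a recent vLLM release and an FP8-capable GPU; for
# pre-quantized checkpoints like this one, vLLM picks up the quantization
# scheme from the model's config, so no extra quantization flag is passed here.
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8")
params = SamplingParams(temperature=0.2, max_tokens=128)

# Generate a completion for a single illustrative prompt.
outputs = llm.generate(["Explain FP8 quantization in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```

The same checkpoints can also be served over HTTP with `vllm serve <model-id>`, which exposes an OpenAI-compatible endpoint; the offline API shown here is just the shortest path to a working example.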