---
base_model:
- CohereForAI/c4ai-command-a-03-2025
---
This is a W8A8-FP8 quant of CohereForAI/c4ai-command-a-03-2025, created using [llm-compressor](https://github.com/vllm-project/llm-compressor). It can be loaded with [vllm](https://github.com/vllm-project/vllm).
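A minimal loading sketch with vLLM's offline API, assuming FP8-capable hardware (e.g. Hopper/Ada) and a recent vLLM release; the model id below is a placeholder for this quant's actual Hugging Face repo path:

```python
from vllm import LLM, SamplingParams

# Placeholder: replace with this quant's Hugging Face repo id.
llm = LLM(model="<this-repo-id>")
params = SamplingParams(temperature=0.7, max_tokens=128)

# Generate a completion to verify the quantized weights load and run.
outputs = llm.generate(["Write a haiku about quantization."], params)
print(outputs[0].outputs[0].text)
```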