misc(readme): wording
README.md
```diff
@@ -5,12 +5,11 @@ base_model:
 - openai/whisper-large-v3
 tags:
 - inference_endpoints
-- openai
 - audio
 - transcription
 ---
 
-# Inference Endpoint -
+# Inference Endpoint - Multilingual Audio Transcription with Whisper models
 
 **Deploy OpenAI's Whisper Inference Endpoint to transcribe audio files to text in many languages**
 
@@ -65,7 +64,4 @@ curl http://localhost:8000/api/v1/audio/transcriptions \
 | Compute data type | `bfloat16` | Computations (matmuls, norms, etc.) are done using `bfloat16` precision |
 | KV cache data type | `float8` (e4m3) | Key-Value cache is stored on the GPU using `float8` (`float8_e4m3`) precision to save space |
 | PyTorch Compile | ✅ | Enable the use of `torch.compile` to further optimize model's execution with more optimizations |
-| CUDA Graphs | ✅ | Enable the use of so called "[CUDA Graphs](https://developer.nvidia.com/blog/cuda-graphs/)" to reduce overhead executing GPU computations |
-
-
-
+| CUDA Graphs | ✅ | Enable the use of so called "[CUDA Graphs](https://developer.nvidia.com/blog/cuda-graphs/)" to reduce overhead executing GPU computations |
```
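The feature table in the diff notes that storing the Key-Value cache in `float8_e4m3` (1 byte per element) instead of `bfloat16` (2 bytes per element) halves its GPU footprint. A back-of-envelope sketch of that saving, using illustrative decoder dimensions (the layer/head/sequence values below are hypothetical, not pulled from the actual `whisper-large-v3` config):

```python
# Back-of-envelope KV-cache size, comparing bfloat16 vs float8 storage.
# All model dimensions below are illustrative, not an exact Whisper config.

def kv_cache_bytes(num_layers, num_heads, head_dim, seq_len, bytes_per_elem):
    # 2 tensors per layer (K and V), each shaped [num_heads, seq_len, head_dim]
    return 2 * num_layers * num_heads * seq_len * head_dim * bytes_per_elem

layers, heads, head_dim, seq_len = 32, 20, 64, 448  # hypothetical values

bf16 = kv_cache_bytes(layers, heads, head_dim, seq_len, 2)  # bfloat16: 2 bytes/elem
fp8 = kv_cache_bytes(layers, heads, head_dim, seq_len, 1)   # float8_e4m3: 1 byte/elem

print(f"bfloat16 KV cache: {bf16 / 2**20:.1f} MiB")
print(f"float8   KV cache: {fp8 / 2**20:.1f} MiB ({bf16 / fp8:.0f}x smaller)")
```

Whatever the exact dimensions, the ratio is a constant 2x, which is the whole point of the `float8` KV-cache row in the table.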