Update README.md

README.md CHANGED

@@ -7,7 +7,7 @@ tags:
---
This is the quantized (INT8) ONNX variant of the [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference pipeline and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for One-Shot quantization.

- Model achieves 100% accuracy recovery on the STSB validation dataset vs. [dense ONNX variant](https://huggingface.co/zeroshot/bge-
+ Model achieves 100% accuracy recovery on the STSB validation dataset vs. [dense ONNX variant](https://huggingface.co/zeroshot/bge-large-en-v1.5-dense).

Current up-to-date list of sparse and quantized bge ONNX models:
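
For a quick local sanity check of an exported ONNX embedding model like this one, a generic ONNX Runtime snippet can be used. This is a minimal sketch, not the DeepSparse pipeline: the model/tokenizer directory path, the first output being `last_hidden_state`, and CLS pooling are assumptions.

```python
# Minimal sketch (assumptions noted above): run the exported ONNX embedding
# model with plain onnxruntime and a Hugging Face tokenizer.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

model_dir = "."  # assumed: local directory containing model.onnx and tokenizer files
tokenizer = AutoTokenizer.from_pretrained(model_dir)
session = ort.InferenceSession(f"{model_dir}/model.onnx")

enc = tokenizer(["An example sentence to embed."],
                padding=True, truncation=True, return_tensors="np")
# Feed only the inputs the exported graph actually declares.
feed = {i.name: enc[i.name] for i in session.get_inputs() if i.name in enc}
last_hidden = session.run(None, feed)[0]  # assumed: first output is last_hidden_state

# bge models use the [CLS] token vector, L2-normalized, as the sentence embedding.
embedding = last_hidden[:, 0]
embedding /= np.linalg.norm(embedding, axis=1, keepdims=True)
print(embedding.shape)  # (1, 1024) for bge-large
```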