---
tags:
- sparse sparsity quantized onnx embeddings int8
license: mit
language:
- en
---

# gte-base-sparse

This is a sparsified ONNX variant of the [gte-base](https://huggingface.co/thenlper/gte-base) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization (INT8) and 50% unstructured pruning.
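Below is a minimal inference sketch. It assumes the DeepSparse sentence-transformers integration (installed with something like `pip install "deepsparse[sentence_transformers]"`); the `DeepSparseSentenceTransformer` wrapper, the install extra, and the example sentences are assumptions to verify against the DeepSparse documentation for your version.

```python
import numpy as np
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

# Load the sparse-quantized ONNX model on the DeepSparse runtime.
# export=False because this repository already ships an exported ONNX graph.
model = DeepSparseSentenceTransformer("zeroshot/gte-base-sparse", export=False)

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "A fast auburn fox leaps above a sleepy canine.",
]

# encode() returns one dense embedding per input sentence.
embeddings = model.encode(sentences)
for sentence, embedding in zip(sentences, embeddings):
    print(sentence, "->", embedding.shape)

# Cosine similarity between the two sentence embeddings.
a, b = embeddings[0], embeddings[1]
print(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
```
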
Current list of sparse and quantized gte ONNX models:

| Links                                                                         | Sparsification Method             |
| ----------------------------------------------------------------------------- | --------------------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant)   | Quantization (INT8)               |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse)   | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant)     | Quantization (INT8)               |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant)   | Quantization (INT8)               |

For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).