---
license: mit
base_model:
- Xenova/distiluse-base-multilingual-cased-v2
pipeline_tag: feature-extraction
tags:
- feature-extraction
- sentence-embeddings
- sentence-transformers
- sentence-similarity
- semantic-search
- vector-search
- retrieval-augmented-generation
- multilingual
- cross-lingual
- low-resource
- merged-model
- combined-model
- tokenizer-embedded
- tokenizer-integrated
- standalone
- all-in-one
- quantized
- int8
- int8-quantization
- optimized
- efficient
- fast-inference
- low-latency
- lightweight
- small-model
- edge-ready
- arm64
- edge-device
- mobile-device
- on-device
- mobile-inference
- tablet
- smartphone
- embedded-ai
- onnx
- onnx-runtime
- onnx-model
- transformers
- MiniLM
- MiniLM-L12-v2
- paraphrase
- usecase-ready
- plug-and-play
- production-ready
- deployment-ready
- real-time
- fasttext
- distiluse
---
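The snippet below is a minimal sketch of computing sentence embeddings from the quantized ONNX export with `onnxruntime` in Python. The file name `model_quantized.onnx`, the input/output names, and loading the tokenizer via `transformers` are assumptions not confirmed by this card, so adjust them to the actual artifacts in the repository. If the tokenizer is merged into the graph (as the merged-model / tokenizer-embedded tags suggest), the model may accept raw strings directly and the external tokenization step below would be unnecessary.

```python
# Sketch only: assumes the repo ships "model_quantized.onnx" whose first output
# is token-level hidden states, and that the tokenizer loads via transformers.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Xenova/distiluse-base-multilingual-cased-v2")
session = ort.InferenceSession("model_quantized.onnx", providers=["CPUExecutionProvider"])

sentences = ["¿Dónde está la biblioteca?", "Where is the library?"]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")

# Feed only the inputs the graph actually declares (e.g. input_ids, attention_mask).
input_names = {i.name for i in session.get_inputs()}
feeds = {k: v for k, v in enc.items() if k in input_names}
token_embeddings = session.run(None, feeds)[0]  # (batch, seq_len, hidden)

# Mean pooling over non-padding tokens, then L2-normalize for cosine similarity.
mask = enc["attention_mask"][..., None].astype(np.float32)
embeddings = (token_embeddings * mask).sum(axis=1) / np.clip(mask.sum(axis=1), 1e-9, None)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

print(embeddings @ embeddings.T)  # cosine-similarity matrix between the sentences
```

Note that the original sentence-transformers distiluse pipeline applies a dense projection (768 to 512 dimensions) after pooling; if that layer is not baked into the exported graph, the pooled vectors from this sketch will differ from the reference sentence embeddings.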