Update README.md
README.md CHANGED

```diff
@@ -11,7 +11,7 @@ tags:
 - bag-of-words
 ---
 
-# opensearch-neural-sparse-encoding-v1
+# opensearch-neural-sparse-encoding-doc-v1
 
 This is a learned sparse retrieval model. It encodes documents into 30522-dimensional **sparse vectors**. For queries, it just uses a tokenizer and a weight look-up table to generate sparse vectors. Each non-zero dimension index corresponds to a token in the vocabulary, and the weight reflects the importance of that token. The similarity score is the inner product of the query and document sparse vectors. In real-world use cases, the search performance of opensearch-neural-sparse-encoding-v1 is comparable to BM25.
 
 This model is trained on the MS MARCO dataset.
```
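The description in the changed file can be made concrete with a small sketch: documents become sparse vectors over the 30522-token vocabulary, queries are built from a tokenizer plus a per-token weight look-up table, and relevance is the inner product of the two vectors. This is an illustrative sketch only, not code from this repository; the token ids, weights, and helper names (`encode_query`, `score`) are invented for the example.

```python
# Illustrative sketch of the scheme described in the model card (not code from
# this repository): documents are sparse vectors over the 30522-token vocabulary,
# queries come from a tokenizer plus a per-token weight look-up table, and the
# relevance score is the inner product of the two sparse vectors.
# All token ids, weights, and the look-up table below are invented for the example.

def encode_query(token_ids: list[int], weight_table: dict[int, float]) -> dict[int, float]:
    """Query encoding as described above: no model forward pass, just a weight look-up."""
    return {tid: weight_table[tid] for tid in token_ids if tid in weight_table}


def score(query_vec: dict[int, float], doc_vec: dict[int, float]) -> float:
    """Similarity = inner product of sparse vectors stored as {token_id: weight} dicts."""
    return sum(w * doc_vec.get(tid, 0.0) for tid, w in query_vec.items())


# Hypothetical data: token ids a tokenizer might emit for a query, a toy weight
# table, and a sparse vector such as the document encoder might produce.
weight_table = {2054: 1.2, 3793: 0.8, 7592: 0.5}
query_vec = encode_query([2054, 3793], weight_table)   # -> {2054: 1.2, 3793: 0.8}
doc_vec = {2054: 0.9, 3793: 0.4, 4248: 1.5}

print(score(query_vec, doc_vec))  # 1.2*0.9 + 0.8*0.4 = 1.4
```

Only dimensions present in both vectors contribute to the score, which is the usual reason learned sparse models like this one can be served from an inverted index rather than a dense vector store.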

