tomaarsen and alvarobartt committed
Commit 6cc9279 · verified · 1 parent: 0446e4e

Add `text-embeddings-inference` tag & snippet (#4)


- Add `text-embeddings-inference` tag & snippet (204c183c54e733765b1c4e623c0559f6656fcc2b)
- Update README.md (052c35f04192a1b73af6c7a24658fc5e65423497)
- embeddings models -> embedding models (17a08417bee3736f3fc8d01b27ef8281380607e9)


Co-authored-by: Alvaro Bartolome <[email protected]>

Files changed (1): README.md (+34 -3)
README.md CHANGED
@@ -6,6 +6,7 @@ tags:
 - feature-extraction
 - sentence-similarity
 - transformers
+- text-embeddings-inference
 pipeline_tag: sentence-similarity
 ---
 
@@ -44,9 +45,9 @@ from transformers import AutoTokenizer, AutoModel
 import torch
 
 
-#Mean Pooling - Take attention mask into account for correct averaging
+# Mean Pooling - Take attention mask into account for correct averaging
 def mean_pooling(model_output, attention_mask):
-    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
+    token_embeddings = model_output[0] # First element of model_output contains all token embeddings
     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
 
@@ -65,14 +66,44 @@ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tenso
 with torch.no_grad():
     model_output = model(**encoded_input)
 
-# Perform pooling. In this case, max pooling.
+# Perform pooling. In this case, mean pooling.
 sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
 
 print("Sentence embeddings:")
 print(sentence_embeddings)
 ```
 
+## Usage (Text Embeddings Inference (TEI))
 
+[Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference) is a blazing fast inference solution for text embedding models.
+
+- CPU:
+```bash
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
+    --model-id sentence-transformers/paraphrase-mpnet-base-v2 \
+    --pooling mean \
+    --dtype float16
+```
+
+- NVIDIA GPU:
+```bash
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cuda-latest \
+    --model-id sentence-transformers/paraphrase-mpnet-base-v2 \
+    --pooling mean \
+    --dtype float16
+```
+
+Send a request to `/v1/embeddings` to generate embeddings via the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create):
+```bash
+curl -s http://localhost:8080/v1/embeddings \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "sentence-transformers/paraphrase-mpnet-base-v2",
+        "input": "This is an example sentence"
+    }'
+```
+
+Or check the [Text Embeddings Inference API specification](https://huggingface.github.io/text-embeddings-inference/) instead.
 
 ## Full Model Architecture
 ```
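As a usage note beyond the diff itself: the curl call added above translates directly to Python. A minimal sketch, assuming a TEI container is already running on localhost:8080 as started by either docker command in the new section; the two input sentences are illustrative only:

```python
# Sketch (not part of this commit): call a locally running TEI container's
# OpenAI-compatible /v1/embeddings endpoint from Python.
import requests

response = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={
        "model": "sentence-transformers/paraphrase-mpnet-base-v2",
        "input": ["This is an example sentence", "Each sentence is converted"],
    },
    timeout=30,
)
response.raise_for_status()

# The OpenAI Embeddings API schema returns one item per input under "data".
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), "embeddings of dimension", len(embeddings[0]))
```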
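Because the endpoint implements the OpenAI Embeddings API, an OpenAI-compatible client pointed at the local base URL should also work. A sketch assuming the `openai` Python package (v1+); the `api_key` value is a placeholder, since TEI only enforces a key when one is configured on the server:

```python
# Sketch (assumption: openai>=1.0 installed, TEI running on localhost:8080).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
result = client.embeddings.create(
    model="sentence-transformers/paraphrase-mpnet-base-v2",
    input="This is an example sentence",
)
print(len(result.data[0].embedding))
```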