Update README.md
license: apache-2.0
---

This repository provides the [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) model in TFLite format.
You can use this model with the [AI Edge Cpp Example](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples/cpp).
You need to slightly modify the C++ pipeline: just send the image tensor as an additional input (see the Colab example below).
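
As a rough orientation step (a sketch, not code from this repo or the AI Edge example), you can inspect the exported TFLite file from Python to see which signatures and input tensors it exposes before wiring the image input into the C++ pipeline; the file name below is a placeholder:

```python
# Minimal sketch: inspect the exported TFLite model to see which signatures
# and input tensors it exposes, so you know where the image tensor from the
# modified C++ pipeline has to go.
# "smolvlm_256m_instruct.tflite" is a placeholder file name.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="smolvlm_256m_instruct.tflite")

# ai-edge-torch generative exports usually expose separate signatures
# (e.g. prefill and decode); list whatever this model actually provides.
print(interpreter.get_signature_list())

# Print name, shape and dtype of every input; the image input is the extra
# tensor the modified C++ pipeline needs to feed.
for detail in interpreter.get_input_details():
    print(detail["name"], detail["shape"], detail["dtype"])
```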

Please note that, at the moment, [AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples) VLMs are not supported
by the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference).
See, for example, the [qwen_vl model](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples/qwen_vl),
which was used as a reference for writing the SmolVLM-256M-Instruct conversion scripts.
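
For context on what such conversion scripts do, here is a hedged sketch of the generic AI Edge Torch convert/export flow applied to just the vision encoder; the wrapper class, attribute path, and input resolution are illustrative assumptions, and the real scripts follow the qwen_vl example rather than this simplified path.

```python
# Hedged sketch of the generic AI Edge Torch convert/export flow, applied to
# the vision encoder only. The actual SmolVLM conversion scripts follow the
# qwen_vl example; the attribute path and input resolution below are
# assumptions for illustration.
import torch
import ai_edge_torch
from transformers import AutoModelForVision2Seq

hf_model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-256M-Instruct"
).eval()


class VisionEncoder(torch.nn.Module):
    """Wraps the vision tower so the exported graph takes a plain image tensor."""

    def __init__(self, model):
        super().__init__()
        self.vision = model.model.vision_model  # assumed attribute path

    def forward(self, pixel_values):
        return self.vision(pixel_values).last_hidden_state


sample_inputs = (torch.randn(1, 3, 512, 512),)  # assumed input resolution
edge_model = ai_edge_torch.convert(VisionEncoder(hf_model).eval(), sample_inputs)
edge_model.export("smolvlm_vision_encoder.tflite")
```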