UNet is a machine learning model that produces a segmentation mask for an image. In its most basic use case, the model labels each pixel as foreground or background; more advanced usage assigns one of several class labels to each pixel. This version of the model was trained on the data from Kaggle's Carvana Image Masking Challenge (see https://www.kaggle.com/c/carvana-image-masking-challenge) and is used for vehicle segmentation.

This model is an implementation of Unet-Segmentation found [here](https://github.com/milesial/Pytorch-UNet).
This repository provides scripts to run Unet-Segmentation on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/unet_segmentation).
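To make the per-pixel output concrete, the short sketch below loads the source PyTorch implementation through `torch.hub` and turns the network's logits into a foreground/background mask. The `unet_carvana` entry point and its `scale` argument are taken from the milesial/Pytorch-UNet repository listed under References; treat them as assumptions and check that repository if the hub interface has changed.

```python
# Minimal sketch, assuming the torch.hub entry point published by the source
# repository (github.com/milesial/Pytorch-UNet); this is the floating-point
# PyTorch model, not the on-device runtime path.
import torch

model = torch.hub.load("milesial/Pytorch-UNet", "unet_carvana",
                       pretrained=True, scale=0.5)
model.eval()

# Dummy RGB input; replace with a real image tensor scaled to [0, 1].
image = torch.rand(1, 3, 640, 960)

with torch.no_grad():
    logits = model(image)        # (1, num_classes, H, W)
    mask = logits.argmax(dim=1)  # per-pixel label: 0 = background, 1 = foreground

print(mask.shape)  # torch.Size([1, 640, 960])
```

The compiled TFLite, QNN, and ONNX assets in the table below implement this same network for on-device execution.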
- Model size: 118 MB
- Number of output classes: 2 (foreground / background)

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Unet-Segmentation | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 153.929 ms | 6 - 442 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 151.064 ms | 10 - 30 MB | FP16 | NPU | [Unet-Segmentation.so](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.so) |
| Unet-Segmentation | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 155.224 ms | 16 - 18 MB | FP16 | NPU | [Unet-Segmentation.onnx](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx) |
| Unet-Segmentation | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 132.249 ms | 6 - 391 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 132.978 ms | 9 - 96 MB | FP16 | NPU | [Unet-Segmentation.so](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.so) |
| Unet-Segmentation | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 134.367 ms | 0 - 402 MB | FP16 | NPU | [Unet-Segmentation.onnx](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx) |
| Unet-Segmentation | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 142.642 ms | 6 - 442 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 136.843 ms | 10 - 11 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 147.599 ms | 6 - 442 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | SA8255 (Proxy) | SA8255P Proxy | QNN | 136.006 ms | 10 - 11 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 145.119 ms | 6 - 442 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | SA8775 (Proxy) | SA8775P Proxy | QNN | 143.044 ms | 10 - 11 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 157.28 ms | 6 - 457 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | SA8650 (Proxy) | SA8650P Proxy | QNN | 139.062 ms | 10 - 11 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 380.675 ms | 0 - 388 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 269.68 ms | 4 - 95 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 102.802 ms | 6 - 119 MB | FP16 | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 102.598 ms | 9 - 110 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 104.486 ms | 25 - 142 MB | FP16 | NPU | [Unet-Segmentation.onnx](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx) |
| Unet-Segmentation | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 135.807 ms | 9 - 9 MB | FP16 | NPU | Use Export Script |
| Unet-Segmentation | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 147.497 ms | 54 - 54 MB | FP16 | NPU | [Unet-Segmentation.onnx](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx) |
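The target models in the last column are hosted as files in this repository, so they can be fetched directly. A minimal sketch using `huggingface_hub`, with the repo id and filename taken from the links in the table:

```python
# Minimal sketch: download one of the precompiled assets listed above.
# Requires the huggingface_hub package; pick the filename matching your runtime.
from huggingface_hub import hf_hub_download

asset_path = hf_hub_download(
    repo_id="qualcomm/Unet-Segmentation",
    filename="Unet-Segmentation.tflite",  # or "Unet-Segmentation.so" / "Unet-Segmentation.onnx"
)
print(asset_path)
```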
## Installation
```bash
python -m qai_hub_models.models.unet_segmentation.export
```
```
Profiling Results
------------------------------------------------------------
Unet-Segmentation
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 153.9
Estimated peak memory usage (MB): [6, 442]
Total # Ops                     : 32
Compute Unit(s)                 : NPU (32 ops)
```
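For a quick functional check of the TFLite asset outside of AI Hub, the sketch below runs it with the standard TensorFlow Lite interpreter on the host CPU. It assumes a local copy of `Unet-Segmentation.tflite` (for example, downloaded as shown earlier) and an installed `tensorflow` package; it does not reproduce the NPU timings reported above.

```python
# Minimal sketch: host-side (CPU) sanity run of the compiled TFLite asset.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="Unet-Segmentation.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random input with the model's expected shape; replace with a preprocessed image.
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

print("output shape:", interpreter.get_tensor(out["index"]).shape)
```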
Get more details on Unet-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/unet_segmentation).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Unet-Segmentation can be found [here](https://github.com/milesial/Pytorch-UNet/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/milesial/Pytorch-UNet/blob/master/LICENSE).
## References
* [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
* [Source Model Implementation](https://github.com/milesial/Pytorch-UNet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).