Keypoint Detection
FBAGSTM committed
Commit 8095811 · verified · 1 Parent(s): 4a00402

Update ST Model Zoo

Files changed (1): README.md (+14 −17)
README.md CHANGED
@@ -1,10 +1,3 @@
- ---
- license: other
- license_name: sla0044
- license_link: >-
-   https://github.com/STMicroelectronics/stm32aimodelzoo/pose_estimation/yolov8n_pose/LICENSE.md
- pipeline_tag: keypoint-detection
- ---
  # Yolov8n_pose quantized

  ## **Use case** : `Pose estimation`
@@ -57,27 +50,32 @@ With an image resolution of NxM with K keypoints to detect :
  ## Metrics

  Measures are done with default STM32Cube.AI configuration with enabled input / output allocated option.
+ > [!CAUTION]
+ > All YOLOv8 hyperlinks in the tables below link to an external GitHub folder, which is subject to its own license terms:
+ https://github.com/stm32-hotspot/ultralytics/blob/main/LICENSE
+ Please also check the folder's README.md file for detailed information about its use and content:
+ https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/README.md

  ### Reference **NPU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)
  |Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB)| STM32Cube.AI version | STEdgeAI Core version |
  |----------|------------------|--------|-------------|------------------|------------------|---------------------|-------|----------------------|-------------------------|
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_192_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6 | 477.56 | 0.0 | 3247.89 | 10.0.0 | 2.0.0 |
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6 | 1135 | 0.0 | 3265.22 | 10.0.0 | 2.0.0 |
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_320_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6 | 2264.27 | 0.0 | 3263.72 | 10.0.0 | 2.0.0 |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_192_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6 | 477.56 | 0.0 | 3247.89 | 10.2.0 | 2.2.0 |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6 | 1135 | 0.0 | 3265.22 | 10.2.0 | 2.2.0 |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_320_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6 | 2264.27 | 0.0 | 3263.72 | 10.2.0 | 2.2.0 |

  ### Reference **NPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)
  | Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
  |--------|------------------|--------|-------------|------------------|------------------|---------------------|-------|----------------------|-------------------------|
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_192_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 24.46 | 40.89 | 10.0.0 | 2.0.0 |
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 35.79 | 27.95 | 10.0.0 | 2.0.0 |
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_320_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 51.35 | 19.48 | 10.0.0 | 2.0.0 |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_192_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 24.46 | 40.89 | 10.2.0 | 2.2.0 |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 35.79 | 27.95 | 10.2.0 | 2.2.0 |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_320_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 51.35 | 19.48 | 10.2.0 | 2.2.0 |


  ### Reference **MPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)
  Model | Format | Resolution | Quantization | Board | Execution Engine | Frequency | Inference time (ms) | %NPU | %GPU | %CPU | X-LINUX-AI version | Framework |
  |-----------|--------|------------|---------------|-------------------|------------------|-----------|---------------------|-------|-------|------|--------------------|-----------------------|
- | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 102.8 ms | 11.70 | 88.30 | 0 | v5.0.0 | OpenVX |
- | [YOLOv8n pose per tensor](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pt_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-tensor | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 17.57 ms | 86.79 | 13.21 | 0 | v5.0.0 | OpenVX |
+ | [YOLOv8n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 102.8 ms | 11.70 | 88.30 | 0 | v6.1.0 | OpenVX |
+ | [YOLOv8n pose per tensor](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolov8n_256_quant_pt_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-tensor | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 17.57 ms | 86.79 | 13.21 | 0 | v6.1.0 | OpenVX |

  ** **To get the most out of MP25 NPU hardware acceleration, please use per-tensor quantization**
@@ -129,5 +127,4 @@ Please refer to the [Ultralytics documentation](https://docs.ultralytics.com/tas
  timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
  biburl = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
  bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- 
+ }
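As a quick local check of the Format and Resolution columns quoted in the tables above, the sketch below (not part of the model card itself) loads one of the linked quantized `.tflite` files with the TensorFlow Lite interpreter and prints its input and output shapes and data types. It assumes TensorFlow (or the tflite-runtime package) is installed and that the model file has already been downloaded; the file name is only an example path.

```python
# Illustrative sketch (not from the ST Model Zoo card): inspect a locally
# downloaded quantized .tflite model and print its input / output details.
import tensorflow as tf

MODEL_PATH = "yolov8n_256_quant_pc_uf_pose_coco-st.tflite"  # example local path

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# For the 256x256x3 variant the input should report a 1x256x256x3 shape with
# an integer dtype, matching the Int8 / Resolution columns in the tables.
for detail in interpreter.get_input_details():
    print("input :", detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["shape"], detail["dtype"])
```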
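The MPU table recommends per-tensor quantization to make full use of the MP25 NPU. Under the same assumptions as the sketch above, the following snippet reports which scheme a downloaded model uses by counting the quantization scales attached to each tensor: per-channel quantization stores one scale per output channel, while per-tensor quantization stores a single scale per tensor.

```python
# Illustrative sketch (same assumptions as above): count per-channel vs
# per-tensor quantization scales in a downloaded .tflite model.
import tensorflow as tf

MODEL_PATH = "yolov8n_256_quant_pt_uf_pose_coco-st.tflite"  # example local path

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

per_channel = 0   # tensors carrying one scale per output channel
per_tensor = 0    # tensors carrying a single scale
for tensor in interpreter.get_tensor_details():
    scales = tensor["quantization_parameters"]["scales"]
    if scales.size > 1:
        per_channel += 1
    elif scales.size == 1:
        per_tensor += 1

print(f"tensors with per-channel scales : {per_channel}")
print(f"tensors with a per-tensor scale : {per_tensor}")
```

For the `_pc_` (per-channel) file names linked above, the first count should dominate among weight tensors; for the `_pt_` (per-tensor) file, most weight tensors should report a single scale.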