satrn
Conversion tool links:
For those interested in model conversion, you can try exporting the ONNX models or the axmodel with the conversion tools.
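As a rough illustration (not the official conversion flow), splitting SATRN into the exported backbone+encoder part can look like the sketch below. The submodule names (`backbone`, `encoder`), their forward signatures, the 32x100 input resolution, and the opset version are assumptions based on typical mmocr SATRN configs, so verify them against your checkpoint before exporting.

```python
# Conceptual sketch: wrap the backbone + encoder of an mmocr SATRN recognizer
# and export the pair to ONNX. `model` is assumed to be an already-built SATRN
# model whose backbone/encoder forward passes accept a plain image tensor /
# feature map (check your mmocr version).
import torch


class BackboneEncoder(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.backbone = model.backbone
        self.encoder = model.encoder

    def forward(self, img):
        feat = self.backbone(img)   # CNN feature map
        return self.encoder(feat)   # transformer-encoded features


def export_backbone_encoder(model, out_path="satrn_backbone_encoder.onnx"):
    dummy = torch.randn(1, 3, 32, 100)  # assumed SATRN input resolution
    torch.onnx.export(BackboneEncoder(model).eval(), dummy, out_path,
                      opset_version=14)
```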
Installation
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
git clone https://github.com/open-mmlab/mmocr.git
cd mmocr
mim install -e .
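Optionally, a quick sanity check that the environment installed correctly:

```python
# Print the installed PyTorch and MMOCR versions and check CUDA visibility.
import torch
import mmocr

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmocr:", mmocr.__version__)
```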
Supported Platforms
Speed measurements (under different NPU configurations) for the two parts of SATRN:
(1) backbone+encoder
(2) decoder
| | backbone+encoder (ms) | decoder (ms) |
|---|---|---|
| NPU1 | 20.494 | 2.648 |
| NPU2 | 9.785 | 1.504 |
| NPU3 | 6.085 | 1.384 |
How to use
Download all files from this repository to the device:
.
├── axmodel
│ ├── backbone_encoder.axmodel
│ └── decoder.axmodel
├── demo_text_recog.jpg
├── onnx
│ ├── satrn_backbone_encoder.onnx
│ └── satrn_decoder_sim.onnx
├── README.md
├── run_axmodel.py
├── run_model.py
└── run_onnx.py
Python environment requirements
1. pyaxengine
https://github.com/AXERA-TECH/pyaxengine
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl
2. satrn
Inference with the ONNX models
python run_onnx.py
input: demo_text_recog.jpg
output:
pred_text: STAR
score: [0.9384028315544128, 0.9574984908103943, 0.9993689656257629, 0.9994958639144897]
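For orientation, here is a minimal sketch of the two-stage ONNX pipeline using onnxruntime. The preprocessing (input size, normalization) and the assumption that the decoder consumes only the encoder output are guesses; run_onnx.py is the authoritative reference.

```python
# Minimal two-stage SATRN ONNX inference sketch (assumptions noted inline).
import cv2
import numpy as np
import onnxruntime as ort

# Load and preprocess the demo image. The 32x100 input size and the
# normalization below are assumptions; see run_onnx.py for the exact values.
img = cv2.imread("demo_text_recog.jpg")
img = cv2.resize(img, (100, 32)).astype(np.float32)
img = (img / 255.0 - 0.5) / 0.5
blob = img.transpose(2, 0, 1)[None]  # HWC -> NCHW

enc_sess = ort.InferenceSession("onnx/satrn_backbone_encoder.onnx",
                                providers=["CPUExecutionProvider"])
dec_sess = ort.InferenceSession("onnx/satrn_decoder_sim.onnx",
                                providers=["CPUExecutionProvider"])

# Stage 1: backbone + transformer encoder.
enc_in = enc_sess.get_inputs()[0].name
feat = enc_sess.run(None, {enc_in: blob})[0]

# Stage 2: decoder. Assumes the exported decoder takes only the encoder
# output; if it expects extra inputs, follow run_onnx.py instead.
dec_in = dec_sess.get_inputs()[0].name
logits = dec_sess.run(None, {dec_in: feat})[0]

# Greedy decoding; map the ids to characters with the model's dictionary.
pred_ids = logits.argmax(-1)
print(pred_ids)
```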
Inference with AX650 Host
Check the reference for more information.
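Based on the file list above, axmodel inference on the device is presumably driven by run_axmodel.py (an assumption; check the script for its required arguments). Before running it, you can confirm that the pyaxengine wheel installed correctly on the AX650 host:

```python
# Verify that the axengine module from the pyaxengine wheel is importable.
import axengine
print("axengine loaded from:", axengine.__file__)
```

Then run:
python run_axmodel.py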