Collapsible Linear Blocks for Super-Efficient Super Resolution
SESR is based on linear overparameterization of CNNs and yields an efficient model architecture for single-image super resolution (SISR). It was introduced in the paper Collapsible Linear Blocks for Super-Efficient Super Resolution. The official code for this work is available at https://github.com/ARM-software/sesr.

We have developed a modified version that is supported by AMD Ryzen AI.

You can use the raw model for super resolution. See the model hub to find all available models.
Follow the Ryzen AI Installation guide to prepare the environment for Ryzen AI, then run the following command to install the prerequisites for this model:

```bash
pip install -r requirements.txt
```
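After installation, you can check that ONNX Runtime sees the Vitis AI execution provider used by the inference scripts in this repository. This is a quick sanity check, not part of the official setup:

```python
import onnxruntime

# Lists the execution providers available in this environment;
# "VitisAIExecutionProvider" should appear once the Ryzen AI setup is complete.
print(onnxruntime.get_available_providers())
```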
Prepare the benchmark datasets (e.g., Set5, Set14) in the following directory structure:

```
└── dataset
    └── benchmark
        ├── Set5
        │   ├── HR
        │   │   ├── baby.png
        │   │   ├── ...
        │   └── LR_bicubic
        │       └── X2
        │           ├── babyx2.png
        │           ├── ...
        ├── Set14
        ├── ...
```
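Under this layout, each HR image in `Set5/HR` pairs with its bicubic low-resolution counterpart in `Set5/LR_bicubic/X2` (e.g., `baby.png` ↔ `babyx2.png`). Below is a minimal sketch of gathering these pairs; `collect_pairs` is an illustrative helper, not part of the repository:

```python
from pathlib import Path

def collect_pairs(benchmark_root, dataset="Set5", scale=2):
    """Pair HR images with their bicubic LR counterparts (illustrative helper)."""
    hr_dir = Path(benchmark_root) / dataset / "HR"
    lr_dir = Path(benchmark_root) / dataset / "LR_bicubic" / f"X{scale}"
    pairs = []
    for hr_path in sorted(hr_dir.glob("*.png")):
        lr_path = lr_dir / f"{hr_path.stem}x{scale}.png"  # e.g. baby.png -> babyx2.png
        if lr_path.exists():
            pairs.append((lr_path, hr_path))
    return pairs

# Example, following the layout above:
# pairs = collect_pairs("dataset/benchmark", "Set5", 2)
```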
See `one_image_inference.py` for how to run the model on a single image (the `tiling_inference` helper is provided in the repository; the import path shown below is assumed):

```python
import argparse

import cv2
import numpy as np
import onnxruntime

from utils import tiling_inference  # assumed import path; the helper is provided in this repository

parser = argparse.ArgumentParser(description='EDSR and MDSR')
parser.add_argument('--onnx_path', type=str, default='SESR_int8.onnx',
                    help='onnx path')
parser.add_argument('--image_path', default='test_data/test.png',
                    help='path of the input low-resolution image')
parser.add_argument('--output_path', default='test_data/sr.png',
                    help='path to save the super-resolved image')
parser.add_argument('--ipu', action='store_true',
                    help='use ipu')
parser.add_argument('--provider_config', type=str, default=None,
                    help='provider config path')
args = parser.parse_args()

if args.ipu:
    providers = ["VitisAIExecutionProvider"]
    provider_options = [{"config_file": args.provider_config}]
else:
    providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
    provider_options = None

onnx_file_name = args.onnx_path
image_path = args.image_path
output_path = args.output_path

ort_session = onnxruntime.InferenceSession(onnx_file_name, providers=providers,
                                           provider_options=provider_options)

# Read the image as an NCHW float32 tensor: (1, 3, H, W)
lr = cv2.imread(image_path)[np.newaxis, :, :, :].transpose((0, 3, 1, 2)).astype(np.float32)
# Run inference tile by tile; the arguments configure the tiling (see the repository's tiling_inference)
sr = tiling_inference(ort_session, lr, 8, (56, 56))
sr = np.clip(sr, 0, 255)
sr = sr.squeeze().transpose((1, 2, 0)).astype(np.uint8)
cv2.imwrite(output_path, sr)
```
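The `tiling_inference` helper runs the ONNX session on overlapping patches of the low-resolution input and stitches the upscaled patches back together, so the network only ever sees small, fixed-size inputs. Below is a minimal sketch of that idea, assuming an NCHW input, an integer overlap, a `(tile_h, tile_w)` tile size, and a x2 model; the actual implementation shipped with this repository may differ:

```python
import numpy as np

def tiling_inference_sketch(session, lr, overlap, tile_size, scale=2):
    """Run an SR ONNX session patch by patch and stitch the outputs (illustrative only)."""
    _, _, h, w = lr.shape
    tile_h, tile_w = tile_size
    input_name = session.get_inputs()[0].name

    sr = np.zeros((1, 3, h * scale, w * scale), dtype=np.float32)
    weight = np.zeros_like(sr)

    stride_h = tile_h - overlap
    stride_w = tile_w - overlap
    for y in range(0, h, stride_h):
        for x in range(0, w, stride_w):
            # Clamp the last tiles so they stay inside the image.
            y0 = min(y, max(h - tile_h, 0))
            x0 = min(x, max(w - tile_w, 0))
            patch = lr[:, :, y0:y0 + tile_h, x0:x0 + tile_w]
            out = session.run(None, {input_name: patch})[0]
            # Accumulate the upscaled patch and average overlapping regions.
            ys, xs = y0 * scale, x0 * scale
            sr[:, :, ys:ys + out.shape[2], xs:xs + out.shape[3]] += out
            weight[:, :, ys:ys + out.shape[2], xs:xs + out.shape[3]] += 1.0
    return sr / np.maximum(weight, 1.0)
```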
Run inference on an image of your own:

```bash
python one_image_inference.py --onnx_path SESR_int8.onnx --image_path /Path/To/Your/Image --ipu --provider_config Path/To/vaip_config.json
```

Note: `vaip_config.json` is located in the setup package of Ryzen AI (refer to the Installation guide).

Evaluate the quantized model on a benchmark dataset:

```bash
python test.py --onnx_path SESR_int8.onnx --data_test Set5 --ipu --provider_config Path/To/vaip_config.json
```
| Method | Scale | FLOPs | Set5 PSNR (dB) |
|---|---|---|---|
| SESR-S (float) | X2 | 10.22G | 37.21 |
| SESR-S (INT8) | X2 | 10.22G | 36.81 |
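The Set5 column reports PSNR between the super-resolved outputs and the ground-truth HR images. Below is a minimal sketch of the metric; evaluation details such as color space (RGB vs. Y channel), border cropping, and data range vary across SR codebases and may differ from the protocol used for the numbers above:

```python
import numpy as np

def psnr(sr, hr, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    sr = sr.astype(np.float64)
    hr = hr.astype(np.float64)
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)
```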
To cite the paper:

```bibtex
@misc{bhardwaj2022collapsible,
      title={Collapsible Linear Blocks for Super-Efficient Super Resolution},
      author={Kartikeya Bhardwaj and Milos Milosavljevic and Liam O'Neil and Dibakar Gope and Ramon Matas and Alex Chalfin and Naveen Suda and Lingchuan Meng and Danny Loh},
      year={2022},
      eprint={2103.09404},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}
```