Tim77777767 committed
Commit bd49b63 · 1 Parent(s): 8a28001

README modify

Files changed (1): README.md (+7 −6)
README.md CHANGED
@@ -9,10 +9,6 @@ tags:
 
 Paper: [Segformer++: Efficient Token-Merging Strategies for High-Resolution Semantic Segmentation](https://arxiv.org/abs/2405.14467)
 
-![image](docs/figures/segmentation.png)
-
-![image](docs/figures/pose.png)
-
 ## Abstract
 
 Utilizing transformer architectures for semantic segmentation of high-resolution images is hindered by the attention's quadratic computational complexity in the number of tokens. A solution to this challenge involves decreasing the number of tokens through token merging, which has exhibited remarkable enhancements in inference speed, training efficiency, and memory utilization for image classification tasks. In this paper, we explore various token merging strategies within the framework of the SegFormer architecture and perform experiments on multiple semantic segmentation and human pose estimation datasets. Notably, without model re-training, we, for example, achieve an inference acceleration of 61% on the Cityscapes dataset while maintaining the mIoU performance. Consequently, this paper facilitates the deployment of transformer-based architectures on resource-constrained devices and in real-time applications.
@@ -79,8 +75,9 @@ The weights of the Segformer (Original) model were used to get the inference res
 
 **Step 1.** Clone Repository
 
-git clone git clone https://huggingface.co/TimM77/SegformerPlusPlus
-
+```shell
+git clone https://huggingface.co/TimM77/SegformerPlusPlus
+```
 
 **Step 2.** Install required Packets
 
@@ -92,10 +89,14 @@ pip install .
 **Step 3.** Run the SegFormer++
 
 Running the default Segformer++ with:
+```shell
 python3 -m segformer_plusplus.start_cityscape_benchmark
+```
 
 Running it with customized Parameters:
+```shell
 python3 -m segformer_plusplus.start_cityscape_benchmark --backbone [b1-b5] --head [bsm_hq, bsm_fast, n2d_2x2] --checkpoint [Path/To/Checkpoint]
+```
 
 
 ## Citation
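Taken together, the updated README instructions amount to the following end-to-end sketch. The repository URL, module name, and option values are taken from the diff above; the `cd` step and the concrete `--backbone b2 --head bsm_hq` choice are illustrative assumptions, and the checkpoint path is a placeholder to substitute:

```shell
# Step 1: clone the model repository (URL from the diff)
git clone https://huggingface.co/TimM77/SegformerPlusPlus
cd SegformerPlusPlus  # assumed clone directory name

# Step 2: install the package into the current environment
pip install .

# Step 3: run the default Cityscapes benchmark
python3 -m segformer_plusplus.start_cityscape_benchmark

# Or with explicit settings, e.g. a b2 backbone and the bsm_hq merging
# head (checkpoint path is a placeholder):
python3 -m segformer_plusplus.start_cityscape_benchmark \
    --backbone b2 --head bsm_hq --checkpoint path/to/checkpoint.pth
```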