Update README.md
fixed and added links
README.md
@@ -18,9 +18,9 @@ This model takes in an image of a fish and segments out traits, as described [be
`se_resnext50_32x4d-a260b3a4.pth` is a pretrained ConvNets for pytorch ResNeXt used by BGNN-trait-segmentation.
See [github.com/Cadene/pretrained-models.pytorch#resnext](https://github.com/Cadene/pretrained-models.pytorch#resnext) for documentation about the source.

-The segmentation model was first trained on ImageNet ([Deng et al., 2009](10.1109/CVPR.2009.5206848)), and then the model was fine-tuned on a specific set of image data relevant to the domain: [Illinois Natural History Survey Fish Collection](https://fish.inhs.illinois.edu/) (INHS Fish).
+The segmentation model was first trained on ImageNet ([Deng et al., 2009](https://doi.org/10.1109/CVPR.2009.5206848)), and then the model was fine-tuned on a specific set of image data relevant to the domain: [Illinois Natural History Survey Fish Collection](https://fish.inhs.illinois.edu/) (INHS Fish).
The Feature Pyramid Network (FPN) architecture was used for fine-tuning, since it is a CNN-based architecture designed to handle multi-scale feature maps (Lin et al., 2017: [IEEE](10.1109/CVPR.2017.106), [arXiv](arXiv:1612.03144)).
-The FPN uses SE-ResNeXt as the base network (Hu et al., 2018: [IEEE](10.1109/CVPR.2018.00745), [arXiv](arXiv:1709.01507)).
+The FPN uses SE-ResNeXt as the base network (Hu et al., 2018: [IEEE](https://doi.org/10.1109/CVPR.2018.00745), [arXiv](arXiv:1709.01507)).


### Model Description
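The README text in this hunk does not name the training framework, but the components it lists (an SE-ResNeXt50 32x4d encoder pretrained on ImageNet, wrapped in an FPN decoder) match what the `segmentation_models_pytorch` package exposes, and that package obtains its ImageNet weights for this encoder from the same Cadene pretrained-models source as `se_resnext50_32x4d-a260b3a4.pth`. A minimal sketch, assuming that package is used and with an illustrative class count (the hunk does not state the number of trait classes):

```python
# Minimal sketch: an FPN segmentation model with an SE-ResNeXt50 (32x4d) encoder
# pretrained on ImageNet, built with segmentation_models_pytorch.
# The class count (12) is illustrative, not taken from this README.
import segmentation_models_pytorch as smp

model = smp.FPN(
    encoder_name="se_resnext50_32x4d",  # SE-ResNeXt base network (Hu et al., 2018)
    encoder_weights="imagenet",         # ImageNet weights (se_resnext50_32x4d-a260b3a4.pth)
    classes=12,                         # hypothetical number of trait classes
)
```

The model returns raw per-pixel class scores; applying a softmax over the class dimension yields per-trait probability maps.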
@@ -103,7 +103,7 @@ See instructions for use [here](https://github.com/hdr-bgnn/BGNN-trait-segmentat

## Training Details

-The image data were annotated using [SlicerMorph](https://slicermorph.github.io/) (Rolfe et al., 2021) by collaborators W. Dahdul and K. Diamond.
+The image data were annotated using [SlicerMorph](https://slicermorph.github.io/) ([Rolfe et al., 2021](https://doi.org/10.1111/2041-210X.13669)) by collaborators W. Dahdul and K. Diamond.

### Training Data

@@ -130,7 +130,7 @@ During the fine-tuning procedure, the encoder of the pre-trained model was froze
We only tuned the decoder weights of our segmentation model during this fine-tuning procedure.

We then trained the prepared model for 120 epochs, updating the weights using dice loss as a measure of similarity between the predicted and ground-truth segmentation.
-The Adam optimizer (Kingma & Ba, 2014) with a small learning rate (1e-4) was used to update the model weights.
+The Adam optimizer ([Kingma & Ba, 2014](https://doi.org/10.48550/arXiv.1412.6980)) with a small learning rate (1e-4) was used to update the model weights.


#### Preprocessing [optional]
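To make the fine-tuning recipe in this hunk concrete, here is a minimal sketch of the loop it describes: freeze the pretrained encoder, optimize only the decoder weights with Adam at a 1e-4 learning rate, and minimize a dice loss for 120 epochs. It assumes the `segmentation_models_pytorch` model from the earlier sketch and a hypothetical `train_loader` of (image, mask) batches; the actual BGNN-trait-segmentation training script may differ in its details:

```python
# Sketch of the frozen-encoder fine-tuning described above; class count and
# data loader are illustrative, not taken from this README.
import torch
import segmentation_models_pytorch as smp
from segmentation_models_pytorch.losses import DiceLoss

model = smp.FPN(encoder_name="se_resnext50_32x4d",
                encoder_weights="imagenet", classes=12)

# Freeze the pretrained SE-ResNeXt encoder; only decoder/head weights stay trainable.
for param in model.encoder.parameters():
    param.requires_grad = False

dice_loss = DiceLoss(mode="multiclass")  # masks are integer class labels
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),  # decoder weights only
    lr=1e-4,
)

model.train()
for epoch in range(120):
    for images, masks in train_loader:   # hypothetical DataLoader of (image, mask) batches
        optimizer.zero_grad()
        logits = model(images)           # (N, classes, H, W) raw scores
        loss = dice_loss(logits, masks)  # masks: (N, H, W) long tensor
        loss.backward()
        optimizer.step()
```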