---
license: cc-by-4.0
pipeline_tag: image-segmentation
library_name: transformers
datasets:
- GlobalWheat/GWFSS_v1.0
metrics:
- mean_iou
base_model:
- nvidia/segformer-b1-finetuned-ade-512-512
tags:
  - scientific
  - research
  - agricultural research
  - wheat
  - segmentation
  - crop phenotyping
  - global wheat
  - crop
  - plant
  - canopy
  - field
source: https://doi.org/10.1016/j.plaphe.2025.100084

---

## Usage
```python
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
import torch
import torch.nn.functional as F
from PIL import Image

repo = "GlobalWheat/GWFSS_model_v1.0"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo).eval()

img = Image.open("example.jpg").convert("RGB")
inputs = processor(images=img, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_classes, H/4, W/4)
    # Upsample logits back to the original image resolution
    up = F.interpolate(logits, size=(img.height, img.width), mode="bilinear", align_corners=False)
pred = up.argmax(1)[0].cpu().numpy()  # (H, W) array of class IDs
```
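
The class ID of each pixel can be mapped to an organ name via the label mapping stored in the checkpoint config. The snippet below, continuing from the code above, is a minimal sketch that prints this mapping and renders a colored overlay; the random palette is purely illustrative and is not the official GWFSS color scheme.

```python
import numpy as np

# Class ID -> organ name mapping shipped with the checkpoint
id2label = model.config.id2label
print(id2label)

# Illustrative palette: one arbitrary RGB color per class (not the official GWFSS colors)
rng = np.random.default_rng(0)
palette = rng.integers(0, 255, size=(len(id2label), 3), dtype=np.uint8)

color_mask = palette[pred]  # (H, W, 3) RGB mask
overlay = (0.5 * np.asarray(img) + 0.5 * color_mask).astype(np.uint8)
Image.fromarray(overlay).save("overlay.png")
```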

This checkpoint is based on the Hugging Face SegFormer implementation, which may differ slightly from the model used in our paper. The paper version was implemented with MMSegmentation; the corresponding MMSegmentation weights are also available in this repository (see the sketch below).
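
For reference, a minimal sketch of loading the MMSegmentation weights is shown below. It assumes MMSegmentation 1.x and uses placeholder config/checkpoint filenames (`segformer_gwfss_config.py`, `gwfss_mmseg.pth`); substitute the actual files provided in this repository.

```python
# Minimal sketch, assuming MMSegmentation 1.x is installed.
# The config and checkpoint filenames below are placeholders; use the actual
# files from this repository.
from mmseg.apis import init_model, inference_model

config_file = "segformer_gwfss_config.py"  # placeholder config name
checkpoint_file = "gwfss_mmseg.pth"        # placeholder checkpoint name

model = init_model(config_file, checkpoint_file, device="cuda:0")
result = inference_model(model, "example.jpg")
pred = result.pred_sem_seg.data[0].cpu().numpy()  # (H, W) class IDs
```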

## Related Paper
This model and the GWFSS dataset are associated with the following paper:
The Global Wheat Full Semantic Organ Segmentation (GWFSS) Dataset
https://doi.org/10.1016/j.plaphe.2025.100084