---
library_name: transformers
tags:
  - medical
license: mit
language:
  - en
base_model:
  - ShahRishi/OphthaBERT-v2
pipeline_tag: text-classification
---

# OphthaBERT Glaucoma Classifier

A binary classifier that extracts glaucoma diagnoses from unstructured clinical notes.

---

## Model Details

### Model Description

This model is a fine-tuned variant of OphthaBERT, which was pretrained on over 2 million clinical notes. It has been fine-tuned for binary classification on labeled clinical notes from the Massachusetts Eye and Ear Infirmary.

- **Finetuned from model:** [ShahRishi/OphthaBERT-v2](https://huggingface.co/ShahRishi/OphthaBERT-v2)

---

## Uses

We suggest using this model in a zero-shot manner to generate a binary glaucoma label for each clinical note. For continued training on limited data, we recommend freezing the first 10 layers of the model.
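The layer-freezing recommendation above can be sketched as follows. To keep the snippet self-contained, it builds a small randomly initialized BERT-style model in place of the real checkpoint; in practice you would load the fine-tuned model with `from_pretrained`. The attribute paths (`model.bert.embeddings`, `model.bert.encoder.layer`) assume a BERT architecture, which OphthaBERT follows.

```python
from transformers import BertConfig, BertForSequenceClassification

# Small stand-in model; in practice:
# model = AutoModelForSequenceClassification.from_pretrained(
#     "ShahRishi/OphthaBERT-v2-glaucoma-binary")
config = BertConfig(
    num_hidden_layers=12,
    hidden_size=128,
    num_attention_heads=4,
    intermediate_size=256,
    num_labels=2,
)
model = BertForSequenceClassification(config)

# Freeze the embeddings and the first 10 encoder layers
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:10]:
    for param in layer.parameters():
        param.requires_grad = False

# Only the last encoder layers and the classification head remain trainable
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

Freezing the lower layers preserves the clinical-language representations learned during pretraining while letting the upper layers and the classification head adapt to the limited labeled data.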

### Direct Use

Use the code below to get started with the model:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and the base tokenizer
model = AutoModelForSequenceClassification.from_pretrained("ShahRishi/OphthaBERT-v2-glaucoma-binary")
tokenizer = AutoTokenizer.from_pretrained("ShahRishi/OphthaBERT-v2")

# Example: classify a clinical note
clinical_note = "Example clinical note text..."
inputs = tokenizer(clinical_note, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

# Take the highest-scoring class; see the model's id2label mapping for class names
predicted_label = torch.argmax(outputs.logits, dim=-1).item()
```