---
library_name: transformers
tags:
- medical
license: mit
language:
- en
base_model:
- ShahRishi/OphthaBERT-v2
pipeline_tag: text-classification
---
# OphthaBERT Glaucoma Classifier
Binary classification for glaucoma diagnosis extraction from unstructured clinical notes.
---
## Model Details
### Model Description
This model is a fine-tuned variant of OphthaBERT, which was pretrained on over 2 million clinical notes. It was subsequently fine-tuned for binary glaucoma classification on labeled clinical notes from the Massachusetts Eye and Ear Infirmary.
- **Finetuned from model:** [OphthaBERT-v2](https://huggingface.co/ShahRishi/OphthaBERT-v2)
---
## Uses
We suggest using this model in a zero-shot manner to generate binary glaucoma labels for each clinical note. For continued training on limited labeled data, we recommend freezing the first 10 layers of the model (see the sketch below).
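A minimal sketch of that layer freezing, assuming the model exposes the standard BERT module layout from `transformers` (`model.bert.embeddings` and `model.bert.encoder.layer`); adjust the attribute names if the architecture differs:
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("ShahRishi/OphthaBERT-v2-glaucoma-binary")

# Freeze the embeddings and the first 10 encoder layers; only the remaining
# encoder layers and the classification head stay trainable.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:10]:
    for param in layer.parameters():
        param.requires_grad = False
```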
### Direct Use
Use the code below to get started with the model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("ShahRishi/OphthaBERT-v2-glaucoma-binary")
tokenizer = AutoTokenizer.from_pretrained("ShahRishi/OphthaBERT-v2-glaucoma-binary")

# Example: classify a clinical note
clinical_note = "Example clinical note text..."
inputs = tokenizer(clinical_note, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Predicted class index; see model.config.id2label for the label mapping
predicted_class = outputs.logits.argmax(dim=-1).item()
```
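The predicted index follows the mapping stored in the model configuration; check `model.config.id2label` to confirm which index corresponds to a positive glaucoma diagnosis.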