PII & De-Identification
Collection: Models for extracting PII entities and de-identifying clinical text, with support for HIPAA and GDPR compliance. 146 items.
Italian PII Detection Model | 355M Parameters | Open Source
OpenMed-PII-Italian-SuperMedical-Large-355M-v1 is a transformer-based token classification model fine-tuned for Personally Identifiable Information (PII) detection in Italian text. This model identifies and classifies 54 types of sensitive information including names, addresses, social security numbers, medical record numbers, and more.
Evaluated on the Italian subset of the AI4Privacy dataset:
| Metric | Score |
|---|---|
| Micro F1 | 0.9663 |
| Precision | 0.9640 |
| Recall | 0.9686 |
| Macro F1 | 0.9577 |
| Weighted F1 | 0.9653 |
| Accuracy | 0.9950 |
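Micro F1 pools true/false positives across all 54 entity types, while macro F1 averages per-class scores, so rare classes with weaker performance pull macro F1 below micro F1 (0.9577 vs. 0.9663 here). A self-contained sketch of the difference, using made-up per-class counts rather than the model's actual confusion data:

```python
# Micro vs. macro F1 on hypothetical per-class (tp, fp, fn) counts.
# These numbers are illustrative only, not from the model's evaluation.
counts = {
    "FIRSTNAME": (950, 20, 30),  # frequent class, strong performance
    "BIC":       (40, 5, 10),    # rare class, weaker performance
}

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Macro F1: unweighted mean of per-class F1 scores.
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro F1: pool counts across classes, then compute a single F1.
tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
micro_f1 = f1(tp, fp, fn)

print(f"macro F1: {macro_f1:.4f}, micro F1: {micro_f1:.4f}")
```

The rare class drags the macro average down much more than the pooled micro score.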
Ranking among OpenMed Italian PII models on the same benchmark:

| Rank | Model | F1 | Precision | Recall |
|---|---|---|---|---|
| 1 | OpenMed-PII-Italian-SuperClinical-Large-434M-v1 | 0.9728 | 0.9707 | 0.9750 |
| 2 | OpenMed-PII-Italian-EuroMed-210M-v1 | 0.9685 | 0.9663 | 0.9707 |
| 3 | OpenMed-PII-Italian-ClinicalBGE-568M-v1 | 0.9678 | 0.9653 | 0.9703 |
| 4 | OpenMed-PII-Italian-SnowflakeMed-Large-568M-v1 | 0.9678 | 0.9653 | 0.9702 |
| 5 | OpenMed-PII-Italian-BigMed-Large-560M-v1 | 0.9671 | 0.9645 | 0.9697 |
| 6 | OpenMed-PII-Italian-SuperMedical-Large-355M-v1 | 0.9663 | 0.9640 | 0.9686 |
| 7 | OpenMed-PII-Italian-mClinicalE5-Large-560M-v1 | 0.9659 | 0.9633 | 0.9684 |
| 8 | OpenMed-PII-Italian-NomicMed-Large-395M-v1 | 0.9656 | 0.9631 | 0.9682 |
| 9 | OpenMed-PII-Italian-ClinicalBGE-Large-335M-v1 | 0.9605 | 0.9575 | 0.9635 |
| 10 | OpenMed-PII-Italian-SuperClinical-Base-184M-v1 | 0.9596 | 0.9573 | 0.9620 |
This model detects 54 PII entity types organized into categories:
| Entity | Description |
|---|---|
| ACCOUNTNAME | Account name |
| BANKACCOUNT | Bank account number |
| BIC | Bank identifier code (BIC) |
| BITCOINADDRESS | Bitcoin address |
| CREDITCARD | Credit card number |
| CREDITCARDISSUER | Credit card issuer |
| CVV | Card verification value (CVV) |
| ETHEREUMADDRESS | Ethereum address |
| IBAN | International bank account number (IBAN) |
| IMEI | IMEI device identifier |
| ... | and 12 more |
| Entity | Description |
|---|---|
| AGE | Age |
| DATEOFBIRTH | Date of birth |
| EYECOLOR | Eye color |
| FIRSTNAME | First name |
| GENDER | Gender |
| HEIGHT | Height |
| LASTNAME | Last name |
| MIDDLENAME | Middle name |
| OCCUPATION | Occupation |
| PREFIX | Name prefix |
| ... | and 1 more |
| Entity | Description |
|---|---|
| EMAIL | Email address |
| PHONE | Phone number |
| Entity | Description |
|---|---|
| BUILDINGNUMBER | Building number |
| CITY | City |
| COUNTY | County |
| GPSCOORDINATES | GPS coordinates |
| ORDINALDIRECTION | Ordinal direction |
| SECONDARYADDRESS | Secondary address (e.g. apartment, suite) |
| STATE | State |
| STREET | Street |
| ZIPCODE | ZIP / postal code |
| Entity | Description |
|---|---|
| JOBDEPARTMENT | Job department |
| JOBTITLE | Job title |
| ORGANIZATION | Organization |
| Entity | Description |
|---|---|
| AMOUNT | Amount |
| CURRENCY | Currency |
| CURRENCYCODE | Currency code |
| CURRENCYNAME | Currency name |
| CURRENCYSYMBOL | Currency symbol |
| Entity | Description |
|---|---|
| DATE | Date |
| TIME | Time |
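Downstream handling often differs by category (e.g. financial identifiers are always masked, while dates may only be generalized). One way to branch on category instead of hard-coding all 54 labels is a lookup table; the category names below are illustrative, not part of the model's label set, and cover only a subset of the labels:

```python
# Hypothetical category map for a few of the 54 labels (extend as needed).
CATEGORY = {
    "ACCOUNTNAME": "financial", "IBAN": "financial",
    "CREDITCARD": "financial", "CVV": "financial",
    "FIRSTNAME": "personal", "LASTNAME": "personal",
    "DATEOFBIRTH": "personal", "AGE": "personal",
    "EMAIL": "contact", "PHONE": "contact",
    "CITY": "location", "STREET": "location", "ZIPCODE": "location",
    "ORGANIZATION": "organization",
    "AMOUNT": "currency", "CURRENCYCODE": "currency",
    "DATE": "datetime", "TIME": "datetime",
}

def category_of(entity_group: str) -> str:
    """Map a predicted entity label to a broad handling category."""
    return CATEGORY.get(entity_group, "other")
```

A redaction pipeline can then dispatch on `category_of(ent["entity_group"])` rather than enumerating individual labels.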
```python
from transformers import pipeline

# Load the PII detection pipeline
ner = pipeline(
    "ner",
    model="OpenMed/OpenMed-PII-Italian-SuperMedical-Large-355M-v1",
    aggregation_strategy="simple",
)

text = """
Paziente Marco Bianchi (nato il 15/03/1985, CF: BNCMRC85C15H501Z) è stato visitato oggi.
Contatto: [email protected], Telefono: +39 333 123 4567.
Indirizzo: Via Garibaldi 42, 20121 Milano.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```
```python
def redact_pii(text, entities):
    """Replace each detected PII span with its entity-type label."""
    # Process entities from right to left so earlier offsets stay valid
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification
redacted_text = redact_pii(text, entities)
print(redacted_text)
```
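Plain redaction erases the distinction between different people, which clinical de-identification often needs to preserve. A sketch of consistent pseudonymization, assigning each distinct value a stable numbered placeholder (it assumes entities carry the `start`, `end`, `entity_group`, and `word` keys produced by the pipeline above):

```python
def pseudonymize(text, entities):
    """Replace each distinct PII value with a stable numbered placeholder,
    so repeated mentions of the same value map to the same surrogate."""
    mapping, counters = {}, {}
    # First pass, left to right: assign one placeholder per distinct value
    for ent in sorted(entities, key=lambda e: e["start"]):
        key = (ent["entity_group"], ent["word"])
        if key not in mapping:
            counters[ent["entity_group"]] = counters.get(ent["entity_group"], 0) + 1
            mapping[key] = f"[{ent['entity_group']}_{counters[ent['entity_group']]}]"
    # Second pass, right to left: substitute without invalidating offsets
    result = text
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        result = result[:ent["start"]] + mapping[(ent["entity_group"], ent["word"])] + result[ent["end"]:]
    return result
```

Both mentions of "Marco" then become `[FIRSTNAME_1]`, while a second distinct name would become `[FIRSTNAME_2]`, keeping coreference intact.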
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-Italian-SuperMedical-Large-355M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Paziente Marco Bianchi (nato il 15/03/1985, CF: BNCMRC85C15H501Z) è stato visitato oggi.",
    "Contatto: [email protected], Telefono: +39 333 123 4567.",
]

inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)
```
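Without `aggregation_strategy="simple"`, the raw predictions are per-token BIO labels (mapped back to strings via `model.config.id2label`) that still need merging into entity spans. A minimal, illustrative merger over `(token, label)` pairs; it is simplified in that it joins tokens with spaces and ignores subword handling:

```python
def merge_bio(tokens_with_labels):
    """Merge per-token BIO labels into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, label in tokens_with_labels:
        if label.startswith("B-"):
            # A new entity begins; flush any span in progress
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            # Continuation of the current entity
            current_tokens.append(token)
        else:
            # "O" or a mismatched I- tag ends the current span
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        spans.append((current_type, " ".join(current_tokens)))
    return spans
```

For example, `[("Via", "B-STREET"), ("Garibaldi", "I-STREET")]` merges into a single `("STREET", "Via Garibaldi")` span.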
**Important:** This model is intended as an assistive tool, not a replacement for human review.
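One practical way to operationalize that review step is confidence triage: auto-redact high-confidence detections and flag the rest for a human. A sketch, where the 0.90 threshold is an arbitrary example to be tuned on validation data:

```python
def triage(entities, threshold=0.90):
    """Split detections into auto-redactable vs. flagged for human review."""
    auto = [e for e in entities if e["score"] >= threshold]
    review = [e for e in entities if e["score"] < threshold]
    return auto, review

# Illustrative detections (values and scores are made up)
detections = [
    {"entity_group": "IBAN", "word": "IT60X0542811101000000123456", "score": 0.99},
    {"entity_group": "CITY", "word": "Milano", "score": 0.62},
]
auto, review = triage(detections)
```

Here the IBAN would be redacted automatically while the low-confidence city mention is routed to a reviewer.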
```bibtex
@misc{openmed-pii-2026,
  title     = {OpenMed-PII-Italian-SuperMedical-Large-355M-v1: Italian PII Detection Model},
  author    = {OpenMed Science},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/OpenMed/OpenMed-PII-Italian-SuperMedical-Large-355M-v1}
}
```
Base model: FacebookAI/roberta-large