Khmer TrOCR 📝🇰🇭

This model is a fine-tuned version of microsoft/trocr-base-stage1 for recognizing Khmer personal names, trained on synthetic image-text pairs of rendered names.

📌 Model Details

  • Architecture: VisionEncoderDecoderModel (ViT image encoder + RoBERTa-initialized text decoder; see the loading snippet after this list)
  • Base Model: microsoft/trocr-base-stage1
  • Language: Khmer (km)
  • Task: OCR (Optical Character Recognition) — specifically for Khmer script
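
As a quick, optional check (not part of the original card), the encoder/decoder split of the base checkpoint can be inspected after loading it; only the microsoft/trocr-base-stage1 checkpoint named above is assumed here.

from transformers import VisionEncoderDecoderModel

# Load the base checkpoint listed above and inspect its two halves
base = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-stage1")
print(base.encoder.__class__.__name__)  # ViT-style image encoder
print(base.decoder.__class__.__name__)  # RoBERTa-initialized TrOCR text decoder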

🧠 Training

The model was fine-tuned on a synthetic dataset of rendered Khmer names using a Khmer Unicode font (KhmerOS_muol.ttf). Each image is paired with a corresponding text label for supervised training.

  • Input: RGB image (512x64) of a Khmer name
  • Output: Unicode Khmer text
  • Dataset: Custom-generated dataset of Khmer names (10,000+ samples)
  • Preprocessing: Images were rendered from text using PIL and paired with their ground-truth labels (a rendering sketch is shown below)
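
The training dataset itself is not published here. The sketch below shows one way such image-text pairs could be rendered with PIL, using the font and 512x64 image size described above; the sample names, output folder, and file-naming scheme are illustrative assumptions only.

import os
from PIL import Image, ImageDraw, ImageFont

# Hypothetical name list and output folder (the real dataset is not published)
names = ["សុខា", "វណ្ណា"]
output_dir = "khmer_name_images"
os.makedirs(output_dir, exist_ok=True)

# Khmer Unicode font used for rendering (font path is an assumption)
font = ImageFont.truetype("KhmerOS_muol.ttf", 40)

labels = []
for i, name in enumerate(names, start=1):
    # White 512x64 canvas, matching the input size described above
    image = Image.new("RGB", (512, 64), color="white")
    draw = ImageDraw.Draw(image)
    # Note: correct Khmer glyph shaping may require Pillow built with libraqm
    draw.text((10, 5), name, font=font, fill="black")
    filename = f"khmer_name_{i:05d}.png"
    image.save(os.path.join(output_dir, filename))
    labels.append((filename, name))  # ground-truth image-text pair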

🚀 Usage

Install required packages

pip install transformers torch pillow

Python Inference Example

import torch
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Load model and processor
model = VisionEncoderDecoderModel.from_pretrained("your_username/khmer-trocr-ocr")
processor = TrOCRProcessor.from_pretrained("your_username/khmer-trocr-ocr")

# Load and process image
image = Image.open("khmer_name_images/khmer_name_00001.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
pixel_values = pixel_values.to(device)

# Generate prediction
generated_ids = model.generate(pixel_values)
predicted_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print("🔤 Predicted:", predicted_text)

📊 Evaluation

Coming soon — evaluation on a labeled test set with CER/WER metrics.
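
In the meantime, here is a minimal sketch of how CER/WER could be computed with the jiwer package (pip install jiwer); the reference labels and predictions are placeholders. Since Khmer text is written without spaces between words, CER is generally the more informative of the two metrics.

import jiwer

# Placeholder ground-truth labels and model predictions (replace with real data)
references = ["សុខា", "វណ្ណា"]
hypotheses = ["សុខា", "វណ្ណា"]

cer = jiwer.cer(references, hypotheses)  # character error rate
wer = jiwer.wer(references, hypotheses)  # word error rate
print(f"CER: {cer:.4f}  WER: {wer:.4f}")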

✅ Applications

  • Khmer ID OCR

📄 License

Apache-2.0 — Free to use for research and commercial applications.

🤝 Acknowledgements

