davanstrien (HF Staff) committed
Commit 89f58ba · Parent: cdbefb7

Add Nanonets OCR script with vLLM support


- UV script for document OCR using the Nanonets-OCR-s model
- Features: LaTeX equations, tables, document structure
- Supports batch processing with vLLM
- Includes HF Jobs examples for running in the cloud
- Adds proper CUDA checks and error handling

Files changed (1)
  1. README.md +134 -0
README.md ADDED
---
viewer: false
tags: [uv-script, ocr, vision-language-model, document-processing]
---

# UV Scripts - OCR Collection

This repository contains UV scripts for OCR (Optical Character Recognition) tasks using various models.

## 🚧 Early Testing Version

This is an early version for testing. Documentation and examples will be expanded based on feedback.

## Available Scripts

### 1. Nanonets OCR (`nanonets-ocr.py`)

Converts document images to structured markdown using the Nanonets-OCR-s model.

**Features:**
- LaTeX equation recognition
- Table extraction and formatting
- Document structure preservation
- Batch processing with vLLM

**Requirements:**
- GPU with CUDA support
- Python 3.11+

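The script performs its own CUDA checks before processing (see the commit notes). If you want to confirm your environment qualifies before kicking off a run, a minimal pre-flight check might look like the following sketch, which assumes PyTorch is installed (it ships as a vLLM dependency):

```python
# Minimal pre-flight sketch (not part of the script itself): confirm a
# CUDA-capable GPU is visible before launching an OCR run. Assumes PyTorch
# is installed, which vLLM pulls in as a dependency.
import sys

import torch

if not torch.cuda.is_available():
    sys.exit("No CUDA device detected - nanonets-ocr.py requires a GPU.")

print(f"Using GPU: {torch.cuda.get_device_name(0)}")
```
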
## Quick Test

To test the script with a sample dataset:

```bash
# Test with 5 samples from a document dataset
uv run nanonets-ocr.py \
    davanstrien/scientific-papers-small \
    my-test-ocr-output \
    --max-samples 5

# Or if you have a specific dataset with images
uv run nanonets-ocr.py \
    your-username/your-image-dataset \
    your-username/test-ocr-results \
    --image-column image \
    --max-samples 10
```

## Example Output

The script adds a `markdown` column to your dataset containing the extracted text in markdown format, preserving:
- Headers and document structure
- Tables with proper formatting
- Mathematical equations in LaTeX
- Lists and other formatting

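Once a run completes, you can load the output dataset and inspect the new column with the `datasets` library. A minimal sketch, reusing the example output repo name from the Quick Test above:

```python
# Load the OCR results pushed by the script and inspect the extracted markdown.
# "your-username/test-ocr-results" is the example output repo from the Quick Test.
from datasets import load_dataset

ds = load_dataset("your-username/test-ocr-results", split="train")
print(ds.column_names)          # original columns plus the new "markdown" column
print(ds[0]["markdown"][:500])  # preview the first document's extracted text
```
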
## GPU Memory

If you encounter GPU memory issues, adjust the batch size and memory utilization:

```bash
uv run nanonets-ocr.py input output \
    --batch-size 4 \
    --gpu-memory-utilization 0.5
```

## Running on HuggingFace Jobs

Run this script on HF infrastructure without needing your own GPU!

### Command Line

```bash
# Basic usage
hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    input-dataset-id output-dataset-id

# Full example with options
hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    NationalLibraryOfScotland/Scottish-School-Exam-Papers \
    your-username/scottish-exams-ocr \
    --image-column image \
    --max-model-len 16384 \
    --batch-size 16

# With HF token for private repos
hf jobs uv run --flavor l4x1 --secret HF_TOKEN=$HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    input-dataset output-dataset \
    --private

# With vLLM Docker image for optimized performance
hf jobs uv run \
    --flavor l4x1 \
    --image vllm/vllm-openai:latest \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    input-dataset output-dataset \
    --batch-size 32
```

### Python API

```python
from huggingface_hub import run_uv_job

# Run the OCR script
job = run_uv_job(
    "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
    args=[
        "input-dataset-id",
        "output-dataset-id",
        "--image-column", "image",
        "--max-model-len", "16384"
    ],
    flavor="l4x1",
    secrets={"HF_TOKEN": "your-token"}  # if needed
)
```

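Rather than hardcoding a token, one option is to read it from the environment before passing it in, mirroring the `--secret HF_TOKEN=$HF_TOKEN` pattern in the CLI examples. A minimal sketch:

```python
import os

# Pull the token from the environment instead of embedding it in the script.
secrets = {"HF_TOKEN": os.environ["HF_TOKEN"]}
```
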
### Recommended GPU Flavors

- **`l4x1`** (24GB) - Recommended for most OCR tasks
- **`t4-small`** (16GB) - For smaller batches or lower resolution
- **`a10g-small`** (24GB) - Alternative to L4
- **`l40sx1`** (48GB) - For very large batches
- **`a100-large`** (80GB) - Maximum performance

## Coming Soon

- Additional OCR models (RolmOCR, OlmOCR)
- Performance benchmarks
- More examples and use cases