davanstrien HF Staff committed on
Commit 400d1bf · 1 Parent(s): 89f58ba

Refactor README.md for clarity and quick start instructions for OCR scripts

Files changed (1):
  1. README.md +64 -89
README.md CHANGED
@@ -3,100 +3,68 @@ viewer: false
  tags: [uv-script, ocr, vision-language-model, document-processing]
  ---

- # UV Scripts - OCR Collection
-
- This repository contains UV scripts for OCR (Optical Character Recognition) tasks using various models.
-
- ## 🚧 Early Testing Version
-
- This is an early version for testing. Documentation and examples will be expanded based on feedback.
-
- ## Available Scripts
-
- ### 1. Nanonets OCR (`nanonets-ocr.py`)
-
- Converts document images to structured markdown using the Nanonets-OCR-s model.
-
- **Features:**
- - LaTeX equation recognition
- - Table extraction and formatting
- - Document structure preservation
- - Batch processing with vLLM
-
- **Requirements:**
- - GPU with CUDA support
- - Python 3.11+
-
- ## Quick Test
-
- To test the script with a sample dataset:

  ```bash
- # Test with 5 samples from a document dataset
- uv run nanonets-ocr.py \
-   davanstrien/scientific-papers-small \
-   my-test-ocr-output \
-   --max-samples 5
-
- # Or if you have a specific dataset with images
- uv run nanonets-ocr.py \
-   your-username/your-image-dataset \
-   your-username/test-ocr-results \
-   --image-column image \
-   --max-samples 10
  ```

- ## Example Output
-
- The script adds a `markdown` column to your dataset containing the extracted text in markdown format, preserving:
- - Headers and document structure
- - Tables with proper formatting
- - Mathematical equations in LaTeX
- - Lists and other formatting
-
- ## GPU Memory
-
- If you encounter GPU memory issues, adjust the batch size and memory utilization:
-
- ```bash
- uv run nanonets-ocr.py input output \
-   --batch-size 4 \
-   --gpu-memory-utilization 0.5
- ```
-
- ## Running on HuggingFace Jobs
-
- Run this script on HF infrastructure without needing your own GPU!
-
- ### Command Line

  ```bash
- # Basic usage
  hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
-   input-dataset-id output-dataset-id

- # Full example with options
- hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
-   NationalLibraryOfScotland/Scottish-School-Exam-Papers \
-   your-username/scottish-exams-ocr \
    --image-column image \
    --max-model-len 16384 \
-   --batch-size 16

- # With HF token for private repos
- hf jobs uv run --flavor l4x1 --secret HF_TOKEN=$HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
-   input-dataset output-dataset \
-   --private
-
- # With vLLM Docker image for optimized performance
- hf jobs uv run \
-   --flavor l4x1 \
-   --image vllm/vllm-openai:latest \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
-   input-dataset output-dataset \
    --batch-size 32
  ```

@@ -105,30 +73,37 @@ hf jobs uv run \
  ```python
  from huggingface_hub import run_uv_job

- # Run the OCR script
  job = run_uv_job(
      "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
-     args=[
-         "input-dataset-id",
-         "output-dataset-id",
-         "--image-column", "image",
-         "--max-model-len", "16384"
-     ],
-     flavor="l4x1",
-     secrets={"HF_TOKEN": "your-token"}  # if needed
  )
  ```

- ### Recommended GPU Flavors
-
- - **`l4x1`** (24GB) - Recommended for most OCR tasks
- - **`t4-small`** (16GB) - For smaller batches or lower resolution
- - **`a10g-small`** (24GB) - Alternative to L4
- - **`l40sx1`** (48GB) - For very large batches
- - **`a100-large`** (80GB) - Maximum performance
-
- ## Coming Soon
-
- - Additional OCR models (RolmOCR, OlmOCR)
- - Performance benchmarks
- - More examples and use cases
 
  tags: [uv-script, ocr, vision-language-model, document-processing]
  ---

+ # OCR UV Scripts

+ Ready-to-run OCR scripts that work with `uv run` - no setup required!

+ ## 🚀 Quick Start with HuggingFace Jobs

+ Run OCR on any dataset without a GPU:

  ```bash
+ hf jobs uv run --flavor l4x1 \
+   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
+   your-input-dataset your-output-dataset
  ```

+ That's it! The script will:
+
+ - Process all images in your dataset
+ - Add OCR results as a new `markdown` column
+ - Push the results to a new dataset (see the snippet below for a quick check)
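+
+ A quick way to inspect the pushed results (a minimal sketch; the repo id is a placeholder for whatever you passed as your-output-dataset, and `markdown` is the column the script adds):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the output dataset and preview the new "markdown" column
+ ds = load_dataset("your-username/your-output-dataset", split="train")
+ print(ds.column_names)          # original columns plus "markdown"
+ print(ds[0]["markdown"][:500])  # first 500 characters of extracted text
+ ```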
 
 
+ ## 📋 Available Scripts

+ ### Nanonets OCR (`nanonets-ocr.py`)

+ State-of-the-art document OCR using [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) that handles:

+ - 📐 **LaTeX equations** - Mathematical formulas preserved
+ - 📊 **Tables** - Extracted as HTML
+ - 📝 **Document structure** - Headers, lists, formatting maintained
+ - 🖼️ **Images** - Captions and descriptions included
+ - ☑️ **Forms** - Checkboxes rendered as ☐/☑
+ ## 💻 Usage Examples

+ ### Run on HuggingFace Jobs (Recommended)
+
+ No GPU? No problem! Run on HF infrastructure:

  ```bash
+ # Basic OCR job
  hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
+   your-input-dataset your-output-dataset

+ # Real example with UFO dataset 🛸
+ hf jobs uv run \
+   --flavor a10g-large \
+   --image vllm/vllm-openai:latest \
+   -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
+   davanstrien/ufo-ColPali \
+   your-username/ufo-ocr \
    --image-column image \
    --max-model-len 16384 \
+   --batch-size 64

+ # Private dataset with custom settings
+ hf jobs uv run --flavor l40sx1 \
+   -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
+   private-input private-output \
+   --private \
    --batch-size 32
  ```

  ```python
  from huggingface_hub import run_uv_job

  job = run_uv_job(
      "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
+     args=["input-dataset", "output-dataset", "--batch-size", "16"],
+     flavor="l4x1"
  )
  ```
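+
+ For private repos, a token can also be passed from Python (a sketch reusing the `secrets` parameter from the removed example above; dataset names are placeholders):
+
+ ```python
+ from huggingface_hub import run_uv_job
+
+ job = run_uv_job(
+     "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
+     args=["private-input", "private-output", "--private"],
+     flavor="l4x1",
+     secrets={"HF_TOKEN": "your-token"},  # token for private input/output repos
+ )
+ ```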

+ ### Run Locally (Requires GPU)
+
+ ```bash
+ # Clone and run
+ git clone https://huggingface.co/datasets/uv-scripts/ocr
+ cd ocr
+ uv run nanonets-ocr.py input-dataset output-dataset
+
+ # Or run directly from the URL
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
+   input-dataset output-dataset
+ ```

+ ## 🎛️ Configuration Options

+ | Option                     | Default | Description                 |
+ | -------------------------- | ------- | --------------------------- |
+ | `--image-column`           | `image` | Column containing images    |
+ | `--batch-size`             | `8`     | Images processed per batch  |
+ | `--max-model-len`          | `8192`  | Maximum context length      |
+ | `--max-tokens`             | `4096`  | Maximum output tokens       |
+ | `--gpu-memory-utilization` | `0.7`   | Fraction of GPU memory used |
+ | `--split`                  | `train` | Dataset split to process    |
+ | `--max-samples`            | `None`  | Limit samples (for testing) |
+ | `--private`                | `False` | Make output dataset private |
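+
+ For example, a quick smoke test combining the options above (dataset names are placeholders):
+
+ ```bash
+ # Limit to a few samples and reduce memory pressure while testing
+ uv run nanonets-ocr.py your-input-dataset your-test-output \
+   --max-samples 10 \
+   --batch-size 4 \
+   --gpu-memory-utilization 0.5
+ ```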

+ More OCR VLM scripts coming soon! Stay tuned for updates!