mohamedmak123 committed
Commit 839cfff · verified · 1 Parent(s): ea2c281

Upload README.md with huggingface_hub

Files changed (1): README.md (+89 −19)

README.md CHANGED
@@ -1,21 +1,91 @@
  ---
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: markdown
-     dtype: string
-   - name: inference_info
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 246058.0
-     num_examples: 1
-   download_size: 216875
-   dataset_size: 246058.0
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ viewer: false
+ tags:
+ - ocr
+ - document-processing
+ - nanonets
+ - markdown
+ - uv-script
+ - generated
  ---
+
+ # Document OCR using Nanonets-OCR-s
+
+ This dataset contains markdown-formatted OCR results produced by Nanonets-OCR-s from the images in `/content/my_dataset` (a local dataset path).
+
+ ## Processing Details
+
+ - **Source Dataset**: `/content/my_dataset` (local path)
+ - **Model**: [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s)
+ - **Number of Samples**: 1
+ - **Processing Time**: 4.6 minutes
+ - **Processing Date**: 2025-08-11 09:33 UTC
+
+ ### Configuration
+
+ - **Image Column**: `image`
+ - **Output Column**: `markdown`
+ - **Dataset Split**: `train`
+ - **Batch Size**: 1
+ - **Max Model Length**: 8,192 tokens
+ - **Max Output Tokens**: 4,096
+ - **GPU Memory Utilization**: 80.0%
+
+ ## Model Information
+
+ Nanonets-OCR-s is a state-of-the-art document OCR model that excels at:
+ - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
+ - 📊 **Tables** - Extracted and formatted as HTML
+ - 📝 **Document structure** - Headers, lists, and formatting maintained
+ - 🖼️ **Images** - Captions and descriptions included in `<img>` tags
+ - ☑️ **Forms** - Checkboxes rendered as ☐/☑
+ - 🔖 **Watermarks** - Wrapped in `<watermark>` tags
+ - 📄 **Page numbers** - Wrapped in `<page_number>` tags
+
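Because the model emits structured tags alongside the plain markdown, downstream code may want to extract or strip them. A minimal sketch using the tag conventions listed above (the regexes and function name are illustrative assumptions, not part of the OCR script):

```python
import re

def split_ocr_tags(markdown: str) -> dict:
    """Separate <page_number> and <watermark> spans from Nanonets-style output."""
    page_numbers = re.findall(r"<page_number>(.*?)</page_number>", markdown, re.S)
    watermarks = re.findall(r"<watermark>(.*?)</watermark>", markdown, re.S)
    # Drop the tagged spans, leaving plain markdown text
    text = re.sub(r"<(page_number|watermark)>.*?</\1>", "", markdown, flags=re.S)
    return {"text": text.strip(), "page_numbers": page_numbers, "watermarks": watermarks}

sample = "# Title\nBody text\n<watermark>DRAFT</watermark>\n<page_number>3</page_number>"
result = split_ocr_tags(sample)
```

The same pattern extends to the `<img>` tags if image captions need to be pulled out separately.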
+ ## Dataset Structure
+
+ The dataset contains all original columns plus:
+ - `markdown`: The extracted text in markdown format with preserved structure
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
+
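Since `inference_info` tracks every OCR pass, a later run over the same dataset would append a record rather than overwrite the column's history. A stdlib-only sketch of that bookkeeping (the record values and the second model id are illustrative; only the `column_name`/`model_id` field names come from this card):

```python
import json

# A record shaped like this dataset's inference_info column (values illustrative)
existing = json.dumps([
    {"column_name": "markdown", "model_id": "nanonets/Nanonets-OCR-s"},
])

# A hypothetical second OCR pass appends its own record
records = json.loads(existing)
records.append({"column_name": "markdown_v2", "model_id": "another-org/other-ocr-model"})
updated = json.dumps(records)
```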
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+ import json
+
+ # Load the dataset (replace "{output_dataset_id}" with this dataset's repo id)
+ dataset = load_dataset("{output_dataset_id}", split="train")
+
+ # Access the markdown text
+ for example in dataset:
+     print(example["markdown"])
+     break
+
+ # View all OCR models applied to this dataset
+ inference_info = json.loads(dataset[0]["inference_info"])
+ for info in inference_info:
+     print(f"Column: {info['column_name']} - Model: {info['model_id']}")
+ ```
+
+ ## Reproduction
+
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) Nanonets OCR script:
+
+ ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
+     /content/my_dataset \
+     <output-dataset> \
+     --image-column image \
+     --batch-size 1 \
+     --max-model-len 8192 \
+     --max-tokens 4096 \
+     --gpu-memory-utilization 0.8
+ ```
+
+ ## Performance
+
+ - **Processing Speed**: ~0.004 images/second (1 image in 4.6 minutes)
+ - **GPU Configuration**: vLLM with 80% GPU memory utilization
+
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)