pierreguillou committed
Commit 33b568d · 1 Parent(s): 5d79ee8

Update README.md

Files changed (1)
  1. README.md +29 -14
README.md CHANGED
@@ -58,16 +58,6 @@ Until today, the dataset can be downloaded through direct links or as a dataset
 
  Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
 
- ### About PDFs languages
-
- Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
- "We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features."
-
- ### About PDFs categories distribution
-
- Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
- "The pages in DocLayNet can be grouped into **six distinct categories**, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes."
-
  ### Processing into a format facilitating its use by HF notebooks
 
  These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of hard-disk space. This could limit experimentation for people with low resources.
@@ -78,17 +68,29 @@ At last, in order to use Hugging Face notebooks on fine-tuning layout models lik
 
  For all these reasons, I decided to process the DocLayNet dataset:
  - into 3 datasets of different sizes:
- - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test)
+ - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test)
  - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10,000 document images (6,910 train, 648 val, 499 test)
- - DocLayNet large with full dataset (to be done)
+ - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100,000 document images (69,103 train, 6,480 val, 4,994 test)
  - with associated texts,
  - and in a format facilitating their use by HF notebooks.
 
  *Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*
 
+ ### About PDFs languages
+
+ Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
+ "We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features."
+
+ ### About PDFs categories distribution
+
+ Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
+ "The pages in DocLayNet can be grouped into **six distinct categories**, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes."
+
  ### Download & overview
 
- The size of DocLayNet large is about 99% of the DocLayNet dataset (random selection within the train, val and test files, respectively).
+ The size of DocLayNet large is about 100% of the DocLayNet dataset (random selection within the train, val and test files, respectively).
+
+ **WARNING** The following code downloads DocLayNet large, but it cannot run to completion in Google Colab because of the disk space needed for the cache data and the CPU RAM needed to download the data (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB).
 
  ```
  # !pip install -q datasets
@@ -99,7 +101,20 @@ dataset_large = load_dataset("pierreguillou/DocLayNet-large")
 
  # overview of dataset_large
 
-
+ DatasetDict({
+     train: Dataset({
+         features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
+         num_rows: 69103
+     })
+     validation: Dataset({
+         features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
+         num_rows: 6480
+     })
+     test: Dataset({
+         features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
+         num_rows: 4994
+     })
+ })
  ```
 
  ### Annotated bounding boxes
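
Given the warning above about the roughly 120 GB cache that DocLayNet large needs, readers with limited resources have two lighter options: load the ~1% DocLayNet small subset, or try streaming the large dataset so that nothing is cached locally. The sketch below assumes both repositories load with the standard `datasets` API as shown in the README; streaming support for DocLayNet large is an assumption, not something the README confirms.

```
# Minimal sketch: two lower-resource ways to experiment with DocLayNet.
# Assumption: both repos load with the standard `datasets` API; streaming
# support for DocLayNet large is assumed, not confirmed by the README.
from datasets import load_dataset

# Option 1: the ~1% subset (691 train / 64 val / 49 test pages) fits in Colab.
dataset_small = load_dataset("pierreguillou/DocLayNet-small")
print(dataset_small)

# Option 2: stream DocLayNet large lazily instead of caching ~120 GB on disk.
dataset_large_stream = load_dataset(
    "pierreguillou/DocLayNet-large",
    split="train",
    streaming=True,
)
first_page = next(iter(dataset_large_stream))
print(sorted(first_page.keys()))
```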
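
The DatasetDict overview above lists the per-page features ('texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', etc.). As a quick orientation, here is a sketch of how one might inspect a single page with those features; the exact types (the image being a PIL object, the texts, boxes and categories being parallel lists) are assumptions based on common `datasets` conventions, not guarantees from the README.

```
# Sketch: peek at one annotated page of DocLayNet base.
# Field names come from the DatasetDict overview above; their exact types
# (PIL image, parallel lists of texts/boxes/categories) are assumptions.
from datasets import load_dataset

dataset = load_dataset("pierreguillou/DocLayNet-base")
page = dataset["train"][0]

print(page["doc_category"])                     # one of the six document categories
print(page["coco_width"], page["coco_height"])  # page size the boxes refer to (assumed)
print(len(page["texts"]))                       # number of annotated text segments
print(page["texts"][0])                         # first text segment
print(page["bboxes_block"][0])                  # its block-level bounding box (assumed aligned)
print(page["categories"][0])                    # its layout label id (assumed aligned)

page["image"].save("doclaynet_page_0.png")      # save the page image (assumed PIL.Image)
```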