helizac committed

Commit 47dead2 · verified · 1 Parent(s): ac861f4

Update README.md

Files changed (1)
  1. README.md +102 -106

README.md CHANGED
---
license: mit
library_name: transformers
tags:
- dots_ocr
- image-to-text
- ocr
- document-parse
- layout
- table
- formula
- quantized
- 4-bit
base_model: rednote-hilab/dots.ocr
---

# dots.ocr-4bit: A 4-bit Quantized Version

This repository contains a 4-bit quantized version of the powerful `dots.ocr` model by the **Rednote HiLab** team. The quantization was performed using `bitsandbytes` (NF4 precision), providing significant memory savings with minimal performance loss and making this state-of-the-art model accessible on consumer-grade GPUs.
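For reference, the snippet below is a minimal sketch of how an NF4 quantization like this one can be produced and uploaded with `transformers` and `bitsandbytes`; the exact configuration used for this repository (compute dtype, double quantization) is an assumption, not the original recipe.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 quantization config -- the specific options below are assumptions
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the original model with on-the-fly 4-bit quantization
model = AutoModelForCausalLM.from_pretrained(
    "rednote-hilab/dots.ocr",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# The quantized weights can then be pushed to a Hub repository
# model.push_to_hub("your-username/dots.ocr-4bit")
```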

This work is entirely a derivative of the original model. All credit for the model architecture, training, and groundbreaking research goes to the original authors. A huge thank you to them for open-sourcing their work.

* **Original Model:** [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
* **Original GitHub:** [https://github.com/rednote-hilab/dots.ocr](https://github.com/rednote-hilab/dots.ocr)
* **Live Demo (Original):** [https://dotsocr.xiaohongshu.com](https://dotsocr.xiaohongshu.com)

## Model Description (from original authors)

> **dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.

## How to Use This 4-bit Version

First, ensure you have the necessary dependencies installed. Because this model uses custom code, you **must** clone the original repository and install it:

```bash
# It's recommended to clone the original repo to get all utility scripts
git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr

# Install the custom code and dependencies
pip install -e .
pip install torch transformers accelerate bitsandbytes peft sentencepiece
```
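
Optionally, a quick environment check (an addition to the original instructions) to confirm the key packages import and a GPU is visible before loading the model:

```python
# Optional sanity check before loading the model
import torch
import transformers
import bitsandbytes

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
print("CUDA available:", torch.cuda.is_available())
```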

You can then use the 4-bit model with the following Python script. Note the inclusion of generation parameters (`repetition_penalty`, `do_sample`, etc.), which are recommended to prevent potential looping with the quantized model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

# This assumes the utility script is available in your environment
from qwen_vl_utils import process_vision_info

# Replace with your Hugging Face username
MODEL_ID = "[YOUR-HF-USERNAME]/dots.ocr-4bit"

print("Loading 4-bit quantized model from the Hub...")
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
)
print("✅ Model and processor loaded successfully!")

# --- Inference ---
image_path = "demo/demo_image1.jpg"  # Make sure you have this image
prompt_text = "Parse all layout info, both detection and recognition"

messages = [
    {"role": "user", "content": [{"type": "image", "image": image_path}, {"type": "text", "text": prompt_text}]}
]

# Prepare inputs using the official workflow; process_vision_info loads the image from the path in `messages`
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, padding=True, return_tensors="pt"
).to(model.device)

# Generate with sampling and a repetition penalty to prevent looping with the 4-bit model
generated_ids = model.generate(
    **inputs, max_new_tokens=4096, do_sample=True, temperature=0.6, top_p=0.9, repetition_penalty=1.15
)

# Trim the prompt tokens and decode only the newly generated text
generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
output_text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

print("\n--- Inference Result ---")
print(output_text)
```
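
As a quick way to see the effect of quantization (an optional addition, not part of the original card), you can print the loaded model's memory footprint; the exact figure will depend on your environment:

```python
# Optional: report how much memory the quantized weights occupy
footprint_gb = model.get_memory_footprint() / 1024**3
print(f"Model memory footprint: {footprint_gb:.2f} GB")
```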

## License

This model is released under the MIT License, same as the original model.