helizac committed
Commit e2b53a5 · verified · 1 Parent(s): 642ff82

Update README.md

Files changed (1)
  1. README.md +4 -55
README.md CHANGED
@@ -1,49 +1,3 @@
- ---
- license: mit
- library_name: transformers
- tags:
- - dots_ocr
- - image-to-text
- - ocr
- - document-parse
- - layout
- - table
- - formula
- - quantized
- - 4-bit
- base_model: rednote-hilab/dots.ocr
- ---
-
- # dots.ocr-4bit: A 4-bit Quantized Version
-
- This repository contains a 4-bit quantized version of the powerful `dots.ocr` model by **Rednote HiLab**. The quantization was performed using `bitsandbytes` (NF4 precision), providing significant memory and speed improvements with minimal performance loss, making this state-of-the-art model accessible on consumer-grade GPUs.
-
- This work is entirely a derivative of the original model. All credit for the model architecture, training, and groundbreaking research goes to the original authors. A huge thank you to them for open-sourcing their work.
-
- * **Original Model:** [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
- * **Original GitHub:** [https://github.com/rednote-hilab/dots.ocr](https://github.com/rednote-hilab/dots.ocr)
- * **Live Demo (Original):** [https://dotsocr.xiaohongshu.com](https://dotsocr.xiaohongshu.com)
-
- ## Model Description (from original authors)
- > **dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
-
- ## How to Use This 4-bit Version
-
- First, ensure you have the necessary dependencies installed. Because this model uses custom code, you **must** clone the original repository and install it.
-
- ```bash
- # It's recommended to clone the original repo to get all utility scripts
- git clone https://github.com/rednote-hilab/dots.ocr.git
- cd dots.ocr
-
- # Install the custom code and dependencies
- pip install -e .
- pip install torch transformers accelerate bitsandbytes peft sentencepiece
- ```
-
- You can then use the 4-bit model with the following Python script. Note the inclusion of generation parameters (`repetition_penalty`, `do_sample`, etc.), which are recommended to prevent potential looping with the quantized model.
-
- ```python
  import torch
  from transformers import AutoModelForCausalLM, AutoProcessor
  from PIL import Image
@@ -55,8 +9,8 @@ MODEL_ID = "helizac/dots.ocr-4bit"
 
  local_model_path = snapshot_download(repo_id=MODEL_ID)
 
- model = AutoModelForCausalLM.from_pretrained(local_model_path, device_map="auto", trust_remote_code=True, torch_dtype=torch.bfloat16)
- processor = AutoProcessor.from_pretrained(local_model_path, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(local_model_path, device_map="auto", trust_remote_code=True, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
+ processor = AutoProcessor.from_pretrained(local_model_path, trust_remote_code=True, use_fast=True)
 
  image_path = "test.jpg"
  image = Image.open(image_path)
@@ -82,14 +36,9 @@ text = processor.apply_chat_template(messages, tokenize=False, add_generation_pr
  image_inputs, _ = process_vision_info(messages)
  inputs = processor(text=[text], images=image_inputs, padding=True, return_tensors="pt").to(model.device)
 
- generated_ids = model.generate(**inputs, max_new_tokens=1048, do_sample=True, temperature=0.6, top_p=0.9, repetition_penalty=1.15)
+ generated_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9, repetition_penalty=1.15)
 
  generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
  output_text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
 
- print(output_text)
- ```
-
- ## License
-
- This model is released under the MIT License, same as the original model.
+ print(output_text)
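The updated loader requests `attn_implementation="flash_attention_2"`, which only works when the `flash-attn` package is installed and the GPU supports it. Below is a minimal defensive sketch using the standard `transformers` loading API; the fallback to `"sdpa"` is an illustration, not something this commit does.

```python
import importlib.util

import torch
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

local_model_path = snapshot_download(repo_id="helizac/dots.ocr-4bit")

# Use FlashAttention-2 only when the flash_attn package is importable and a
# GPU is present; otherwise fall back to PyTorch's SDPA kernels. The fallback
# choice is an assumption for robustness, not part of this commit.
attn_impl = (
    "flash_attention_2"
    if importlib.util.find_spec("flash_attn") is not None and torch.cuda.is_available()
    else "sdpa"
)

model = AutoModelForCausalLM.from_pretrained(
    local_model_path,
    device_map="auto",
    trust_remote_code=True,       # dots.ocr ships custom modeling code
    torch_dtype=torch.bfloat16,
    attn_implementation=attn_impl,
)
```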
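The hunks above call `processor.apply_chat_template(messages, ...)` and `process_vision_info(messages)` without showing how `messages` is built. For orientation, this is the Qwen-VL-style message format the upstream dots.ocr examples use; the `qwen_vl_utils` import and the placeholder prompt are assumptions taken from the upstream repo's demos, not from this commit.

```python
from huggingface_hub import snapshot_download
from qwen_vl_utils import process_vision_info  # helper used in the upstream dots.ocr demos
from transformers import AutoProcessor

local_model_path = snapshot_download(repo_id="helizac/dots.ocr-4bit")
processor = AutoProcessor.from_pretrained(local_model_path, trust_remote_code=True, use_fast=True)

# Placeholder prompt for illustration only; the upstream repo ships
# task-specific parsing prompts that this diff does not show.
prompt = "Extract the text content from this image."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "test.jpg"},  # same test image as the script
            {"type": "text", "text": prompt},
        ],
    }
]

# These two calls match the context lines shown in the diff above.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
```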
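The deleted README stated that the checkpoint was quantized with `bitsandbytes` at NF4 precision. As a rough sketch of what such a configuration looks like when quantizing the original model on the fly: the exact settings used for this repository are not recorded in the commit, so the double-quantization flag and the compute dtype below are assumptions, not confirmed values.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 per the deleted README; double quantization and the bf16 compute
# dtype are common defaults, assumed rather than confirmed by this repo.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Loading the original full-precision checkpoint with this config yields an
# in-memory 4-bit model comparable to the one published in this repository.
model = AutoModelForCausalLM.from_pretrained(
    "rednote-hilab/dots.ocr",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```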