# LayoutLMv3

[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3)
## Model description

LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis.

[LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387)
Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022.
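As a minimal usage sketch (not part of this repository's training code), the snippet below shows how LayoutLMv3 might be loaded for a token-classification task via the Hugging Face `transformers` integration. The `microsoft/layoutlmv3-base` checkpoint, the label count, and the dummy inputs are illustrative assumptions, not the exact fine-tuning recipe from the paper.

```python
from PIL import Image
import torch
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# apply_ocr=False so we can supply our own OCR words and boxes (illustrative choice).
processor = LayoutLMv3Processor.from_pretrained(
    "microsoft/layoutlmv3-base", apply_ocr=False
)
# num_labels=7 mimics a FUNSD-style BIO labeling scheme; adjust for your task.
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=7
)

# Dummy inputs: one document image plus OCR words with boxes normalized to 0-1000.
image = Image.new("RGB", (224, 224), color="white")
words = ["Invoice", "Number:", "12345"]
boxes = [[50, 40, 180, 60], [190, 40, 300, 60], [310, 40, 400, 60]]

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)
print(outputs.logits.shape)  # (batch_size, sequence_length, num_labels)
```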
## Citation

If you find LayoutLMv3 useful in your research, please cite the following paper:
```
@article{huang2022layoutlmv3,
  title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
  author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
  journal={arXiv preprint arXiv:2204.08387},
  year={2022}
}
```
## License

The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.

Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project.

[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)