---
language:
  - multilingual
  - en
  - zh
license: apache-2.0
pipeline_tag: fill-mask
tags:
  - medical
---

# KBioXLM

KBioXLM is obtained by continuing the training of XLM-R on an aligned corpus constructed with a knowledge-anchored method, combined with a multi-task training strategy. To our knowledge, it is the first multilingual biomedical pre-trained language model with cross-lingual understanding capabilities in the medical domain. It was introduced in the paper *KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model* by Geng et al. and first released in this repository.

## Model description

KBioXLM can be fine-tuned on downstream tasks. The downstream tasks here are biomedical cross-lingual understanding tasks, such as biomedical named entity recognition, biomedical relation extraction, and biomedical text classification.
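As a sketch of the fine-tuning setup, the checkpoint can be wrapped with a task head from `transformers`; the snippet below attaches a token-classification head for NER. The 3-label B/I/O scheme is illustrative, and compatibility of RoBERTa-style heads with this checkpoint is an assumption, not something stated in this card.

```python
# Sketch: preparing KBioXLM for biomedical NER fine-tuning.
# Assumptions (not from the model card): the checkpoint loads through
# RoBERTa-style classes, and a simple 3-label B/I/O tagging scheme.
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    'ngwlh/KBioXLM',
    num_labels=3,  # hypothetical B/I/O tag set for a single entity type
)
# The classification head is freshly initialized and must be trained
# on a labeled biomedical NER dataset before use.
```

The same pattern applies to relation extraction and text classification by swapping in `AutoModelForSequenceClassification`.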

## Usage

You can load our model parameters as follows:

```python
from transformers import RobertaModel

model = RobertaModel.from_pretrained('ngwlh/KBioXLM')
```
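Once loaded, the encoder can be used to extract contextual embeddings. The sketch below assumes the checkpoint ships an XLM-R style tokenizer loadable via `AutoTokenizer`; that is an assumption on our part rather than something the card states.

```python
# Sketch: extracting contextual embeddings with KBioXLM.
# Assumption: the repository provides a tokenizer loadable with AutoTokenizer.
import torch
from transformers import AutoTokenizer, RobertaModel

tokenizer = AutoTokenizer.from_pretrained('ngwlh/KBioXLM')
model = RobertaModel.from_pretrained('ngwlh/KBioXLM')
model.eval()

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per input token: (batch, seq_len, hidden_size)
print(outputs.last_hidden_state.shape)
```

For the fill-mask objective the model was trained with, the masked-LM variant of the class would be used instead; the plain `RobertaModel` shown here exposes only the encoder.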

## BibTeX entry and citation info

Coming soon.