Abstract
We introduce exBERT, a training method to extend BERT pre-trained models from a general domain to a new pre-trained model for a specific domain with a new additive vocabulary under constrained training resources (i.e., constrained computation and data). exBERT uses a small extension module to learn to adapt an augmenting embedding for the new domain in the context of the original BERT’s embedding of a general vocabulary. The exBERT training method is novel in learning the new vocabulary and the extension module while keeping the weights of the original BERT model fixed, resulting in a substantial reduction in required training resources. We pre-train exBERT with biomedical articles from ClinicalKey and PubMed Central, and study its performance on biomedical downstream benchmark tasks using the MTL-Bioinformatics-2016 dataset. We demonstrate that exBERT consistently outperforms prior approaches when using limited corpus and pre-training computation resources.
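The core idea in the abstract, an augmenting embedding for the new domain vocabulary trained alongside the frozen general-domain BERT embedding, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the class name `ExtensionEmbedding`, the convention that extension-token ids start above the original vocabulary size, and the sizes in the usage example are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ExtensionEmbedding(nn.Module):
    """Minimal sketch: tokens from the original vocabulary use the frozen
    general-domain embedding; tokens from the additive domain vocabulary
    (ids >= original vocab size) use a small trainable extension embedding."""

    def __init__(self, bert_embedding: nn.Embedding, new_vocab_size: int):
        super().__init__()
        self.orig = bert_embedding                      # original BERT embedding, kept fixed
        self.orig.weight.requires_grad = False
        self.ext = nn.Embedding(new_vocab_size,         # trainable domain-specific embedding
                                bert_embedding.embedding_dim)
        self.orig_vocab_size = bert_embedding.num_embeddings

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        is_ext = input_ids >= self.orig_vocab_size      # which tokens come from the new vocabulary
        orig_ids = input_ids.clamp(max=self.orig_vocab_size - 1)
        ext_ids = (input_ids - self.orig_vocab_size).clamp(min=0)
        return torch.where(is_ext.unsqueeze(-1),
                           self.ext(ext_ids),           # new-domain tokens: trainable embeddings
                           self.orig(orig_ids))         # general tokens: frozen embeddings


# Usage sketch with assumed sizes: a 30k frozen general vocabulary plus a 5k extension.
general = nn.Embedding(30000, 768)
layer = ExtensionEmbedding(general, new_vocab_size=5000)
ids = torch.tensor([[101, 2023, 31042, 102]])           # id 31042 falls in the extension range
print(layer(ids).shape)                                 # torch.Size([1, 4, 768])
```

The sketch covers only the vocabulary/embedding side; in the full method, a small extension module is also trained next to the fixed original BERT layers, while all original weights remain frozen, as described in the abstract.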
Original language | English |
---|---|
Title of host publication | Findings of the Association for Computational Linguistics: Findings of ACL |
Subtitle of host publication | EMNLP 2020 |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 1433-1439 |
Number of pages | 7 |
ISBN (Electronic) | 9781952148903 |
State | Published - 2020 |
Event | Findings of the Association for Computational Linguistics, ACL 2020: EMNLP 2020 - Virtual, Online. Duration: 16/11/20 → 20/11/20 |
Publication series
Name | Findings of the Association for Computational Linguistics Findings of ACL: EMNLP 2020 |
---|---|
Conference
Conference | Findings of the Association for Computational Linguistics, ACL 2020: EMNLP 2020 |
---|---|
City | Virtual, Online |
Period | 16/11/20 → 20/11/20 |
Bibliographical note
Publisher Copyright: © 2020 Association for Computational Linguistics