The Transformer has been widely applied in the Natural Language Processing (NLP) field and has given rise to a number of pre-trained language models such as BioBERT, SciBERT, NCBI_Bluebert, and PubMedBERT. In this paper, we introduce our system for BioASQ Task 9b Phase B. We employed several pre-trained biomedical language models, including BioBERT, BioBERT-MNLI, and PubMedBERT, to generate "exact" answers to the questions, and a linear regression model over our sentence embeddings to select the top-n sentences as predictions for "ideal" answers.
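The "ideal" answer step described above can be sketched as scoring candidate sentences with a linear model over their embeddings and keeping the top-n. This is a minimal illustration only: the embeddings, weights, and function name below are toy placeholders, not the paper's actual model or features.

```python
# Hypothetical sketch: rank candidate sentences by a linear score w·x
# over their sentence embeddings and return the best n as the answer.
# Embeddings and weights are made-up toy values for illustration.

def top_n_sentences(sentences, embeddings, weights, n=2):
    """Score each sentence with a dot product and return the top-n."""
    scores = [sum(w * x for w, x in zip(weights, emb)) for emb in embeddings]
    ranked = sorted(zip(scores, sentences), key=lambda p: p[0], reverse=True)
    return [s for _, s in ranked[:n]]

sentences = ["A causes B.", "Unrelated detail.", "B is treated by C."]
embeddings = [[0.9, 0.1], [0.1, 0.2], [0.8, 0.4]]  # toy 2-d embeddings
weights = [1.0, 0.5]                               # toy learned weights
print(top_n_sentences(sentences, embeddings, weights, n=2))
```

In the actual system, the weights would be fit by linear regression against a relevance signal, and the embeddings would come from a pre-trained biomedical language model rather than hand-written vectors.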
|Pages (from-to)||360-368|
|Journal||CEUR Workshop Proceedings|
|Publication status||Published - 2021|
|Event||2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 - Virtual, Bucharest, Romania|
Duration: 21 Sep 2021 → 24 Sep 2021