NCU-IISR/AS-GIS: Results of various pre-trained biomedical language models and linear regression model in BioASQ task 9b Phase B

Yu Zhang, Jen Chieh Han, Richard Tzong Han Tsai

Research output: Contribution to journal › Conference article › peer-review


Abstract

The Transformer has been widely applied in the Natural Language Processing (NLP) field and has given rise to a number of pre-trained language models such as BioBERT, SciBERT, NCBI_Bluebert, and PubMedBERT. In this paper, we introduce our system for BioASQ Task 9b Phase B. We employed various pre-trained biomedical language models, including BioBERT, BioBERT-MNLI, and PubMedBERT, to generate “exact” answers to the questions, and a linear regression model over our sentence embeddings to select the top-n sentences as predictions for the “ideal” answers.
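
To make the “ideal”-answer step concrete, the sketch below scores candidate snippet sentences with a linear regression model over sentence embeddings and concatenates the top-n. This is a minimal illustration, not the authors' code: the hash-based embed() function and the relevance targets (assumed here to be similarity scores against gold ideal answers) are stand-ins for the paper's actual sentence embedding and training signal.

    # Sketch: rank sentences by a linear regression score and keep the top-n.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    DIM = 256  # assumed embedding dimensionality

    def embed(sentence: str) -> np.ndarray:
        """Toy sentence embedding: hashed bag-of-words, L2-normalized (placeholder)."""
        vec = np.zeros(DIM)
        for token in sentence.lower().split():
            vec[hash(token) % DIM] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0 else vec

    # Training pairs: a sentence and an assumed relevance target
    # (e.g. overlap with the gold "ideal" answer).
    train_sentences = ["BioBERT is pre-trained on PubMed abstracts.",
                       "The weather was pleasant during the conference."]
    train_targets = [0.9, 0.1]

    model = LinearRegression()
    model.fit(np.stack([embed(s) for s in train_sentences]), train_targets)

    # Inference: score candidate snippet sentences and join the top-n as the answer.
    candidates = ["PubMedBERT is trained from scratch on biomedical text.",
                  "SciBERT uses a scientific vocabulary.",
                  "Registration for the workshop is now open."]
    scores = model.predict(np.stack([embed(s) for s in candidates]))
    top_n = 2
    ideal_answer = " ".join(s for _, s in
                            sorted(zip(scores, candidates), reverse=True)[:top_n])
    print(ideal_answer)

In the actual system, the regression targets and embeddings would come from the BioASQ training data and the pre-trained biomedical models named above; only the ranking-and-concatenation pattern is shown here.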

Original language: English
Pages (from-to): 360-368
Number of pages: 9
Journal: CEUR Workshop Proceedings
Volume: 2936
State: Published - 2021
Event: 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 - Virtual, Bucharest, Romania
Duration: 21 Sep 2021 – 24 Sep 2021

Keywords

  • Biomedical question answering
  • Linear regression
  • Pre-trained language model
