NCU-IISR: Pre-trained Language Model for CANTEMIST Named Entity Recognition

Jen-Chieh Han, Richard Tzong-Han Tsai

Research output: Contribution to journal › Conference article › peer-review


Abstract

Since BERT brought substantial improvements across a variety of NLP tasks, well-constructed pre-trained language models have also shown their power when fine-tuned on other downstream tasks. In this paper, the NCU-IISR team adopted the Spanish BERT model, BETO, as our pre-trained model and fine-tuned it on the CANTEMIST Named Entity Recognition (NER) data. In addition, we compared it with another fine-tuned version trained on an external Spanish medical text corpus. Our best run achieved an F1-measure of 0.85 on the official test set for the CANTEMIST-NER task.

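The paper itself does not include training code; the following is a minimal sketch of how such a BETO fine-tuning setup could look with the Hugging Face Transformers library. The checkpoint name, BIO label set for the tumour-morphology entity type, example sentence, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: fine-tuning BETO for CANTEMIST-style token classification.
# Model name, labels, data, and hyperparameters below are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"  # BETO checkpoint (assumed)
LABELS = ["O", "B-MORFOLOGIA_NEOPLASIA", "I-MORFOLOGIA_NEOPLASIA"]  # assumed BIO scheme

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

# One toy training sentence with word-level BIO tags (illustrative, not real CANTEMIST data).
words = ["Paciente", "con", "carcinoma", "ductal", "infiltrante", "."]
word_tags = ["O", "O", "B-MORFOLOGIA_NEOPLASIA", "I-MORFOLOGIA_NEOPLASIA",
             "I-MORFOLOGIA_NEOPLASIA", "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to sub-word tokens; special tokens get -100 (ignored by the loss).
labels = [-100 if wid is None else LABELS.index(word_tags[wid])
          for wid in enc.word_ids(batch_index=0)]
labels = torch.tensor([labels])

# A single fine-tuning step; the paper trains on the full CANTEMIST training set.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # learning rate assumed
loss = model(**enc, labels=labels).loss
loss.backward()
optimizer.step()
```
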
Original language: English
Pages (from-to): 347-351
Number of pages: 5
Journal: CEUR Workshop Proceedings
Volume: 2664
State: Published - 2020
Event: 2020 Iberian Languages Evaluation Forum, IberLEF 2020 - Malaga, Spain
Duration: 23 Sep 2020 → …

Keywords

  • Deep learning
  • Electronic health records
  • Named entity recognition
  • Pre-trained language model

