TY - JOUR
T1 - NCU-IISR
T2 - 2020 Iberian Languages Evaluation Forum, IberLEF 2020
AU - Han, Jen Chieh
AU - Tsai, Richard Tzong Han
N1 - Publisher Copyright:
© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2020
Y1 - 2020
N2 - Since BERT has brought large improvements to various NLP tasks, well-constructed pre-trained language models have also shown their power when fine-tuned on downstream tasks. In this paper, the NCU-IISR team adopted the Spanish BERT model, BETO, as our pre-trained model and fine-tuned it on the CANTEMIST Named Entity Recognition (NER) data. In addition, we compared it with another fine-tuned version that was trained on an external Spanish medical text. Our best run achieved an F1-measure of 0.85 on the official test set of the CANTEMIST-NER task.
AB - Since BERT has brought large improvements to various NLP tasks, well-constructed pre-trained language models have also shown their power when fine-tuned on downstream tasks. In this paper, the NCU-IISR team adopted the Spanish BERT model, BETO, as our pre-trained model and fine-tuned it on the CANTEMIST Named Entity Recognition (NER) data. In addition, we compared it with another fine-tuned version that was trained on an external Spanish medical text. Our best run achieved an F1-measure of 0.85 on the official test set of the CANTEMIST-NER task.
KW - Deep learning
KW - Electronic health records
KW - Named entity recognition
KW - Pre-trained language model
UR - http://www.scopus.com/inward/record.url?scp=85092213391&partnerID=8YFLogxK
M3 - Conference paper
AN - SCOPUS:85092213391
SN - 1613-0073
VL - 2664
SP - 347
EP - 351
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
Y2 - 23 September 2020
ER -