Since BERT brought substantial improvements across various NLP tasks, well-constructed pre-trained language models have also shown their power when fine-tuned on downstream tasks. In this paper, the NCU-IISR team adopted BETO, a Spanish BERT, as our pre-trained model and fine-tuned it on the CANTEMIST Named Entity Recognition (NER) data. We also compared it with another fine-tuned version that was trained on an external Spanish medical corpus. Our best model achieved an F1-measure of 0.85 on the official test set for the CANTEMIST-NER task.
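As an illustration of the fine-tuning setup the abstract describes, the sketch below shows how a BETO checkpoint can be fine-tuned for token classification with the Hugging Face Transformers library. The checkpoint name `dccuchile/bert-base-spanish-wwm-cased` (the public BETO release), the BIO label set, and the toy training example are assumptions for illustration only; the paper's actual data pipeline and hyperparameters are not reproduced here.

```python
# Minimal sketch: fine-tuning BETO for NER-style token classification.
# The CANTEMIST data loading is hypothetical; labels follow a simple BIO scheme.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"  # public BETO checkpoint (assumed)
label_list = ["O", "B-MORFOLOGIA_NEOPLASIA", "I-MORFOLOGIA_NEOPLASIA"]
label2id = {l: i for i, l in enumerate(label_list)}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(label_list)
)

# One toy example; real training would iterate over CANTEMIST-NER sentences.
words = ["Carcinoma", "ductal", "infiltrante", "."]
tags = ["B-MORFOLOGIA_NEOPLASIA", "I-MORFOLOGIA_NEOPLASIA",
        "I-MORFOLOGIA_NEOPLASIA", "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens; special tokens get ignore index -100.
aligned = [-100 if wi is None else label2id[tags[wi]] for wi in enc.word_ids()]
labels_tensor = torch.tensor([aligned])

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
out = model(**enc, labels=labels_tensor)  # cross-entropy loss over token labels
out.loss.backward()
optimizer.step()
```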
| Pages (from-to) | 347-351 |
| Journal | CEUR Workshop Proceedings |
| Publication status | Published - 2020 |
| Event | 2020 Iberian Languages Evaluation Forum, IberLEF 2020 - Malaga, Spain |
| Duration | 23 Sep 2020 → … |