NCU-IISR: Pre-trained language model for CANTEMIST named entity recognition

Jen Chieh Han, Richard Tzong Han Tsai

Research output: Contribution to journal › Conference article › peer-review

1 citation (Scopus)

Abstract

Since BERT brought large improvements across various NLP tasks, well-constructed pre-trained language models have also shown their power when fine-tuned on other downstream tasks. In this paper, the NCU-IISR team adopted the Spanish BERT model, BETO, as our pre-trained model and fine-tuned it on the CANTEMIST Named Entity Recognition (NER) data. In addition, we compared it with another fine-tuned version that was trained on an external Spanish medical text. Our best model achieved an F1-measure of 0.85 on the official test set for the CANTEMIST-NER task.
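The reported score is the standard entity-level F1 used for NER shared tasks. As a minimal illustrative sketch (not the authors' code), the metric can be computed from BIO-tagged label sequences as follows; the `MORFOLOGIA` label name is an assumption for illustration only:

```python
# Illustrative sketch: entity-level (exact-span) F1 over BIO-tagged
# sequences, the usual metric for NER tasks such as CANTEMIST-NER.

def bio_spans(labels):
    """Extract (start, end, type) entity spans from one BIO label sequence."""
    spans, start, etype = [], None, None
    for i, lab in enumerate(labels):
        if lab.startswith("B-"):          # a new entity begins here
            if start is not None:
                spans.append((start, i, etype))
            start, etype = i, lab[2:]
        elif lab.startswith("I-") and start is not None and lab[2:] == etype:
            continue                      # entity continues
        else:                             # "O" or an inconsistent tag ends it
            if start is not None:
                spans.append((start, i, etype))
            start, etype = None, None
    if start is not None:                 # entity running to end of sentence
        spans.append((start, len(labels), etype))
    return spans

def entity_f1(gold, pred):
    """Micro-averaged entity-level F1 between gold and predicted sequences."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        gs, ps = set(bio_spans(g)), set(bio_spans(p))
        tp += len(gs & ps)                # exact span-and-type matches
        fp += len(ps - gs)
        fn += len(gs - ps)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Example: one sentence with one gold entity; the prediction adds a spurious one.
# "MORFOLOGIA" is a hypothetical label name used only for this sketch.
gold = [["B-MORFOLOGIA", "I-MORFOLOGIA", "O", "O"]]
pred = [["B-MORFOLOGIA", "I-MORFOLOGIA", "O", "B-MORFOLOGIA"]]
print(round(entity_f1(gold, pred), 2))  # → 0.67
```

Only spans whose boundaries and type both match count as true positives, which is why entity-level F1 is stricter than per-token accuracy.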

Original language: English
Pages (from-to): 347-351
Number of pages: 5
Journal: CEUR Workshop Proceedings
Volume: 2664
Publication status: Published - 2020
Event: 2020 Iberian Languages Evaluation Forum, IberLEF 2020 - Malaga, Spain
Duration: 23 Sep 2020 → …

