Abstract
Since BERT brought substantial improvements across a wide range of NLP tasks, well-constructed pre-trained language models have shown their power when fine-tuned on downstream tasks as well. In this paper, the NCU-IISR team adopted the Spanish BERT model, BETO, as our pre-trained model and fine-tuned it on the CANTEMIST Named Entity Recognition (NER) data. In addition, we compared it with another fine-tuned version trained on an external Spanish medical text corpus. Our best model achieved an F1-measure of 0.85 on the official test set for the CANTEMIST-NER task.
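The abstract describes fine-tuning BETO for token-level entity recognition. The following is a minimal sketch of that general setup using the Hugging Face `transformers` library; it is not the authors' actual pipeline. The model identifier, the label set, the example sentence, and the absence of any training loop are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): loading BETO for token
# classification, as one would before fine-tuning it on CANTEMIST-NER.
# The label names and example sentence below are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # BETO (cased) on the HF hub
labels = ["O", "B-MORFOLOGIA_NEOPLASIA", "I-MORFOLOGIA_NEOPLASIA"]  # assumed BIO tag set

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels)
)

# Run a forward pass on one illustrative clinical sentence; in practice the
# classification head would first be fine-tuned on the CANTEMIST corpus.
sentence = "Paciente con carcinoma ductal infiltrante de mama."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1).squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [labels[i] for i in pred_ids])))
```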
| Original language | English |
|---|---|
| Pages (from-to) | 347-351 |
| Number of pages | 5 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2664 |
| State | Published - 2020 |
| Event | 2020 Iberian Languages Evaluation Forum, IberLEF 2020, Malaga, Spain. Duration: 23 Sep 2020 → … |
Keywords
- Deep learning
- Electronic health records
- Named entity recognition
- Pre-trained language model