Ensemble Pre-trained Transformer Models for Writing Style Change Detection

Tzu-Mi Lin, Chao-Yi Chen, Yu-Wen Tzeng, Lung-Hao Lee

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper describes our proposed system design for the Style Change Detection (SCD) tasks of PAN at CLEF 2022. We propose a unified architecture of ensemble neural networks to solve the three SCD-2022 tasks. We fine-tune the BERT, RoBERTa and ALBERT transformers, each with a connected classifier, to measure the similarity of two given paragraphs or sentences for authorship analysis. Each transformer model is treated as a standalone detector of writing-style differences for each test pair, and the final prediction is obtained by majority voting over the individual model outputs. For SCD-2022 Task 1, which requires finding the single position of a style change at the paragraph level, our approach achieves a macro F1-score of 0.7540. For SCD-2022 Task 2, which requires attributing each paragraph to its actual author, our method achieves a macro F1-score of 0.5097, a Diarization Error Rate of 0.1941 and a Jaccard Error Rate of 0.3095. For SCD-2022 Task 3, which requires locating writing-style changes at the sentence level, our model achieves a macro F1-score of 0.7156. In summary, our method is the winning approach among all intrinsic approaches.
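
The abstract describes a sentence-pair classification design combined with majority voting. The following is a minimal, illustrative sketch of that scheme using the Hugging Face transformers library; the specific checkpoints (bert-base-uncased, roberta-base, albert-base-v2) and the two-label classification head are assumptions for illustration, not the authors' released code, and each model would need to be fine-tuned on the SCD-2022 pair data before its predictions are meaningful.

```python
# Sketch of the described ensemble: three fine-tuned transformers each score
# a paragraph (or sentence) pair for a style change, and the final label is
# the majority vote of their predictions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed base checkpoints; the paper fine-tunes BERT, RoBERTa and ALBERT.
MODEL_NAMES = ["bert-base-uncased", "roberta-base", "albert-base-v2"]

def predict_pair(model_name: str, text_a: str, text_b: str) -> int:
    """Return 1 if the model predicts a style change between the two texts, else 0."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2
    )
    model.eval()
    # Encode the two paragraphs as a single sentence-pair input.
    inputs = tokenizer(text_a, text_b, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item())

def majority_vote(text_a: str, text_b: str) -> int:
    """Combine the three standalone predictions by majority voting."""
    votes = [predict_pair(name, text_a, text_b) for name in MODEL_NAMES]
    return int(sum(votes) >= 2)  # label wins if at least 2 of 3 models agree
```

With three models, majority voting needs no tie-breaking rule, which is a plausible reason for an odd-sized ensemble; this sketch reloads each model per call for brevity, whereas a real pipeline would load them once.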

Original language: English
Pages (from-to): 2565-2573
Number of pages: 9
Journal: CEUR Workshop Proceedings
Volume: 3180
State: Published - 2022
Event: 2022 Conference and Labs of the Evaluation Forum, CLEF 2022 - Bologna, Italy
Duration: 5 Sep 2022 - 8 Sep 2022

Keywords

  • Authorship Analysis
  • Ensemble Learning
  • Plagiarism Detection
  • Pre-trained Models
