Improving colloquial case legal judgment prediction via abstractive text summarization

Yu Xiang Hong, Chia Hui Chang

Research output: Contribution to journal › Journal article › Peer-reviewed

Abstract

Most studies on Legal Judgment Prediction (LJP) use court verdicts or indictments as the training data source. Such models can assist judicial professionals, who are able to use legal jargon, in efficiently predicting sentences. However, for ordinary, non-professional users, who can only provide a vague and incomplete description of their situation due to the lack of a legal background, the predictive ability of such models is greatly limited. To address this issue, we propose a colloquial-case LJP framework called PekoNet, which incorporates Abstractive Text Summarization (ATS) into training for the LJP task to improve prediction accuracy for colloquial case descriptions. We consider two approaches: independent training and joint training. The former trains two separate models independently, while the latter jointly trains both the ATS and LJP modules with either ATS-Freezing or ATS-Finetuning. The performance of these models is evaluated on two automatically summarized testing datasets, generated by BART and ChatGPT, as well as on human-provided case summaries. The experimental results demonstrate that the models developed with PekoNet outperform the typical LJP model on colloquial case descriptions by 3.6%-10.8%.
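The distinction between the two joint-training variants can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Module` class, the scalar update rule, and the mode names are assumptions used only to show that ATS-Freezing keeps the summarizer's parameters fixed while ATS-Finetuning updates both modules.

```python
# Hypothetical sketch of PekoNet's two joint-training modes.
# All names and the update rule are illustrative assumptions.

class Module:
    """Toy stand-in for a neural module with one scalar parameter."""
    def __init__(self, w):
        self.w = w
        self.trainable = True

    def step(self, grad, lr=0.1):
        # Apply a gradient step only when the module is not frozen.
        if self.trainable:
            self.w -= lr * grad

def joint_train_step(ats, ljp, grad, mode="ats-finetuning"):
    # ATS-Freezing: keep the summarizer fixed; update only the LJP module.
    # ATS-Finetuning: update both modules end to end.
    ats.trainable = (mode == "ats-finetuning")
    ats.step(grad)
    ljp.step(grad)

ats, ljp = Module(1.0), Module(1.0)
joint_train_step(ats, ljp, grad=1.0, mode="ats-freezing")
# ats.w stays 1.0 (frozen); ljp.w becomes 0.9
```

In a real system the freeze would typically be implemented by disabling gradient tracking on the summarizer's parameters rather than by a flag check per step.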

Original language: English
Article number: 105863
Journal: Computer Law and Security Review
Volume: 51
DOIs
Publication status: Published - Nov 2023
