TY - JOUR
T1 - Improving colloquial case legal judgment prediction via abstractive text summarization
AU - Hong, Yu Xiang
AU - Chang, Chia Hui
N1 - Publisher Copyright:
© 2023 Yu-Xiang Hong and Chia-Hui Chang
PY - 2023/11
Y1 - 2023/11
N2 - Most studies on Legal Judgment Prediction (LJP) use court verdicts or indictments as the training data source. Such models could assist judicial professionals, who can use legal jargon to efficiently predict sentences. However, for ordinary, non-professional users, who can only provide a vague and incomplete description of the situation due to their lack of legal background, the predictive ability of the model is greatly limited. To address this issue, we propose a colloquial case-based LJP framework called PekoNet, which incorporates Abstractive Text Summarization (ATS) into training for the LJP task to improve prediction accuracy for colloquial case descriptions. We considered two approaches: independent training and joint training. The former trains two separate models independently, while the latter jointly trains both the ATS and LJP modules with either ATS-Freezing or ATS-Finetuning. The performance of these models is evaluated on two automatically summarized testing datasets (BART and ChatGPT) as well as on human-provided case summaries. The experimental results demonstrate that the models developed with PekoNet outperform the typical LJP model on colloquial case descriptions by 3.6%-10.8%.
AB - Most studies on Legal Judgment Prediction (LJP) use court verdicts or indictments as the training data source. Such models could assist judicial professionals, who can use legal jargon to efficiently predict sentences. However, for ordinary, non-professional users, who can only provide a vague and incomplete description of the situation due to their lack of legal background, the predictive ability of the model is greatly limited. To address this issue, we propose a colloquial case-based LJP framework called PekoNet, which incorporates Abstractive Text Summarization (ATS) into training for the LJP task to improve prediction accuracy for colloquial case descriptions. We considered two approaches: independent training and joint training. The former trains two separate models independently, while the latter jointly trains both the ATS and LJP modules with either ATS-Freezing or ATS-Finetuning. The performance of these models is evaluated on two automatically summarized testing datasets (BART and ChatGPT) as well as on human-provided case summaries. The experimental results demonstrate that the models developed with PekoNet outperform the typical LJP model on colloquial case descriptions by 3.6%-10.8%.
KW - Abstractive text summarization
KW - Legal artificial intelligence
KW - Legal judgment prediction
KW - Legal text summarization
UR - http://www.scopus.com/inward/record.url?scp=85169036466&partnerID=8YFLogxK
U2 - 10.1016/j.clsr.2023.105863
DO - 10.1016/j.clsr.2023.105863
M3 - Journal article
AN - SCOPUS:85169036466
SN - 0267-3649
VL - 51
JO - Computer Law and Security Review
JF - Computer Law and Security Review
M1 - 105863
ER -