Improving colloquial case legal judgment prediction via abstractive text summarization

Yu Xiang Hong, Chia Hui Chang

Research output: Contribution to journal › Article › peer-review


Most studies on Legal Judgment Prediction (LJP) use court verdicts or indictments as the training data source. Such models can assist judicial professionals, who can use legal jargon to predict sentences efficiently. However, for ordinary, non-professional users, who can only provide a vague and incomplete description of their situation due to a lack of legal background, the predictive ability of the model is greatly limited. To address this issue, we propose a colloquial case-based LJP framework called PekoNet, which incorporates Abstractive Text Summarization (ATS) into training for the LJP task to improve prediction accuracy on colloquial case descriptions. We consider two approaches: independent training and joint training. The former trains two separate models independently, while the latter trains the ATS and LJP modules jointly with either ATS-Freezing or ATS-Finetuning. The performance of these models is evaluated on two automatically summarized test datasets, generated by BART and ChatGPT, as well as on human-provided case summaries. The experimental results demonstrate that the models developed with PekoNet outperform the typical LJP model on colloquial case descriptions by 3.6%-10.8%.

Original language: English
Article number: 105863
Journal: Computer Law and Security Review
State: Published - Nov 2023


  • Abstractive text summarization
  • Legal artificial intelligence
  • Legal judgment prediction
  • Legal text summarization

