Using Parameter Efficient Fine-Tuning on Legal Artificial Intelligence

Kuo Chun Chien, Chia Hui Chang, Ren Der Sun

Research output: Contribution to journal › Conference article › peer-review

Abstract

Legal AI has a wide range of applications, such as predicting whether a prosecuted defendant will be punished, or whether the punishment will be a prison sentence or a fine. However, recent advances in natural language processing have produced an ever-increasing number of language models, and fine-tuning pre-trained language models and storing the resulting fine-tuned copies is becoming increasingly expensive. To address this issue, we adopted the concept of Parameter Efficient Fine-Tuning (PEFT) and applied it to the field of Legal AI. By leveraging PEFT techniques, particularly the Low-Rank Adaptation (LoRA) architecture, we achieved promising results in fine-tuning pre-trained language models. This approach enables us to achieve comparable, if not superior, performance while significantly reducing the time required for model adjustments. It demonstrates the potential of PEFT techniques in adapting language models to different legal frameworks, enhancing the accuracy and relevance of legal knowledge services, and making Legal AI more accessible to individuals without legal backgrounds.
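
To make the LoRA workflow described in the abstract concrete, the following is a minimal sketch using the Hugging Face transformers and peft libraries. The base checkpoint, number of labels, and LoRA hyperparameters (r, lora_alpha, target_modules) are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal LoRA fine-tuning setup for a legal judgment prediction task.
# Assumptions: Hugging Face `transformers` and `peft` are installed;
# the checkpoint and hyperparameters below are illustrative only.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Hypothetical base model; the paper does not specify this checkpoint.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",   # e.g. for Chinese-language legal texts
    num_labels=2,          # e.g. prison sentence vs. fine
)

# LoRA freezes the pre-trained weights and injects small trainable
# low-rank matrices into the attention projections, so only a tiny
# fraction of parameters is updated and stored per task.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                   # rank of the low-rank update (assumed)
    lora_alpha=16,         # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

Because only the LoRA adapter weights need to be saved for each legal task, storage per fine-tuned variant drops from gigabytes to a few megabytes, which is the cost reduction the abstract emphasizes.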

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3637
State: Published - 2023
Event: Joint Ontology Workshops 2023, Episode IX: The Quebec Summer of Ontology, JOWO 2023 - Sherbrooke, Canada
Duration: 19 Jul 2023 – 20 Jul 2023

Keywords

  • Legal AI
  • Legal Judgment Prediction
  • Parameter-Efficient Fine-Tuning
