Using Parameter Efficient Fine-Tuning on Legal Artificial Intelligence

Kuo Chun Chien, Chia Hui Chang, Ren Der Sun

Research output: Contribution to journal › Conference article › peer-review

Abstract

Legal AI has a wide range of applications, such as predicting whether a prosecuted case will result in punishment, or whether that punishment will be a prison sentence or a fine. However, recent advances in natural language processing have produced an ever-increasing number of language models, and the cost of fine-tuning a pre-trained language model and storing each fine-tuned copy keeps growing. To address this issue, we adopted the concept of Parameter-Efficient Fine-Tuning (PEFT) and applied it to the field of Legal AI. By leveraging PEFT techniques, in particular the Low-Rank Adaptation (LoRA) architecture, we achieved promising results in fine-tuning pre-trained language models. This approach yields comparable, if not superior, performance while significantly reducing the time required for model adaptation. It demonstrates the potential of PEFT techniques for adapting language models to different legal frameworks, enhancing the accuracy and relevance of legal knowledge services, and making Legal AI more accessible to people without legal backgrounds.
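The record does not include implementation details, but as a rough sketch of what LoRA-based fine-tuning looks like in practice, the following Python example uses the Hugging Face peft library. The base checkpoint (bert-base-chinese), the rank r, the scaling factor lora_alpha, the target modules, and the two-label task framing (prison sentence vs. fine) are illustrative assumptions, not the authors' published configuration.

# A minimal LoRA fine-tuning sketch with Hugging Face peft.
# All hyperparameters below are assumptions for illustration.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Assumed base model for legal-text classification.
base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=2,  # e.g. prison sentence vs. fine (assumed task framing)
)

# LoRA freezes the pre-trained weight W and learns a low-rank update,
# so the adapted weight is W + (lora_alpha / r) * B @ A with small A, B.
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the update (assumed)
    lora_alpha=16,                      # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projections
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% are trainable

Because only the small adapter matrices are trained and stored per task, many downstream legal tasks can share one frozen base model, which is the reduction in fine-tuning and storage cost the abstract refers to.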

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3637
Publication status: Published - 2023
Event: Joint Ontology Workshops 2023, Episode IX: The Quebec Summer of Ontology, JOWO 2023 - Sherbrooke, Canada
Duration: 19 Jul 2023 - 20 Jul 2023
