TY - JOUR
T1 - Using Parameter Efficient Fine-Tuning on Legal Artificial Intelligence
AU - Chien, Kuo Chun
AU - Chang, Chia Hui
AU - Sun, Ren Der
N1 - Publisher Copyright:
© 2023 Copyright for this paper by its authors.
PY - 2023
Y1 - 2023
AB - Legal AI has a wide range of applications, such as predicting whether a prosecution will result in a conviction, or whether the punishment will be a prison sentence or a fine. However, recent advances in natural language processing have produced an ever-increasing number of language models, and the cost of fine-tuning pre-trained language models and storing the resulting fine-tuned models keeps rising. To address this issue, we adopt the concept of Parameter-Efficient Fine-Tuning (PEFT) and apply it to the field of Legal AI. By leveraging PEFT techniques, in particular the Low-Rank Adaptation (LoRA) architecture, we achieve promising results in fine-tuning pre-trained language models. This approach yields comparable, if not superior, performance while significantly reducing the time required for model adaptation. It demonstrates the potential of PEFT techniques for adapting language models to different legal frameworks, improving the accuracy and relevance of legal knowledge services, and making Legal AI more accessible to people without a legal background.
KW - Legal AI
KW - Legal Judgment Prediction
KW - Parameter-Efficient Fine-Tuning
UR - http://www.scopus.com/inward/record.url?scp=85185202081&partnerID=8YFLogxK
M3 - Conference paper
AN - SCOPUS:85185202081
SN - 1613-0073
VL - 3637
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - Joint Ontology Workshops 2023, Episode IX: The Quebec Summer of Ontology, JOWO 2023
Y2 - 19 July 2023 through 20 July 2023
ER -