Abstract
Legal AI has a wide range of applications, such as predicting whether a prosecution will result in punishment, or whether that punishment will be a prison sentence or a fine. Meanwhile, rapid advances in natural language processing have produced an ever-increasing number of language models, and the cost of fine-tuning a pre-trained language model and storing the resulting fine-tuned copies keeps growing. To address this issue, we adopted the concept of Parameter-Efficient Fine-Tuning (PEFT) and applied it to the field of Legal AI. By leveraging PEFT techniques, particularly the Low-Rank Adaptation (LoRA) architecture, we achieved promising results in fine-tuning pre-trained language models: comparable, if not superior, performance while significantly reducing the time required for model adaptation. This demonstrates the potential of PEFT techniques for adapting language models to different legal frameworks, enhancing the accuracy and relevance of legal knowledge services, and making Legal AI more accessible to individuals without legal backgrounds.
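To illustrate why LoRA reduces fine-tuning cost, the sketch below (an assumption for illustration, not the paper's actual implementation) counts trainable parameters: a frozen weight matrix W of shape d × k is adapted as W + (α / r) · B A, where B is d × r and A is r × k, so only r · (d + k) parameters are trained and stored per layer instead of d · k.

```python
def full_finetune_params(d: int, k: int) -> int:
    """Trainable parameters when the full d x k weight matrix is updated."""
    return d * k


def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA update B @ A of a d x k weight:
    B has d * r entries and A has r * k entries."""
    return r * (d + k)


# Example: one 768 x 768 attention projection (BERT-base-sized, a common
# choice for legal-text classifiers) with a typical low rank r = 8.
d = k = 768
r = 8
full = full_finetune_params(d, k)      # -> 589824
lora = lora_trainable_params(d, k, r)  # -> 12288
print(f"full: {full}, LoRA (r={r}): {lora}, ratio: {lora / full:.2%}")
```

With these (hypothetical) dimensions, the LoRA update trains roughly 2% of the parameters of full fine-tuning for that layer, and only the small A and B matrices need to be stored per downstream task.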
| Original language | English |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 3637 |
| State | Published - 2023 |
| Event | Joint Ontology Workshops 2023, Episode IX: The Quebec Summer of Ontology, JOWO 2023 - Sherbrooke, Canada |
| Duration | 19 Jul 2023 → 20 Jul 2023 |
Keywords
- Legal AI
- Legal Judgment Prediction
- Parameter-Efficient Fine-Tuning