Enhancing Accessibility of Government Notices Through LLM-Based Multilingual Translation
Dr. M. U. Karande1, Mr. Rohit V. Talele2, Miss Shruti S. Ujjainkar3, Miss Shiva D. Hinge4, Mr. Saurabh R. Patil5
1 Professor, Department of Computer Science Engg, Dr. V. B. Kolte College of Engineering, Malkapur, India
2,3,4,5 Students, Department of Computer Science Engg, Dr. V. B. Kolte College of Engineering, Malkapur, India
Abstract – In computational linguistics, machine translation (MT) for low-resource languages remains a crucial challenge, as these languages lack the extensive data available to high-resource languages. General-purpose large language models (LLMs), such as GPT-4 and Llama, are trained primarily on monolingual corpora and therefore struggle to translate low-resource languages, often producing subpar translations. This study introduces Language-Specific Fine-Tuning with Low-rank adaptation (LSFTL), a method that improves translation for low-resource languages by adapting the multi-head attention and feed-forward networks of Transformer layers through low-rank matrices. LSFTL freezes the majority of the model parameters and selectively fine-tunes key components, thereby maintaining stability while enhancing translation quality. In experiments on non-English-centered low-resource Asian languages, LSFTL improved COMET scores by 1–3 points over specialized multilingual machine translation models. Moreover, LSFTL's parameter-efficient approach allows smaller models to achieve performance comparable to their larger counterparts, making machine translation systems more accessible and effective for low-resource languages.

Key Words: Machine translation, low-resource languages, large language models, parameter-efficient fine-tuning, LoRA.
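The abstract describes attaching low-rank adapters to the multi-head attention and feed-forward projections of Transformer layers while freezing the remaining parameters. As a minimal illustration only (this is not the authors' released code), the sketch below shows how such adapters can be configured with the Hugging Face peft library; the model name, rank, scaling factor, and target module names are placeholder assumptions chosen for a Llama-style architecture, not the paper's reported configuration.

# Minimal sketch: LoRA adapters on the attention and feed-forward
# projections of a Llama-style model, using Hugging Face's peft library.
# Hyperparameter values here are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=16,            # rank of the low-rank update matrices
    lora_alpha=32,   # scaling factor applied to the adapter output
    lora_dropout=0.05,
    # Multi-head attention projections, then the feed-forward projections:
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)  # base weights stay frozen
model.print_trainable_parameters()    # typically well under 1% of total

Because only the injected low-rank matrices are trainable, fine-tuning of this kind can be run on a fraction of the memory that full fine-tuning would require, which is consistent with the parameter-efficiency claim made above.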