A Framework to Assess Multilingual Vulnerabilities of LLMs

Date

2025-05-23

Publisher

Association for Computing Machinery

Rights

(c) 2025 The Author/s
CC BY 4.0

Abstract

Large Language Models (LLMs) are acquiring a wider range of capabilities, including understanding and responding in multiple languages. While they undergo safety training to prevent them from answering illegal questions, imbalances in training data and human evaluation resources can make these models more susceptible to attacks in low-resource languages (LRLs). This paper proposes a framework to automatically assess the multilingual vulnerabilities of commonly used LLMs. Using our framework, we evaluated six LLMs across eight languages representing varying levels of resource availability. We validated the assessments generated by our automated framework through human evaluation in two languages, demonstrating that the framework's results align with human judgments in most cases. Our findings reveal vulnerabilities in LRLs; however, these may pose minimal risk as they often stem from the model's poor performance, resulting in incoherent responses.
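The framework itself is not reproduced on this page. As a rough illustration of the kind of automated assessment the abstract describes, the sketch below translates a set of harmful prompts into each target language, queries a model, and records the fraction of prompts that elicit an unsafe response. The helpers translate, query_model, and judge_unsafe are hypothetical placeholders for whatever translation, model, and safety-judging components a reader would plug in; they are not the authors' published pipeline.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Result:
    language: str
    prompt: str
    response: str
    unsafe: bool

def assess_multilingual_vulnerability(
    prompts: List[str],
    languages: List[str],
    translate: Callable[[str, str], str],   # (text, target_language) -> translated text
    query_model: Callable[[str], str],      # localized prompt -> model response
    judge_unsafe: Callable[[str], bool],    # response -> True if the safety refusal failed
) -> Dict[str, float]:
    """Return, per language, the fraction of harmful prompts that drew an unsafe response."""
    rates: Dict[str, float] = {}
    for lang in languages:
        results: List[Result] = []
        for prompt in prompts:
            localized = translate(prompt, lang)
            response = query_model(localized)
            results.append(Result(lang, localized, response, judge_unsafe(response)))
        rates[lang] = sum(r.unsafe for r in results) / len(results) if results else 0.0
    return rates

Comparing the per-language rates produced by such a loop against human annotations in a subset of languages is one way to check, as the abstract reports, whether the automated judgments align with human ones.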

Keywords

Large Language Models, LLM Red Teaming, Jailbreaking

Citation

Tang L, Bogahawatta N, Ginige Y, Xu J, Sun S, Ranathunga S, Seneviratne S. (2025). A Framework to Assess Multilingual Vulnerabilities of LLMs. In WWW '25 Companion: Companion Proceedings of the ACM Web Conference 2025 (pp. 1331-1335). New York, NY, United States: Association for Computing Machinery.

Creative Commons license

Except where otherwise noted, this item's license is described as (c) 2025 The Author/s