Browsing by Author "Peramuna S"
Now showing 1 - 2 of 2
- Item: SiTSE: Sinhala Text Simplification Dataset and Evaluation (Association for Computing Machinery, 2025-05-08)
  Authors: Ranathunga S; Sirithunga R; Rathnayake H; De Silva L; Aluthwala T; Peramuna S; Shekhar R; Zitouni I
  Abstract: Text Simplification is a task that has been minimally explored for low-resource languages. Consequently, there are only a few manually curated datasets. In this article, we present a human-curated sentence-level text simplification dataset for the Sinhala language. Our evaluation dataset contains 1,000 complex sentences and 3,000 corresponding simplified sentences produced by three different human annotators. We model the text simplification task as a zero-shot and zero-resource sequence-to-sequence (seq-seq) task on the multilingual language models mT5 and mBART. We exploit auxiliary data from related seq-seq tasks and explore the possibility of using intermediate task transfer learning (ITTL). Our analysis shows that ITTL outperforms the previously proposed zero-resource methods for text simplification. Our findings also highlight the challenges in evaluating text simplification systems and support the calls for improved metrics for measuring the quality of automated text simplification systems that would suit low-resource languages as well. Our code and data are publicly available: https://github.com/brainsharks-fyp17/Sinhala-Text-Simplification-Dataset-andEvaluation.
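The abstract describes a dataset layout of one complex sentence paired with three human-written simplifications, and notes that scoring system output against such multiple references is itself a challenge. As a minimal illustration (not the paper's metric, and using invented English stand-in sentences rather than real SiTSE data), the sketch below scores a candidate simplification against several references with a unigram-overlap F1 and keeps the best match; the helper names `unigram_f1` and `multi_ref_f1` are hypothetical:

```python
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    """F1 over unigram overlap between a candidate and one reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def multi_ref_f1(candidate: str, references: list[str]) -> float:
    """Score against several references (three per sentence, as in SiTSE)
    and keep the best, since any one reference is an acceptable target."""
    return max(unigram_f1(candidate, r) for r in references)

# Toy example mirroring the dataset layout: one complex sentence,
# three human simplifications (invented English placeholders).
refs = [
    "the rain made the game stop",
    "rain stopped the game",
    "the game stopped because of rain",
]
print(round(multi_ref_f1("rain stopped the game", refs), 2))  # → 1.0
```

Taking the maximum over references is one common convention for multi-reference overlap metrics; averaging is another, and the choice matters exactly because, as the abstract argues, simplification metrics for low-resource languages are still unsettled.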
- Item: Word embedding evaluation for Sinhala (European Language Resources Association, 2020-01-01)
  Authors: Lakmal D; Ranathunga S; Peramuna S; Herath I; Calzolari N; Béchet F; Blache P; Choukri K; Cieri C; Declerck T; Goggi S; Isahara H; Maegaard B; Mariani J; Mazo H; Moreno A; Odijk J; Piperidis S
  Abstract: This paper presents the first ever comprehensive evaluation of different types of word embeddings for the Sinhala language. Three standard word embedding models, namely Word2Vec (both Skip-gram and CBOW), FastText, and GloVe, are evaluated under two types of evaluation methods: intrinsic evaluation and extrinsic evaluation. Word analogy and word relatedness evaluations were performed for intrinsic evaluation, while sentiment analysis and part-of-speech (POS) tagging were conducted as the extrinsic evaluation tasks. Benchmark datasets used for intrinsic evaluations were carefully crafted considering specific linguistic features of Sinhala. In general, FastText word embeddings with 300 dimensions reported the best accuracies across all the evaluation tasks, while GloVe reported the lowest results.
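Word-relatedness evaluation, one of the intrinsic methods named above, compares the model's cosine similarities for word pairs against human relatedness ratings using a rank correlation. The sketch below illustrates the idea with tiny invented vectors standing in for trained Sinhala embeddings; the vectors, word pairs, and human scores are all assumptions for illustration, and Spearman correlation is computed by hand (no ties) to keep the example self-contained:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b) -> float:
    """Spearman rank correlation (no tie handling; enough for a toy check)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Invented 3-d vectors standing in for trained Sinhala embeddings.
emb = {
    "king":  np.array([0.9, 0.1, 0.0]),
    "queen": np.array([0.8, 0.2, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9]),
}
pairs = [("king", "queen"), ("king", "apple"), ("queen", "apple")]
human = [9.0, 1.5, 1.0]  # hypothetical human relatedness ratings
model = [cosine(emb[a], emb[b]) for a, b in pairs]
print(round(spearman(model, human), 2))
```

A higher correlation means the embedding space ranks word pairs more like human judges do; the paper runs this kind of comparison (plus word analogies) on benchmark datasets crafted for Sinhala-specific linguistic features.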