Browsing by Author "Fernando A"
Now showing 1 - 2 of 2
Item: A Social Assessment Framework to Derive a Social Score for Green Material Selection: A Case Study from the Sri Lankan Cement Industry
MDPI (Basel, Switzerland), 2024-08-02
Fernando A; Siriwardana C; Gunasekara C; Law DW; Zhang G; Gamage JCPH; Caggiano A

Assessing the sustainability of material-based products now encompasses social sustainability, a vital aspect that is often overlooked. Although existing frameworks provide a starting point, they rarely differentiate between assessment criteria when comparing products within a single material category, which has kept sustainability assessments focused on environmental and economic aspects. This study addresses this critical gap by pioneering a social assessment framework designed to help practitioners choose the most sustainable of the standard cement types used in the industry. Using the Fuzzy Analytic Hierarchy Process (FAHP) and a linear-scoring method, criteria weights were systematically assigned based on scoring by industry and academic experts. The findings highlight the importance of integrating social sustainability with environmental and economic factors in cement selection. Unlike traditional material selection, which primarily considers cost and performance, green material selection emphasizes the holistic impact of materials, including social factors. Variations in weighting decisions among experts reflect the influence of practical experience, research interests, and context. Functionality emerges as a crucial criterion. The ranking of cement types by social score places CEM II/B-M at the top, followed by CEM IV/A, CEM II/A-S, CEM II/A-V, CEM I, and CEM II/A-LL. The evolving nature of sustainability necessitates ongoing research to refine and expand existing frameworks for a more sustainable construction industry.
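The weighting-and-scoring step mentioned in the abstract can be pictured with a short sketch. The criteria, the fuzzy pairwise judgements and the per-cement expert scores below are invented placeholders, and the simplified defuzzify-then-normalise step stands in for the paper's full FAHP procedure; this is an illustration under those assumptions, not the study's implementation.

```python
# Sketch of FAHP-style criteria weighting followed by a linear social score.
# All numbers below are illustrative placeholders, not the study's data.
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three hypothetical criteria,
# e.g. functionality, worker health & safety, community impact.
pairwise = np.array([
    [[1, 1, 1],       [2, 3, 4],       [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],       [2, 3, 4]],
    [[1/6, 1/5, 1/4], [1/4, 1/3, 1/2], [1, 1, 1]],
])

# Fuzzy geometric mean per criterion, then a simple centroid defuzzification
# and normalisation to obtain crisp criteria weights.
geo_mean = np.prod(pairwise, axis=1) ** (1 / pairwise.shape[1])  # (l, m, u) per criterion
crisp = geo_mean.mean(axis=1)                                    # centroid of each triangle
weights = crisp / crisp.sum()

# Linear scoring: weighted sum of expert scores (0-10) per cement type.
expert_scores = {
    "CEM I":      [6.0, 5.5, 7.0],
    "CEM II/B-M": [8.0, 7.5, 8.5],
}
social_scores = {cement: float(np.dot(weights, s)) for cement, s in expert_scores.items()}
print(weights.round(3), social_scores)
```

A higher weighted sum indicates a cement type that performs better on the socially weighted criteria; the actual framework aggregates many more criteria and expert judgements than this toy example.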
Item: Linguistic entity masking to improve cross-lingual representation of multilingual language models for low-resource languages
Springer-Verlag London Ltd, 2025-07-19
Fernando A; Ranathunga S

Multilingual pre-trained language models (multiPLMs), trained with the Masked Language Modelling (MLM) objective, are commonly used for cross-lingual tasks such as bitext mining. However, their performance is still suboptimal for low-resource languages (LRLs). The language representation of a given multiPLM can be improved by further pre-training it, a process known as continual pre-training. Previous research has shown that continual pre-training with MLM and subsequently with Translation Language Modelling (TLM) improves the cross-lingual representation of multiPLMs. However, during masking, both MLM and TLM give equal weight to all tokens in the input sequence, irrespective of their linguistic properties. In this paper, we introduce a novel masking strategy, Linguistic Entity Masking (LEM), to be used in the continual pre-training step to further improve the cross-lingual representations of existing multiPLMs. In contrast to MLM and TLM, LEM limits masking to the linguistic entity types nouns, verbs and named entities, which hold higher prominence in a sentence. In addition, whereas MLM and TLM mask tokens at random, LEM masks only a single token within each linguistic entity span, thereby preserving more context. We evaluate the effectiveness of LEM on three downstream tasks, namely bitext mining, parallel data curation and code-mixed sentiment analysis, using three low-resource language pairs: English-Sinhala, English-Tamil, and Sinhala-Tamil. Experimental results show that a multiPLM continually pre-trained with LEM outperforms one continually pre-trained with MLM+TLM on all three tasks.
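A rough sketch of the masking rule the abstract describes: restrict mask candidates to noun, verb and named-entity spans and mask a single token per span. The use of spaCy, the specific span-selection rules and the absence of a masking budget are assumptions made for illustration; this is not the authors' implementation.

```python
# Illustrative Linguistic Entity Masking (LEM): mask only within noun, verb and
# named-entity spans, and mask a single token per span to preserve context.
# spaCy is used purely for illustration (requires: python -m spacy download en_core_web_sm);
# the paper's tagging pipeline, subword handling and masking budget may differ.
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def lem_mask(sentence: str, mask_token: str = "[MASK]") -> str:
    doc = nlp(sentence)

    # Candidate spans: named entities, plus noun/verb tokens not already inside an entity.
    spans = [(ent.start, ent.end) for ent in doc.ents]
    covered = {i for start, end in spans for i in range(start, end)}
    spans += [(tok.i, tok.i + 1) for tok in doc
              if tok.pos_ in {"NOUN", "PROPN", "VERB"} and tok.i not in covered]

    # Mask exactly one token inside each selected span; all other tokens stay intact.
    tokens = [tok.text for tok in doc]
    for start, end in spans:
        tokens[random.randrange(start, end)] = mask_token
    return " ".join(tokens)

print(lem_mask("Sinhala and Tamil are official languages of Sri Lanka."))
```

In a continual pre-training setup, sequences masked this way would replace the uniformly random masking of MLM/TLM, while the rest of the training loop stays unchanged.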
