Dataset and Baseline for Automatic Student Feedback Analysis

dc.contributor.author: Nilanga K
dc.contributor.author: Herath M
dc.contributor.author: Maduwantha H
dc.contributor.author: Ranathunga S
dc.contributor.editor: Calzolari N
dc.contributor.editor: Béchet F
dc.contributor.editor: Blache P
dc.contributor.editor: Choukri K
dc.contributor.editor: Cieri C
dc.contributor.editor: Declerck T
dc.contributor.editor: Goggi S
dc.contributor.editor: Isahara H
dc.contributor.editor: Maegaard B
dc.contributor.editor: Mariani J
dc.contributor.editor: Mazo H
dc.contributor.editor: Odijk J
dc.contributor.editor: Piperidis S
dc.coverage.spatial: Marseille, France
dc.date.accessioned: 2025-06-11T00:54:39Z
dc.date.available: 2025-06-11T00:54:39Z
dc.date.finish-date: 2022-06-25
dc.date.issued: 2022-01-01
dc.date.start-date: 2022-06-20
dc.description.abstract: In this paper, we present a student feedback corpus containing 3,000 instances of feedback written by university students. The dataset has been annotated for aspect terms, opinion terms, the polarities of the opinion terms towards the targeted aspects, and document-level opinion polarities. We developed a hierarchical taxonomy for aspect categorisation that covers many aspects of the teaching-learning process, and we annotated both implicit and explicit aspects using this taxonomy. The annotation methodology, the difficulties faced during annotation, and the details of the aspect term categorisation are discussed in detail. Using state-of-the-art techniques, we built baseline models for the following tasks: Target-oriented Opinion Word Extraction, Aspect-level Sentiment Analysis, and Document-level Sentiment Analysis. These models achieved F1 scores of 64%, 75%, and 86%, respectively, on these tasks. The results illustrate the reliability and usability of the corpus for a range of sentiment analysis tasks. (An illustrative sketch of these annotation layers follows the metadata listing below.)
dc.description.confidential: false
dc.format.pagination: 2042-2049
dc.identifier.citation: Nilanga K, Herath M, Maduwantha H, Ranathunga S. (2022). Dataset and Baseline for Automatic Student Feedback Analysis. In Calzolari N, Béchet F, Blache P, Choukri K, Cieri C, Declerck T, Goggi S, Isahara H, Maegaard B, Mariani J, Mazo H, Odijk J, Piperidis S (Eds.), 2022 Language Resources and Evaluation Conference, LREC 2022 (pp. 2042-2049). European Language Resources Association (ELRA).
dc.identifier.elements-type: c-conference-paper-in-proceedings
dc.identifier.isbn: 979-10-95546-72-6
dc.identifier.uri: https://mro.massey.ac.nz/handle/10179/73025
dc.publisher: European Language Resources Association (ELRA)
dc.publisher.uri: http://aclanthology.org/2022.lrec-1.219/
dc.source.journal: 2022 Language Resources and Evaluation Conference, LREC 2022
dc.source.name-of-conference: 13th Conference on Language Resources and Evaluation (LREC 2022)
dc.subject: Target-oriented Opinion Word Extraction
dc.subject: Aspect-level Sentiment Analysis
dc.subject: Document-level Sentiment Analysis
dc.subject: Pre-Trained Language Models (PLM)
dc.subject: Student Feedback
dc.title: Dataset and Baseline for Automatic Student Feedback Analysis
dc.type: conference
pubs.elements-id: 488638
pubs.organisational-group: Other
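
To make the annotation layers named in the abstract (aspect terms, opinion terms, aspect-level polarities, and a document-level polarity) easier to picture, the Python sketch below shows what a single annotated feedback instance could look like. It is purely illustrative: the field names, the aspect-category labels, and the example sentence are hypothetical and do not reflect the actual file format of the corpus.

# Hypothetical example only: field names, category labels, and the feedback
# sentence are invented for illustration and are not taken from the corpus.
example_instance = {
    "feedback": "The lecturer explains the material clearly, but the assignments are too long.",
    "document_polarity": "negative",  # document-level opinion polarity
    "aspect_annotations": [
        {
            "aspect_term": "lecturer",                  # explicit aspect term
            "aspect_category": "Teaching / Lecturer",   # node in a hierarchical taxonomy (illustrative)
            "opinion_term": "explains the material clearly",
            "polarity": "positive",                     # polarity of the opinion towards this aspect
        },
        {
            "aspect_term": "assignments",
            "aspect_category": "Assessment / Assignments",
            "opinion_term": "too long",
            "polarity": "negative",
        },
    ],
}

# Print the aspect-level annotations as (aspect, opinion, polarity) triples.
for ann in example_instance["aspect_annotations"]:
    print(ann["aspect_term"], "|", ann["opinion_term"], "|", ann["polarity"])

The three baseline tasks reported in the abstract map roughly onto this structure: Target-oriented Opinion Word Extraction recovers the opinion terms for given aspect terms, Aspect-level Sentiment Analysis predicts the polarity of each aspect, and Document-level Sentiment Analysis predicts the overall polarity of the feedback.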

Files

Original bundle

Name: 488638 PDF.pdf
Size: 233.38 KB
Format: Adobe Portable Document Format
Description: Published version.pdf

License bundle

Name: license.txt
Size: 9.22 KB
Format: Plain Text