Doctoral colloquium - Endre Hamerlik (6.11.2023)
Monday 6.11.2023 at 13:10, Lecture room I/9
By: Damas Gruska
Exploring Morphosyntactic Insights in Masked Language Models through Probing and Perturbations
Masked language models (MLMs) have become very popular in natural language processing thanks to their strong performance across a wide range of tasks. To better understand what MLMs learn internally, probing techniques are widely used: classifiers are trained to predict specific linguistic features from the hidden representations produced by the model.
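As a generic illustration of the probing idea (not the authors' actual setup), the sketch below trains a linear probe: synthetic Gaussian clusters stand in for MLM hidden states of two hypothetical part-of-speech classes, and a logistic-regression classifier learns to predict the label from the vectors alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for MLM hidden states: two POS classes
# (e.g. NOUN vs VERB) drawn from slightly shifted Gaussian clusters.
dim, n = 16, 200
nouns = rng.normal(loc=0.3, size=(n, dim))
verbs = rng.normal(loc=-0.3, size=(n, dim))
X = np.vstack([nouns, verbs])
y = np.array([1] * n + [0] * n)

# A linear probe: logistic regression trained by plain gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(label = 1)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# If the probe classifies well above chance, the representations
# carry information about the feature being probed.
acc = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {acc:.2f}")
```

In a real probing study the vectors would come from a pretrained MLM (e.g. per-token hidden states) and the labels from annotated corpora; the logic of the probe itself is unchanged.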
In our research, we focus on the morphosyntactic features of input texts. Our main question is whether Large Language Models (LLMs) trained solely on the MLM objective develop representations that capture properties such as part of speech and morphological categories.
To shed light on what these representations depend on, we introduce controlled conditions in which modified inputs are fed to the pretrained language model under study. The nature and impact of these modifications have led to intriguing findings, which we will present at the PhD colloquium.
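One simple family of controlled input modifications used in probing studies (shown here as a generic illustration, not necessarily the perturbations used in this work) is word-order shuffling, which preserves the lexical content of a sentence while destroying its syntax:

```python
import random

def shuffle_words(sentence: str, seed: int = 0) -> str:
    """Shuffle word order while keeping the lexical content intact.

    An illustrative controlled perturbation: comparing probe accuracy
    on original vs. shuffled inputs hints at how much the model's
    representations rely on syntactic structure.
    """
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

print(shuffle_words("the cat sat on the mat"))
```

Feeding both the original and the perturbed sentence to the same pretrained model, and probing both sets of hidden states, isolates the contribution of the perturbed property.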