Investigating Linguistic Abilities of LLMs for Native Language Identification
dc.contributor.author | Uluslu, Ahmet Yavuz
dc.contributor.author | Schneider, Gerold
dc.contributor.editor | Muñoz Sánchez, Ricardo
dc.contributor.editor | Alfter, David
dc.contributor.editor | Volodina, Elena
dc.contributor.editor | Kallas, Jelena
dc.coverage.spatial | Tallinn, Estonia
dc.date.accessioned | 2025-02-17T10:46:46Z
dc.date.available | 2025-02-17T10:46:46Z
dc.date.issued | 2025-03
dc.description.abstract | Large language models (LLMs) have achieved state-of-the-art results in native language identification (NLI). However, these models often depend on superficial features, such as cultural references and self-disclosed information in the document, rather than capturing the underlying linguistic structures. In this work, we assess the linguistic abilities of open-source LLMs by evaluating their NLI performance on content-independent features, such as POS n-grams, function words, and punctuation marks, and compare them against traditional machine learning approaches. Our experiments reveal that while the LLMs' initial performance on structural features (55.2% accuracy) falls far below their performance on full text (96.5%), fine-tuning substantially improves their capabilities, enabling state-of-the-art results with strong cross-domain generalization.
dc.identifier.uri | https://hdl.handle.net/10062/107172
dc.language.iso | en
dc.publisher | University of Tartu Library
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/
dc.title | Investigating Linguistic Abilities of LLMs for Native Language Identification
dc.type | Article |
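
The abstract describes restricting NLI to content-independent features (POS n-grams, function words, punctuation) and comparing against traditional machine learning. The sketch below illustrates what such a traditional baseline could look like; it is not the authors' pipeline, and the corpus, label names, and word lists are illustrative assumptions.

```python
# Minimal sketch of a content-independent NLI baseline: documents are reduced
# to POS tags, function words, and punctuation, then classified with a linear
# model over n-grams. Requires nltk data: 'punkt', 'averaged_perceptron_tagger'.
from nltk import pos_tag, word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative (truncated) function-word and punctuation inventories.
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "to", "and", "that", "for", "with", "on"}
PUNCTUATION = set(".,;:!?-'\"()")

def content_independent_view(text: str) -> str:
    """Replace content words with their POS tag; keep function words and punctuation."""
    out = []
    for word, tag in pos_tag(word_tokenize(text)):
        lower = word.lower()
        if lower in FUNCTION_WORDS or word in PUNCTUATION:
            out.append(lower)   # keep the surface form of structural tokens
        else:
            out.append(tag)     # abstract away lexical (content) information
    return " ".join(out)

# Hypothetical toy data; a real experiment would use an NLI corpus such as TOEFL11.
texts = ["I have went to the store yesterday .", "She is liking very much the music ."]
labels = ["L1_German", "L1_Spanish"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 3)),  # POS/function-word n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit([content_independent_view(t) for t in texts], labels)
print(clf.predict([content_independent_view("Yesterday I have went to school .")]))
```

Because the classifier never sees content words, any signal it picks up comes from grammatical structure rather than topic or cultural references, which is the comparison point the abstract draws against full-text LLM performance.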