Browse by Author "Bassignana, Elisa"
Now showing 1 - 3 of 3
Item: MorSeD: Morphological Segmentation of Danish and its Effect on Language Modeling (University of Tartu Library, 2025-03) Goot, Rob van der; Jensen, Anette; Schledermann, Emil Allerslev; Kildeberg, Mikkel Wildner; Larsen, Nicolaj; Zhang, Mike; Bassignana, Elisa; Johansson, Richard; Stymne, Sara
Current language models (LMs) mostly use subwords as input units, derived from statistical co-occurrences of characters. Relatedly, previous work has shown that modeling morphemes can improve performance for Natural Language Processing (NLP) models. However, morphemes are challenging to obtain, as annotated data is unavailable for most languages. In this work, we release a wide-coverage Danish morphological segmentation evaluation set. We evaluate a range of unsupervised token segmenters and measure the downstream effect of using morphemes as input units for transformer-based LMs. Our results show that popular subword algorithms perform poorly on this task, scoring at most an F1 of 57.6, compared to 68.0 for an unsupervised morphological segmenter (Morfessor). Furthermore, we evaluate a range of segmenters on the task of language modeling.

Item: MULTI-CROSSRE: A Multi-Lingual Multi-Domain Dataset for Relation Extraction (University of Tartu Library, 2023-05) Bassignana, Elisa; Ginter, Filip; Pyysalo, Sampo; Goot, Rob van der; Plank, Barbara

Item: SnakModel: Lessons Learned from Training an Open Danish Large Language Model (University of Tartu Library, 2025-03) Zhang, Mike; Müller-Eberstein, Max; Bassignana, Elisa; Goot, Rob van der; Johansson, Richard; Stymne, Sara
We present SnakModel, a Danish large language model (LLM) based on Llama2-7B, which we continuously pre-train on 13.6B Danish words and further tune on 3.7M Danish instructions. As best practices for creating LLMs for smaller language communities have yet to be established, we examine the effects of early modeling and training decisions on downstream performance throughout the entire training pipeline, including (1) the creation of a strictly curated corpus of Danish text from diverse sources; (2) the language modeling and instruction-tuning training process itself, including the analysis of intermediate training dynamics and ablations across different hyperparameters; and (3) an evaluation on eight language- and culture-specific tasks. Across these experiments, SnakModel achieves the highest overall performance, outperforming multiple contemporary Llama2-7B-based models. By making SnakModel, the majority of our pre-training corpus, and the associated code available under open licenses, we hope to foster further research and development in Danish Natural Language Processing, and to establish training guidelines for languages with similar resource constraints.