Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings

dc.contributor.authorSchuster, Carolin M.
dc.contributor.authorRoman, Maria-Alexandra
dc.contributor.authorGhatiwala, Shashwat
dc.contributor.authorGroh, Georg
dc.contributor.editorJohansson, Richard
dc.contributor.editorStymne, Sara
dc.coverage.spatialTallinn, Estonia
dc.date.accessioned2025-02-19T08:24:36Z
dc.date.available2025-02-19T08:24:36Z
dc.date.issued2025-03
dc.description.abstractLarge language models (LLMs) are the foundation of the current successes of artificial intelligence (AI); however, they are unavoidably biased. To effectively communicate the risks and encourage mitigation efforts, these models need adequate and intuitive descriptions of their discriminatory properties, appropriate for all audiences of AI. We suggest bias profiles with respect to stereotype dimensions based on dictionaries from social psychology research. Along these dimensions, we investigate gender bias in contextual embeddings across contexts and layers, and generate stereotype profiles for twelve different LLMs, demonstrating their intuitiveness and their use for exposing and visualizing bias.
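The abstract describes profiling bias along stereotype dimensions derived from dictionary word lists. A minimal sketch of that idea, under stated assumptions: the dictionary words and embeddings below are illustrative placeholders (random vectors), not the paper's actual social-psychology dictionaries or any LLM's contextual embeddings; a real profile would substitute layer-wise contextual embeddings from each model.

```python
import numpy as np

# Hypothetical sketch: score gender bias along one stereotype
# dimension (warmth) by projecting an embedding difference onto a
# dictionary-defined axis. All vectors here are random stand-ins.
rng = np.random.default_rng(0)
DIM = 16
vocab = ["she", "he", "warm", "friendly", "kind", "cold", "hostile", "cruel"]
emb = {w: rng.normal(size=DIM) for w in vocab}  # placeholder embeddings

def mean_vec(words):
    # Centroid of the embeddings for a dictionary word list.
    return np.mean([emb[w] for w in words], axis=0)

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stereotype axis: high-warmth pole minus low-warmth pole.
warmth_axis = mean_vec(["warm", "friendly", "kind"]) - mean_vec(["cold", "hostile", "cruel"])

# Gender direction: difference between the two target embeddings.
gender_dir = emb["she"] - emb["he"]

# One entry of a bias profile: alignment of the gender direction
# with the warmth axis (repeat per dimension, context, and layer).
warmth_bias = cosine(gender_dir, warmth_axis)
print(f"warmth bias: {warmth_bias:+.3f}")
```

Repeating this score over several stereotype dimensions yields a per-model profile vector that can be visualized directly.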
dc.identifier.urihttps://hdl.handle.net/10062/107258
dc.language.isoen
dc.publisherUniversity of Tartu Library
dc.relation.ispartofseriesNEALT Proceedings Series, No. 57
dc.rightsAttribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.urihttps://creativecommons.org/licenses/by-nc-nd/4.0/
dc.titleProfiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings
dc.typeArticle

Files

Original bundle

Name:
2025_nodalida_1_65.pdf
Size:
368.17 KB
Format:
Adobe Portable Document Format