I conduct research on Natural Language Processing under the supervision of Richard Johansson in the Data Science and AI division at Chalmers. I am interested in language model controllability, interpretability, and prediction provenance. My research investigates factual knowledge in language models and methods for adding knowledge to a model.
Language Model Re-rankers are Steered by Lexical Similarities
Lovisa Hagström, Ercong Nie, Ruben Halifa, Helmut Schmid, Richard Johansson, Alexander Junge
arXiv
Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of Language Models for Fact Completion
Denitsa Saynova, Lovisa Hagström, Moa Johansson, Richard Johansson, Marco Kuhlmann
arXiv
A Reality Check on Context Utilisation for Retrieval-Augmented Generation
Lovisa Hagström, Sara Vera Marjanović, Haeun Yu, Arnav Arora, Christina Lioma, Maria Maistro, Pepa Atanasova, Isabelle Augenstein
arXiv
The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models
Lovisa Hagström, Denitsa Saynova, Tobias Norlund, Moa Johansson, Richard Johansson
EMNLP
A Picture is Worth a Thousand Words: Natural Language Processing in Context
Lovisa Hagström
Licentiate thesis at Chalmers University of Technology
How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?
Lovisa Hagström, Richard Johansson
COLING
What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge
Lovisa Hagström, Richard Johansson
ACL Student Research Workshop
Can We Use Small Models to Investigate Multimodal Fusion Methods?
Lovisa Hagström, Tobias Norlund, Richard Johansson
CLASP Conference on (Dis)embodiment
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
Tobias Norlund, Lovisa Hagström, Richard Johansson
BlackboxNLP Workshop
Knowledge Distillation for Swedish NER models: A Search for Performance and Efficiency
Lovisa Hagström, Richard Johansson
Nordic Conference on Computational Linguistics (NoDaLiDa)
I supervise master's theses at Chalmers and would be happy to discuss potential thesis projects with students or organizations interested in NLP. Simply send me an email!
I also work as a teaching assistant at Chalmers for the courses Applied Mathematical Thinking, Algorithms for Machine Learning and Inference, and Applied Machine Learning.
Title: Language Models and Knowledge Representations.
When: September 2025.
Where: Room TBA, Chalmers University of Technology, Gothenburg, Sweden.
My doctoral studies have mainly focused on the intersection between language models (LMs) and knowledge representations. Given that LMs are increasingly used as simple interfaces to factual knowledge, we need models that are not only accurate, but also factually consistent, updatable and, ultimately, reliable. The body of work that will be discussed during my PhD defense touches on these topics in different ways: