Anyone who meets Laura Crompton quickly realises that she is more interested in the impact that AI has on people than in its technical possibilities. Her research begins where algorithms meet everyday life. How do they shape judgements? What role do they play in decisions? And why do people react so differently to these technologies?
Two phenomena are at the centre of her research: algorithm appreciation, the excessive trust in machine outputs, and algorithm aversion, their spontaneous rejection. "AI doesn't invent anything new," she says. "It relies on data and therefore also on social values, some of which are outdated." A well-known example from the USA, in which an algorithm disadvantaged people of colour, is, for her, symptomatic: "If the data is distorted, the AI will also react in a distorted way."
Crompton's scientific roots lie in philosophy. During her doctorate and postdoctoral phase at the University of Vienna, she researched how people make normative judgements, and how these processes change when algorithmic systems are involved. In medicine, law and social work, she observed how subtle the influence of automated recommendations can be.
Further positions took her to the Technical University of Munich and the Ludwig Maximilian University of Munich, where she helped organise projects on ethics and technology. She then moved into applied practice: as an AI ethics expert at byte - Bayerische Agentur für Digitales, she advised Bavarian ministries and their subordinate departments on how AI can and should be used in public administration in a meaningful and value-creating way.
Today, Crompton wants students not only to understand AI technically, but also to reflect on it socially: What assumptions are embedded in the data? What norms do models reproduce? And for whom do algorithmic systems work, and for whom do they fail? "To design AI responsibly, we first need to understand how we humans make decisions," she emphasises.
At THI, she aims to explore these questions further, critically and scientifically, to steer technological developments in a way that benefits people rather than marginalising them. With Laura Crompton, a chapter begins at the university in which AI is not only calculated, but also scrutinised ever more closely, and in which the real question is: Who determines the future, us or the algorithm?













