Dr. Kevin Baum

Will AI ruin everything? Probably. I'm a philosopher and computer scientist asking: but how exactly? And what can we do about it?

Hi, I’m Kevin Baum, a passionate ethicist (Dr. phil.) and computer scientist (M.Sc.) at the forefront of AI research. As a senior researcher at the German Research Center for Artificial Intelligence (DFKI), I am dedicated to exploring the many technical, philosophical, and especially ethical challenges that arise from the interplay between technology and society, immersing myself in the theory and practice of Responsible AI, AI Ethics, and AI Alignment.

Since October 2024, I have been leading my own research group, Responsible AI and Machine Ethics (RAIME). From December 2023 until recently, I headed the Center for European Research in Trusted AI (CERTAIN) and continue to serve on its executive board. Additionally, I am deputy head and lab manager of the Neuro-Mechanistic Modeling (NMM) research department.

Research Interests:

🔍🤖 My interdisciplinary work explores the complex interplay between AI’s perspicuity attributes (such as transparency and explainability) and societal desiderata, including closing gaps in responsibility, enabling effective human oversight, and detecting algorithmic unfairness. Beyond my role at DFKI, I engage with these challenges through two major initiatives: I am a member of the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” within the Center for Perspicuous Computing (CPEC), and I remain closely involved with the interdisciplinary project Explainable Intelligent Systems (EIS), funded by the Volkswagen Foundation. Additionally, broader questions of hybrid intelligence, specifically how experts can effectively infuse their knowledge (including, but not restricted to, moral expertise) into reinforcement learning (RL) systems and derive insights from RL-driven behaviors, are central to our project Multi-level Abstractions on Causal Modelling for Enhanced Reinforcement Learning (MAC-MERLin).

🖥️📊 My research takes a philosophically informed approach to topics within computer science, focusing on algorithmic fairness and the pursuit of effective human oversight. This work builds on methodologies from explainable artificial intelligence (XAI) and the emerging field of mechanistic interpretability. At its core, this research addresses a broader question: how can normative requirements and technical methods and procedures be meaningfully integrated? Answering it demands conceptual, normative, and empirical evaluation. Together with my colleagues at CERTAIN, I am particularly interested in exploring how the properties of AI systems commonly grouped under the term “Trustworthy AI” can be operationalized and certified, and how these efforts contribute to fostering appropriate trust in AI and cultivating a healthy trust infrastructure.

🤖📜 My research also delves into machine ethics, focusing on integrating moral considerations into AI agents’ decision-making frameworks. A key aspect of this work involves embedding normative reasoning into reinforcement learning architectures to develop AI agents that are sensitive and responsive to normative reasons. More recently, I have returned to the field of AI alignment, exploring ways to ensure that AI systems align with human values and ethical principles by means of practical reasoning and justification.

🤔⚖️ My work also extends to the broader field of AI ethics, where, for instance, I contribute to developing ethical guidelines for responsible AI design and deployment. A key focus of my current research is how we should make decisions under normative, and particularly moral, uncertainty in both the design and deployment of AI systems. To navigate these challenges, I draw on insights from practical reasoning to facilitate decisions that are as reasonable and defensible as possible. Central to my approach is the concept of justification: I place it at the core of application-oriented AI ethics research, emphasizing its importance for accountability and ethical rigor.

👨‍🏫📘 Besides research, I am deeply involved in ethics education for computer science students and professionals, continuously exploring more effective teaching methodologies in this area. Notably, our course Ethics for Nerds was awarded the “Hochschulperle des Jahres” by the German Stifterverband in 2019, a prestigious recognition for innovative approaches in higher education.

What else I do:

In addition to my academic and research activities, I have gained practical experience in the ethical assessment of research projects. This includes serving as a member and deputy chairman of the Commission for the Ethics of Security-Relevant Research (“Kommission für die Ethik sicherheitsrelevanter Forschung”, KEF) at Saarland University (UdS) from 2020 to 2022, as a member of the Ethical Review Board (ERB) of the Saarbrücken Informatics Campus (SIC), and as an ethical advisor both for the DFKI Ethics Team and for an EU Horizon 2020 project (undisclosed). I am also a co-founder of the non-profit association Algoright, an interdisciplinary think tank for responsible digitalization focusing on science communication and ethical consulting. Additionally, I served as a permanent expert member for digital ethics in the Saarland State Parliament’s Enquete Commission on Digitalization in Saarland.

Beyond these roles, I actively engage in consultancy and keynote speaking, whether in my professional capacity, as a member of Algoright, or as a freelancer, sharing insights on topics related to ethics, AI, and digitalization.

You really want to know more?

You can find all my publications on Google Scholar, DBLP, and my PhilPeople profile. For a detailed overview of my academic and professional journey, feel free to browse my curriculum vitae.