Kevin Baum

Ethicist, Philosopher, and Computer Scientist | Postdoctoral Researcher in Ethical, Responsible, and Trustworthy AI at the German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany | Head of the Center for European Research in Trusted AI (CERTAIN) | Lead of the Responsible AI and Machine Ethics (RAIME) research group | Deputy Head of the Neuro-Mechanistic Modeling (NMM) research department.

Hey there! I’m Kevin Baum, a passionate ethicist and computer scientist at the forefront of exploring the ethical dimensions of AI.

At DFKI, I lead initiatives to ensure AI development aligns with societal values and ethical standards. In January 2023, I took on the role of deputy head of the Neuro-Mechanistic Modeling research department at the DFKI. By December 2023, I became head of the Center for European Research in Trusted AI (CERTAIN). Since October 2024, I have been leading my own research group, Responsible AI and Machine Ethics (RAIME).

Research Interests:

🔍🤖 My interdisciplinary work explores the complex interplay between AI’s perspicuity attributes—such as transparency and explainability—and societal desiderata, including addressing gaps in responsibility, enabling effective human oversight, and detecting algorithmic unfairness. Beyond my role at DFKI, I engage with these challenges through two major initiatives. I am a member of the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” within the Center for Perspicuous Computing (CPEC), and I remain closely involved with the interdisciplinary project Explainable Intelligent Systems (EIS), funded by the Volkswagen Foundation. Additionally, broader questions of hybrid intelligence—specifically, how experts can effectively infuse their knowledge (including, but not restricted to, moral expertise) into reinforcement learning (RL) systems and derive insights from RL-driven behaviors—are central to our project, Multi-level Abstractions on Causal Modelling for Enhanced Reinforcement Learning (MAC-MERLin).

🖥️📊 My research takes a philosophically informed approach to topics within computer science, focusing on algorithmic fairness and the pursuit of effective human oversight. This work builds on methodologies from explainable artificial intelligence (XAI) and the emerging field of mechanistic interpretability. At its core, this research addresses a broader question: how can normative requirements and technical methods and procedures be meaningfully integrated? This challenge demands conceptual, normative, and empirical evaluation. Together with my colleagues at CERTAIN, I am particularly interested in exploring how the properties of AI systems commonly grouped under the term “Trustworthy AI” can be operationalized and certified, and how these efforts contribute to fostering appropriate trust in AI and cultivating a healthy trust infrastructure.

🤖📜 My research also delves into machine ethics, focusing on integrating moral considerations into AI agents’ decision-making frameworks. A key aspect of this work involves embedding normative reasoning into reinforcement learning architectures to develop AI agents that are sensitive and responsive to normative reasons. More recently, I have revisited the field of AI alignment, exploring ways to ensure AI systems align with human values and ethical principles by means of practical reasoning and justification.

🤔⚖️ My work also extends to the broader field of AI ethics, where, for instance, I contribute to developing ethical guidelines for responsible AI design and deployment. A key focus of my current research is addressing how we should make decisions under normative—and particularly moral—uncertainty in both the design and deployment of AI systems. To navigate these challenges, I draw on insights from practical reasoning to facilitate decisions that are as reasonable and defensible as possible. Central to my approach is placing the concept of justifications at the core of application-oriented AI ethics research, emphasizing their importance in fostering accountability and ethical rigor.

👨‍🏫📘 Besides research, I am deeply involved in ethical education for computer science students and professionals, continuously exploring ways to enhance effective teaching methodologies in this area. Notably, our course Ethics for Nerds was awarded the “Hochschulperle des Jahres” by the German Stifterverband in 2019, a prestigious recognition for innovative approaches in higher education.

What else I do:

In addition to my academic and research activities, I have gained practical experience in the ethical assessment of research projects. This includes serving as a member and deputy chairman of the Commission for the Ethics of Security-Relevant Research (“Kommission für die Ethik sicherheitsrelevanter Forschung”, KEF) at Saarland University (UdS) from 2020 to 2022, as a member of the Ethical Review Board (ERB) of the Saarbrücken Informatics Campus (SIC), and as an ethical advisor both for the DFKI Ethics Team and for an EU Horizon 2020 project (undisclosed). I am also a co-founder of the non-profit association Algoright, an interdisciplinary think tank for responsible digitalization focusing on science communication and ethical consulting. Additionally, I served as a permanent expert member for digital ethics in the Saarland State Parliament’s Enquete Commission on Digitalization in Saarland.

Beyond these roles, I actively engage in consultancy and keynote speaking, whether in my professional capacity, as a member of Algoright, or as a freelancer, sharing insights on topics related to ethics, AI, and digitalization.

You really want to know more?

You can find all my publications on Google Scholar, DBLP, and my PhilPeople profile. For a detailed overview of my academic and professional journey, feel free to browse my curriculum vitae.