Kevin Baum
Ethicist, Philosopher, Computer Scientist | Post-Doctoral Researcher in Ethical, Responsible, and Trustworthy AI | Head of CERTAIN and Deputy Head at NMM, DFKI, Saarbrücken, Germany.
Hey there! I’m Kevin Baum, a passionate ethicist and computer scientist at the forefront of exploring the ethical dimensions of AI.
At DFKI, I lead efforts to ensure AI development aligns with societal standards.
In January 2023, I became deputy head of the research department for Neuro-Mechanistic Modeling at the German Research Center for Artificial Intelligence (DFKI). In December 2023, I became head of the Center for European Research in Trusted AI (CERTAIN). Since October 2024, I have been leading my own research group on Responsible AI and Machine Ethics (RAIME).
Research Interests:
🔍🤖 Much of my interdisciplinary work navigates the intricate relationship between AI’s perspicuity attributes — such as transparency and explainability — and societal desiderata, including closing responsibility gaps, enabling effective human oversight, and enabling the detection of algorithmic unfairness. In addition to my work at DFKI, I pursue these questions primarily within two projects I am associated with: on the one hand, the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” within the Center for Perspicuous Computing (CPEC), and on the other hand, the highly interdisciplinary project Explainable Intelligent Systems (EIS), which is funded by the Volkswagen Foundation. More general challenges of hybrid intelligence — especially how experts can effectively inject their knowledge into reinforcement learning (RL) and, in turn, draw conclusions from RL-based behaviour — are the subject of my new project Multi-level Abstractions on Causal Modelling for Enhanced Reinforcement Learning (MAC-MERLin).
🖥️📊 More specifically, I have begun conducting philosophically informed research on the computer science side with respect to algorithmic fairness and the quest for effective human oversight, building on methods from explainable artificial intelligence (XAI) and the emerging field of mechanistic interpretability. In a sense, this is part of a broader research question revolving around the interlocking of normative requirements with technical methods and procedures, which ultimately demands conceptual, normative, and empirical evaluation. In particular, together with my colleagues at CERTAIN, I am researching how those AI system properties commonly subsumed under the label “Trustworthy AI” are operationalized and certified, and how this can contribute to appropriate trust in AI and a healthy trust infrastructure.
🤖📜 My research extends into machine ethics, i.e., the quest to integrate moral considerations into AI agents’ decision-making frameworks. A significant area of inquiry is embedding normative reasoning into reinforcement learning architectures, aiming to create AI agents that learn sensitivity to normative reasons. Recently, I have revisited AI alignment research.
🤔⚖️ I also engage with the field of AI ethics more generally and contribute to the formulation of ethical guidelines for AI development. In doing so, I am currently increasingly addressing the question of how we should make decisions under normative — and especially moral — uncertainty, with regard to both design and deployment decisions. To this end, I try to make productive use of considerations from the field of practical reasoning in order to arrive at decisions that are as reasonable and defensible as possible, placing the concept of justification at the methodological centre of application-oriented AI ethics research.
👨🏫📘 Further, I explore effective methodologies for the ethical education of computer science students and professionals. Notably, our course Ethics for Nerds won the “Hochschulperle des Jahres” award from the German Stifterverband in 2019, which recognizes innovative teaching in higher education.
What else I do:
Besides all that, I have gained practical experience in the ethical assessment of research projects: as a member and deputy chairman of the Commission for the Ethics of Security-Relevant Research (“Kommission für die Ethik sicherheitsrelevanter Forschung”, KEF) at Saarland University (UdS) from 2020 to 2022, as a member of the Ethical Review Board (ERB) of the Saarbrücken Informatics Campus (SIC), and as an ethical advisor for the DFKI Ethics Team and for an (undisclosed) EU Horizon 2020 project. I am also a co-founder of the non-profit association Algoright, an interdisciplinary think tank for good digitalization, which is primarily dedicated to science communication and ethical consulting. In addition, I served as a permanent expert member for digital ethics on the Saarland State Parliament’s Enquete Commission on Digitalization in Saarland.
You really want to know more?
All my publications can be found on Google Scholar, on DBLP, and on my PhilPeople profile. For a comprehensive overview of my academic and professional journey, see my curriculum vitae.