Hey there! I’m Kevin Baum, a passionate ethicist (M.A.) and computer scientist (M.Sc.) at the forefront of exploring the ethical dimensions of AI. At DFKI, I lead efforts to ensure AI development aligns with societal standards.
Since January 2023 I have been the deputy head of the research department on Neuro-Mechanistic Modeling at the German Research Center for Artificial Intelligence (DFKI), and since December 2023 I have been the head of the Center for European Research in Trusted AI (CERTAIN).
🔍🤖 Much of my interdisciplinary work navigates the intricate relationship between AI’s perspicuity attributes — such as transparency and explainability — and societal desiderata, including closing responsibility gaps, enabling effective human oversight, and making algorithmic unfairness detectable. In addition to my work at DFKI, I pursue these questions primarily within two projects I am associated with: the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” within the Center for Perspicuous Computing (CPEC), and the highly interdisciplinary project Explainable Intelligent Systems (EIS), which is funded by the Volkswagen Foundation.
🖥️📊 More recently, I have begun research on the computer science side of algorithmic fairness and the quest for explainable artificial intelligence (XAI), with a keen interest in the emerging field of mechanistic interpretability. In particular, together with my colleagues at CERTAIN, I am researching how the AI system properties commonly subsumed under the label “Trustworthy AI” are operationalized and certified, and how this can contribute to appropriate trust in AI and a healthy trust infrastructure.
🤖📜 My research extends into machine ethics, i.e., the quest to integrate moral considerations into AI agents’ decision-making frameworks. A significant area of inquiry is embedding normative reasoning into reinforcement learning architectures, aiming to create AI agents that learn sensitivity to normative reasons. Recently, I have also revisited AI alignment research.
👨🏫📘 I also engage with AI ethics understood as a subfield of applied ethics more generally. I contribute to the formulation of ethical guidelines for AI development and explore effective methodologies for the ethical education of computer science students and professionals. Notably, our course Ethics for Nerds won the “Hochschulperle des Jahres” award from the German Stifterverband in 2019, recognizing innovative teaching in higher education.
What else I do:
Besides all that, I have gained practical experience in the ethical assessment of research projects: as a member and deputy chairman of the Commission for the Ethics of Security-Relevant Research (“Kommission für die Ethik sicherheitsrelevanter Forschung”, KEF) at Saarland University (UdS) from 2020 to 2022, as a member of the Ethical Review Board (ERB) of the Saarbrücken Informatics Campus (SIC), and as an ethical advisor both within the DFKI Ethics Team and for an (undisclosed) EU Horizon 2020 project. I am also a co-founder of the non-profit association Algoright, an interdisciplinary think tank for good digitalization, which is primarily dedicated to science communication and ethical consulting. In addition, I served as a permanent expert member for digital ethics on the Saarland State Parliament’s Enquete Commission on Digitalization in Saarland.
You really want to know more?