I'm Erik, a 28-year-old master's student in AI and a researcher in the Cognitive Systems group at the University of Bamberg. My interests are interpretability and explainable AI (XAI). In my master's thesis, I explore the latent representations of neural networks, aiming to uncover meaningful, interpretable features that can be manipulated and controlled, improving the transparency and fairness of AI systems.
- 🔬 Mechanistic Interpretability: Understanding how neural networks represent knowledge and how those representations can be controlled to improve model transparency.
- 🖌️ Generative Models: Investigating the use of latent space manipulation for steering generative models, with applications to fairness and bias mitigation.
- 🧠 Sparse Autoencoders and Variational Autoencoders (VAEs): Using sparse and variational autoencoders to induce monosemantic features and simplify latent space manipulation (a minimal sketch follows this list).
- 🩺 AI in Medical Imaging: Applying XAI to medical domains to address bias and underrepresentation in AI-generated datasets.
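For the curious, here is a minimal PyTorch sketch of the sparse-autoencoder idea (illustrative only, not my thesis code): an L1 penalty on an overcomplete latent layer pushes each unit toward responding to a single feature, which is what makes the resulting codes easier to interpret and steer. The dimensions and `l1_coef` below are placeholder values.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder whose latent activations are
    driven toward sparsity by an L1 penalty on the codes."""

    def __init__(self, d_input: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_input, d_latent)
        self.decoder = nn.Linear(d_latent, d_input)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))  # non-negative codes, most near zero
        return self.decoder(z), z

# Placeholder sizes: d_input would match the activations being studied.
model = SparseAutoencoder(d_input=512, d_latent=4096)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_coef = 1e-3  # sparsity strength, tuned in practice

x = torch.randn(64, 512)  # stand-in for a batch of network activations
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x) + l1_coef * z.abs().mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```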
In an age where AI is transforming industries and, more importantly, society, it is essential that these systems remain ethical, transparent, and fair. I want to do my part in making that happen.
Feel free to reach out - I'm always interested in collaborating or just exchanging ideas! 🤓