I’m Marco Molinari. I live in London, where I take apart AIs.

I work at the intersection of data, technology, and reasoning, with a long-standing interest in building models that generalise, remain interpretable, and behave reliably beyond narrow training settings. My background spans competitive programming, AI research, and data science, and I am currently studying at the London School of Economics, where my work focuses on principled and robust machine learning.

My research interests centre on three themes: generalisation, discovery, and agency. I am particularly interested in domain generalisation, robustness, uncertainty quantification, and interpretability, with the goal of understanding not just when models fail, but why. I am also motivated by discovery in scientific, mathematical, and medical settings, where interpretable learning systems can support insight generation rather than opaque optimisation.

In parallel, I work on agentic AI, especially at the pretraining stage of large language models, with an emphasis on controllability and interpretability. I lead lseai.org, an AI research lab I founded to explore these questions collaboratively. Looking ahead, my goal is to develop AI systems that are robust, interpretable, and capable of meaningful exploration, bridging theory, research, and real-world impact.