
Connect with me
About Me
I am a computer scientist and expert in AI safety, working on the evaluation, robustness, and security of large-scale AI systems, including large language models. I develop technical methods and evaluation frameworks to identify and mitigate high-risk behaviors such as jailbreaks, harmful outputs, and unsafe system behavior, with the goal of enabling the safe and trustworthy deployment of AI in real-world settings. I am currently an Assistant Professor in the Department of Public Health and Health Sciences and Technical Lead for the Responsible AI Practice at Northeastern University. I am also the founder and director of MeronymLabs, a research group focused on AI safety and evaluation, and the co-founder of MoiRAI Consultancy.
Beyond my core research, I work across academic, industry, public-sector, and not-for-profit settings to generate evidence that helps non-technical stakeholders and policymakers make informed decisions that reduce algorithmic harm. I am currently a Visiting Scientist at MaineHealth and the University of Southampton, a Faculty Fellow at IHESJR, and a Scientific Expert Advisor at Meta on AI safety.
I hold a PhD in Computer Science from the University of Hull (UK). During my doctoral training, I conducted research on machine and deep learning methods for analyzing complex real-world text and interned at IBM Research (UK), a collaboration I continued throughout my PhD and postdoctoral work. I completed my postdoctoral training at the University of Manchester's National Centre for Text Mining (NaCTeM) and in the Department of Computer Science, where I worked on natural language processing in health-related contexts. Prior to my current role, I was a Research Scientist at the Institute for Experiential AI (EAI), working primarily on the development and evaluation of AI methods in applied health contexts.