I am a second-year CS PhD student at the Khoury College of Computer Sciences at Northeastern University in Boston, advised by Prof. Mai ElSherief.
Before starting my PhD, I was a Software Engineer with the Selection Monitoring and Catalog Systems organization at Amazon in Seattle. Prior to that, I was an MS CS student at Georgia Tech, where I was advised by Prof. Srijan Kumar in the CLAWS Lab. I also dabble as a research mentor with SimPPL.
I am interested in designing, evaluating, and aligning AI systems to maximize social benefit while minimizing risks. My work sits at the intersection of natural language processing (NLP), AI alignment, AI safety, and the study of online communities, with a focus on how AI can be more empathetic, fair, and socially aware.
I study how LLMs can be made safer and more aligned with human values. My work examines how personas and context shape model empathy and behavior, across both explicit identities such as culture, age, and gender, and implicit cues such as dialect. I'm currently exploring how interjections can steer models toward safer, more aligned outputs with Geodesic Research.
I focus on aligning AI systems with human values, with an emphasis on emotional safety and empathy. My recent EMNLP paper shows how user context drives variation in model empathy. I'm developing methods to define what constitutes emotionally safe behavior, to narrow the gap between model behavior and that definition, and to ensure model responses are consistent, empathetic, and aligned with human expectations.
In the past, I explored how AI can support healthier online communities by analyzing misinformation, reasoning about hate speech, and identifying dog whistles. My goal is to build tools that make digital spaces safer, more inclusive, and socially aware.
Malik, Ananya, Nazanin Sabri, Melissa Karnaze, and Mai ElSherief. Are LLMs Empathetic to All? Investigating the Influence of Multi-Demographic Personas on a Model's Empathy. 📄 Paper Link
Malik, Ananya, Kartik Sharma, Lynette Hui Xian Ng, and Shaily Bhatt. Who Speaks Matters: Analysing the Influence of the Speaker’s Ethnicity on Hate Classification. 📄 Paper Link
Malik, Ananya. Evaluating Large Language Models through Gender and Racial Stereotypes. 📄 Paper Link
Parab, Amogh, Ananya Malik, Arish Damania, and Arnav Parekhji. Successive Image Generation from a Single Sentence. 📄 Paper Link
A. Malik, Y. Javeri, M. Shah, and R. Mangrulkar. Impact Analysis of COVID-19 News Headlines on Global Economy. In Cyber-Physical Systems for COVID-19, Elsevier. 📄 Paper Link
A. Malik. Survey paper on applications of generative adversarial networks in the field of social media. 📄 Paper Link
TA for CS 4100: Foundations of AI (Spring 2025) with Prof. Chris Amato
Slides of my lecture on Advanced Topics in AI
Presentation and slides at the SafeGenAI workshop at NeurIPS on Who Speaks Matters: Analysing the Influence of the Speaker’s Ethnicity on Hate Classification