CS PhD Student, Northeastern University

Updates

  • [August 2025] Two of my papers have been accepted to EMNLP Findings 🎉.
  • [July 2025] Selected as a fellow/mentee for MARS.
  • [December 2024] Paper on evaluating models for cultural robustness accepted at the SafeGenAI Workshop @ NeurIPS 2024.
  • [September 2024] Started my PhD at Northeastern University.
  • [August 2023] Started working as a Software Development Engineer at ASCS @ Amazon.
  • [February 2023] Started as a researcher and data engineer at the CLAWS Lab, Georgia Tech.
  • [December 2022] Graduated with an MS in CS from Georgia Tech.

About Me

I am a second-year CS PhD student at the Khoury College of Computer Sciences at Northeastern University in Boston. I am advised by Prof. Mai ElSherief.

Before starting my PhD, I was a Software Engineer with the Selection Monitoring and Catalog Systems organization at Amazon in Seattle. Prior to that, I was an MS CS student at Georgia Tech, where I was advised by Prof. Srijan Kumar in the CLAWS Lab. I also moonlight as a research mentor with SimPPL.

Research Interests

I am interested in designing, evaluating, and aligning AI systems to maximize social benefit while minimizing risks. My work sits at the intersection of natural language processing (NLP), AI alignment, AI safety, and the study of online communities, with a focus on how AI can be more empathetic, fair, and socially aware.

🤖👤 AI Safety and Personalization

I study how LLMs can be made safer and better aligned with human values. My work examines how personas and context shape model empathy and behavior, across explicit identities such as culture, age, and gender, and implicit cues such as dialect. I'm currently exploring how interjections can steer models toward safer, more aligned outputs with Geodesic Research.

🤖🤝✨ Value-Based Alignment

I focus on aligning AI systems with human values, with an emphasis on emotional safety and empathy. My recent EMNLP paper shows how user context drives variation in model empathy. I'm developing methods to define what constitutes emotionally safe behavior, to close the gaps we observe in model empathy, and to ensure model responses are consistent, empathetic, and aligned with human expectations.

🤝🌐 Understanding and Analyzing Online Communities for Social Good

In the past, I have explored how AI can support healthier online communities by analyzing misinformation, reasoning about hate speech, and identifying dog whistles. My goal is to build tools that make digital spaces safer, more inclusive, and socially aware.

Publications

EMNLP Findings 2025

Malik, Ananya, Nazanin Sabri, Melissa Karnaze, and Mai ElSherief. Are LLMs Empathetic to All? Investigating the Influence of Multi-Demographic Personas on a Model’s Empathy. 📄 Paper Link

EMNLP Findings 2025 · NeurIPS 2024 SafeGenAI Workshop (Oral Presentation)

Malik, Ananya, Kartik Sharma, Lynette Hui Xian Ng, and Shaily Bhatt. Who Speaks Matters: Analysing the Influence of the Speaker’s Ethnicity on Hate Classification. 📄 Paper Link

Pre-print

Malik, Ananya. Evaluating Large Language Models through Gender and Racial Stereotypes. 📄 Paper Link

ITM Web of Conferences

Parab, Amogh, Ananya Malik, Arish Damania, and Arnav Parekhji. Successive Image Generation from a Single Sentence. 📄 Paper Link

Elsevier

A. Malik, Y. Javeri, M. Shah, and R. Mangrulkar. Impact Analysis of COVID-19 News Headlines on the Global Economy. In Cyber-Physical Systems for COVID-19, Elsevier. 📄 Paper Link

IJCA

Malik, Ananya. Survey Paper on Applications of Generative Adversarial Networks in the Field of Social Media. 📄 Paper Link

Academic Service and Groups

Groups

Teaching

Talks