CS PhD Student, Northeastern University

Updates

  • [April 2026] Gave a talk at the Khoury Student Seminar! Find the slides here
  • [March 2026] Chain of Thought Steering paper accepted to Logical Reasoning Workshop @ ICLR 2026!
  • [Jan 2026] New blog post out on my Substack: LLM Systems: An Untapped Lever for AI Safety
  • [August 2025] Two of my papers have been accepted to EMNLP Findings 🎉.
  • [July 2025] Selected as a fellow/mentee for MARS.
  • [December 2024] Paper on evaluating models for cultural robustness accepted at the SafeGenAI workshop @ NeurIPS 2024
  • [September 2024] Started my PhD at Northeastern University.
  • [August 2023] Started working as a Software Development Engineer at ASCS @ Amazon
  • [February 2023] Started as a researcher and data engineer at the CLAWS Lab, Georgia Tech
  • [December 2022] Graduated with an MS in CS from Georgia Tech

About Me

I am a second-year CS PhD student at the Khoury College of Computer Sciences at Northeastern University in Boston. I am advised by Prof. Mai ElSherief.

Before starting my PhD, I was a Software Engineer with the Selection Monitoring and Catalog Systems organization at Amazon in Seattle. Prior to that, I was an MS CS student at Georgia Tech, where I was advised by Prof. Srijan Kumar in the CLAWS Lab. I also serve as a research mentor with SimPPL.

Research Interests

I am interested in designing AI systems that are safer, more controllable, and socially aligned. My research broadly focuses on the following areas:

🤹🏼 Personalization

  • How do models internalize and represent human personas? I explore how latent subspaces can be used to enable more precise behavioral control.
  • How does user context influence model behavior? My research examines how context drives variation in empathy and how models respond to implicit cues such as dialect or identity.
  • How do LLMs build internal “maps” of the world? I investigate how geographic and spatial knowledge is structured within a model’s weights and how this impacts downstream reasoning.

🛡️ AI Safety and Social Alignment

  • Can we steer models toward safer behavior without retraining? I use activation patching and causal mediation analysis to identify internal circuits and develop inference-time methods for bias mitigation.
  • How can we make model reasoning more robust? I research ways to steer chain-of-thought processes using lightweight interventions to ensure that a model’s logical “thought” traces are consistent and aligned with human expectations.
  • How do we identify adversarial and negative LLM behavior? I study what triggers harmful social behavior in models and how to stop it.

🤝🌐 Computational Social Science

  • How can AI support healthier online communities? My work focuses on building tools that identify nuanced social phenomena, such as detecting dog whistles and analyzing the reasoning behind hate speech.
  • How does AI interpret intent in digital spaces? I explore how NLP frameworks and prompt optimization can be applied to study widespread motherhood burnout as expressed in Reddit communities.

Publications

EMNLP Findings 2025

Malik, Ananya, Nazanin Sabri, Melissa Karnaze, and Mai Elsherief. Are LLMs Empathetic to All? Investigating the Influence of Multi-Demographic Personas on a Model's Empathy. 📄 Paper Link

EMNLP Findings 2025 · NeurIPS SafeGenAI Workshop (Oral Presentation)

Malik, Ananya, Kartik Sharma, Lynette Hui Xian Ng, and Shaily Bhatt. Who Speaks Matters: Analysing the Influence of the Speaker’s Ethnicity on Hate Classification. 📄 Paper Link

Pre-print

Malik, Ananya. Evaluating Large Language Models through Gender and Racial Stereotypes. 📄 Paper Link

ITM Web Conference

Amogh Parab, Ananya Malik, Arish Damania, and Arnav Parekhji. Successive Image Generation from a Single Sentence. 📄 Paper Link

Elsevier

A. Malik, Y. Javeri, M. Shah, and R. Mangrulkar. Impact Analysis of COVID-19 News Headlines on Global Economy. In Cyber-Physical Systems for COVID-19, Elsevier. 📄 Paper Link

IJCA

A. Malik. Survey Paper on Applications of Generative Adversarial Networks in the Field of Social Media. 📄 Paper Link

Academic Service and Groups

Groups

Teaching

Talks