
Qing XIAO
PhD Student, Human-Computer Interaction Institute (HCII), School of Computer Science (SCS), Carnegie Mellon University
Contact: qingx at andrew dot cmu dot edu
How to pronounce my name: Qīng, pronounced like “Ching” (rhymes with “sing”), Xiāo, pronounced like “Shyao” (rhymes with “meow” but starts with a ‘sh’ sound).
I’m Qing Xiao, a second-year PhD student at the Human-Computer Interaction Institute (HCII) within the School of Computer Science at Carnegie Mellon University (CMU), advised by Dr. Hong Shen and part of the CARE (Collective AI Research and Evaluation) Lab.
My research focuses on the evolving relationship between humans and AI. As AI systems become increasingly capable, they are entering domains of professional expertise, cognitive capacity, and social relationships that have long been central to human life. As these boundaries blur, fundamental questions arise about how human life and society are being reshaped. My research examines how people collaborate with these increasingly capable AI systems, how such systems can be designed to act as effective and responsible partners, and what design and policy principles should guide the evolving human-AI relationship.
For a list of publications see Google Scholar.
I study how people work with AI systems as they increasingly enter domains of professional expertise and authority that have long defined human work and organizational life. This line of research asks: how do workers negotiate responsibility and professional authority as AI takes on core professional capacities, and how can these systems be designed and governed to function as responsible collaborators in the workplace?
Selected Projects: Can GenAI Move from Individual Use to Collaborative Newswork? (CHI’26); Do Teachers Dream of GenAI Widening Educational (In)equality? (CHI’26); Cross-Functional Collaboration around AI within the News Industry (CHI’25)

I study how people relate to AI systems as they increasingly enter domains of emotion, intimacy, and companionship that have long been central to human social relationships. This line of research asks: how do users build and experience relationships with AI, how do value conflicts emerge in emotionally engaging interactions, and how can these systems be designed to support care, safety, and human well-being?
Selected Projects: Minion: Negotiating Harmful Value Conflicts with AI Companions (arXiv’26); Robots that Evolve with Us (CHI’26); User-Driven Value Alignment for Biased AI Companions (CHI’25)

I study how people interact with AI systems as they increasingly take on cognitive capacities that have long been central to human thought, including information seeking, reasoning, and decision-making. This line of research asks: what happens when people delegate core cognitive work to AI agents, where do breakdowns and risks emerge, and how can these systems be designed to support effective, accountable, and secure human-agent collaboration?
Selected Projects: LLMs Dark Patterns (CHI’26)