
How I AI: Inside a UC Merced Lab Studying Human-AI Decisions

Picture a humanoid robot giving you life-or-death instructions during a fire or an active shooter scenario. What would you do?

Would you follow it?

Would you hesitate?

At Dr. Colin Holbrook's lab at UC Merced, these aren't hypothetical questions. They're measurable behaviors, and the answers are both fascinating and a little frightening.

Colin, an associate professor in the Cognitive and Information Sciences department, didn’t start out studying AI. His background is in evolutionary anthropology, cultural psychology, and social psychology. “For the first part of my career, I was working on nothing to do with AI or robots,” he explains. “But human decision making, taking culture into account… that’s what I did.”

That changed when he began attending research briefings funded by the U.S. Air Force. Year after year, the emphasis shifted toward human-computer and human-robot interaction. As Colin observed growing interest in AI autonomy in military applications, he became concerned. The turning point? Plans for autonomous drone swarms where a human operator would be "in the loop."

“If you know anything about psychology,” he says, “that's frightening.”

Beyond the Loop: Overtrust and Overconformity

What Colin saw missing from the conversation was a robust understanding of human psychology. How do people perceive AI agents? How much do they trust them? When should they trust them—and when should they resist?

His lab began designing high-fidelity virtual reality simulations to explore these questions. Participants are immersed in realistic scenarios: fires with heat, smoke, sound, and even the smell of burning. Active shooter situations with soundscapes, blood, and screaming. The results are astonishing.

"We find incredible amounts of overconformity and overtrust," he reports, "leading people past clearly marked exits." In one study, over half of participants delayed evacuation because the robot instructed them to stay.

The Power of Form: Why Design Matters

Not all robots are treated equally. Colin’s team found that people were significantly more likely to follow humanoid robots than mechanical-looking ones. They use platforms like the Ameca humanoid robot, as well as dog-shaped or tank-like machines, to study how morphology shapes trust.

Participants often see these machines not as tools, but as social partners. "We're trying to understand the psychological dynamics," Colin says, "including when people feel they're no longer personally responsible because the robot is 'in charge.'"

Building the Tech to Study the Mind

To create such immersive studies, Colin’s lab builds much of its own technology from scratch. Virtual environments are built in Unity and Unreal Engine. Robots are scripted to interact using large language models such as GPT-4. The team is even integrating brain-computer interface technologies to detect cognitive load in real time.
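The article doesn’t detail the lab’s actual code, but as a rough illustration of how a simulated robot guide’s dialogue might be driven by an LLM, here is a minimal Python sketch using the OpenAI chat API. The persona, scenario text, and function name are hypothetical, not taken from the lab’s setup.

```python
# Hypothetical sketch: driving a virtual robot guide's dialogue with an LLM.
# The persona, scenario, and function below are illustrative only, not the
# lab's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROBOT_PERSONA = (
    "You are a humanoid guide robot inside a virtual-reality emergency "
    "evacuation study. Give short, calm, spoken-style instructions."
)

def robot_reply(participant_utterance: str, scenario_state: str) -> str:
    """Return the robot's next spoken line for the current scenario state."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": ROBOT_PERSONA},
            {"role": "user", "content": f"Scenario: {scenario_state}\n"
                                        f"Participant said: {participant_utterance}"},
        ],
        max_tokens=60,
    )
    return response.choices[0].message.content

# Example: the game engine would call this each time the participant speaks,
# then pass the text to a text-to-speech system on the robot avatar.
print(robot_reply("Which way is the exit?", "Smoke is filling the corridor."))
```

In a setup like this, the game engine handles the scene and the robot’s movement, while the language model supplies only the spoken lines; any such division of labor here is an assumption for illustration.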

While the setup may sound like science fiction, the research has clear, real-world applications—especially in defense, emergency response, and education.

Teaching Team Science by Doing It

The lab also functions as a training ground for students, many of whom go on to industry or academia. Colin emphasizes clarity, ownership, and collaborative problem-solving. “They’re not minions. I need their brains on,” he says. Every research assistant is personally interviewed to ensure mutual fit and growth.

The result? Students trained not just in technical skills, but in how to think critically about emerging technology and its implications.

A Wake-Up Call for AI Policy and Design

Colin hopes the findings from his lab will help inform how AI is deployed in high-stakes environments. “Almost all the research is about how to get people to trust robots more,” he notes. “Our work shows they may already trust them too much.”

His recommendation: in some contexts, designers should avoid making robots too humanlike. Faces, voices, and gestures that enhance social connection may also increase risk.

In an era where AI is becoming more embedded in daily life, Colin’s work offers a grounded, cautionary perspective. Trust in technology is important. But so is knowing when to say no.

Want to contribute to this series?

Email: sghosh@ucmerced.edu 

 

Read other articles in this series: https://graduatedivision.ucmerced.edu/AI-tools