New research from the Hebrew University of Jerusalem shows that large language models (LLMs) form structured 'trust' assessments much like humans do, yet apply them more mechanically and, at times, with stronger and more consistent demographic bias. The result is a coherent but rigid model of interpersonal trust that only partially aligns with human judgment. As LLMs and LLM-based agents increasingly interact...
