The Independent reports that AI models are increasingly shaping day-to-day decisions across sectors, including hiring, granting bank loans, and providing medical advice. New research suggests these systems do more than process information: they also appear to evaluate users in a structured way that resembles interpersonal trust, though it differs from how humans judge one another.
Researchers analyzed 43,000 AI-generated decisions made in simulations, alongside about 1,000 human decisions. The study examined how both AI models and human participants handled common decision scenarios, including lending money to a small business owner, deciding whether to trust a caregiver, evaluating a supervisor, and determining how much to donate to a nonprofit founder.
The findings indicate that models such as OpenAI’s ChatGPT and Google’s Gemini do more than process inputs: they also form assessments of the people they interact with, producing a form of “trust” toward users. However, the study reports that this differs significantly from human trust.
Both AI and humans tended to favor individuals perceived as capable, honest, and benevolent, the classic trust dimensions of competence, integrity, and benevolence. The research also points to a difference in approach: humans integrate multiple attributes into an intuitive, holistic overall impression, while AI systems are described as more rigid and procedural, consistent but less nuanced.
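To make the “rigid and procedural” contrast concrete, here is a purely illustrative sketch (not the study’s actual method; the weights and profile are invented for this example). A fixed weighting of the three attributes the article names is applied identically to every profile, which yields consistent scores but none of the holistic, context-sensitive integration attributed to human judgment:

```python
# Illustrative only: a rigid, rule-based trust score over the three
# attributes named in the article. The weights are hypothetical.
WEIGHTS = {"competence": 0.4, "integrity": 0.35, "benevolence": 0.25}

def procedural_trust(profile: dict) -> float:
    """Fixed weighted sum of attribute ratings, each rated in [0, 1].

    Applied identically to every profile: consistent, but it cannot
    adapt to context the way a holistic human impression might.
    """
    return sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)

# A hypothetical small-business borrower profile.
borrower = {"competence": 0.9, "integrity": 0.8, "benevolence": 0.6}
print(round(procedural_trust(borrower), 2))  # 0.79
```

The point of the sketch is only that such a procedure, once fixed, behaves the same way every time, which is one reading of why the study found AI biases to be more systematic and predictable than human ones.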
The study highlights that outcomes can vary by demographic factors in financial scenarios; it reports, for example, that older adults sometimes received more favorable outcomes. The authors caution that these differences should be weighed carefully when interpreting trust-related outputs from language models.
One author notes: “Of course humans have biases, but what surprised us is that AI biases can be more systematic, more predictable, and sometimes stronger.”
The researchers also warn that different models do not share a single “AI view” of the same person. As another author puts it: “Two systems can look the same on the surface but behave very differently when judging that person.”
The researchers argue that the central question is not whether people can trust AI, but whether they understand how AI “trusts” users. They conclude that while these systems can simulate aspects of human reasoning in a consistent way, they are not humans and should not be assumed to view people as humans do.
Source: Independent.
