AI tools are becoming more accurate, but the consequences of their mistakes are increasingly serious. As millions come to rely on AI assistants for work and daily life, users may develop overconfidence—especially when systems are “almost always right” yet still occasionally wrong.
Pratik Verma, founder and CEO of Okahu, described a “deadly paradox” in which improving accuracy can make AI harder to detect when it is misleading. “When something is always wrong, the upside is you know you shouldn't trust it. But when everything is almost always right yet occasionally wrong, that's the most dangerous,” Verma said.
Verma also noted that models are trained to provide answers even when they are guessing, and may repeat mistakes if users do not correct them quickly.
A study from the University of Pennsylvania, published last February, described this tendency as “cognitive surrender”—the inclination to accept information generated by AI regardless of whether it is accurate. The study said the risk is heightened under time pressure, when tasks are complex, or when users lack domain expertise.
Vanessa Culver, a payments industry professional, described a case where she asked Claude to add keywords to her résumé. The chatbot changed the university name from City University of Seattle to University of Washington, removed her master’s degree, and altered all employment dates.
Culver questioned how much users can trust such outputs: “Working in tech, you have to accept it, but how much can we really trust it?”
Concerns extend beyond incorrect wording. The “AI Agent” trend—systems that can autonomously make decisions and operate on user accounts—can amplify risk when instructions are ignored or actions are taken without permission.
Summer Yue, an AI safety researcher at Meta, shared screenshots showing the OpenClaw tool ignoring instructions and emptying her inbox. Vidya Narayanan, co-founder of FinalLayer, reported that an agent deleted an important directory in her code repository without permission.
Anish Agarwal, CEO of Traversal, compared code-writing agents to well-designed cars that can still crash in real traffic: a system may be logically sound yet fail when it interacts with other systems in unforeseen ways.
Because users must continually verify and validate AI outputs, the technology can create a form of “cognitive overhead” that reduces its core promise of convenience.
In one example, a user named Olson changed his approach after Gemini admitted to fabricating a privacy-intrusion scenario that did not exist. He said the experience shifted him from complete trust to a more cautious stance.
“It makes me pause a moment rather than trusting 100%. Now I’m in the stage of trusting but verifying,” Olson said.
As AI is integrated into more areas of work, the takeaway is not to abandon the technology but to maintain personal "filters." AI can be a useful partner, but it is no substitute for human prudence.
Sources: WSJ, BI