
Top technology leaders have repeatedly claimed that humanity is approaching “artificial superintelligence.” Yet cognitive scientists and neuroscientists are increasingly issuing a contrary warning: the AI industry may be building an empire on a faulty assumption about the nature of human intelligence.
Mark Zuckerberg believes that “superintelligence” is within reach. Dario Amodei predicts AI could surpass human abilities in most fields as early as 2026. Sam Altman argues that OpenAI has learned how to build AGI (artificial general intelligence) and that these systems will drive scientific discoveries far beyond human capabilities.
However, many modern cognitive studies point to a core issue: language is not the same as thought. If that is correct, the foundational assumptions behind today’s generative AI models may be overstated.
A common thread among OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Meta’s AI systems is that they are all built on large language models (LLMs). These systems are trained on enormous amounts of internet text to learn statistical correlations between words (or tokens), then predict which token is most likely to appear next in a given context.
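To make the core idea concrete: at its heart, next-word prediction means choosing the continuation that was most frequent in the training data for a given context. The toy sketch below (a simple bigram counter, not anything resembling a production LLM, and far cruder than the transformer architectures these companies actually use) illustrates only that statistical principle:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word most often
# follows each word in a tiny "training corpus," then predict by frequency.
# Real LLMs learn far richer contextual statistics, but the objective --
# predict the likely next token -- is the same in spirit.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The point of the example is that nothing in this procedure involves meaning or reasoning, only correlation, which is exactly the property the researchers quoted below take issue with.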
Despite their growing complexity, the core of today’s generative AI remains language modeling. The concern raised by neuroscience is that human thought does not depend entirely on language. In this view, humans use language to convey thoughts, but language is not thinking itself.
This argument is central to a Nature commentary by Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley), and Edward Gibson (MIT), titled “Language is primarily a tool for communication rather than for thinking.”
The commentary synthesizes decades of research on the relationship between language and cognition and challenges the idea that language creates reasoning ability. One line of evidence comes from fMRI studies. When people perform different cognitive tasks—ranging from solving math problems and logical reasoning to understanding others’ mental states—the activated brain regions do not fully overlap with language-processing areas.
That pattern suggests thinking ability and language are related but not identical. Studies of patients who lose language due to brain injury also support this distinction. Even with severe language impairment, many individuals retain the ability to reason, solve mathematical problems, understand causal relationships, or read social cues.
Fedorenko and colleagues argue that “the evidence is irrefutable,” citing many cases where severe language loss occurs while general thinking remains intact in important respects.
Another example is infants. Before they can speak, babies explore their environment, imitate behavior, learn rules, and form intuitive understanding of the world. Cognitive scientist Alison Gopnik argues that children learn like “little scientists,” continually testing, analyzing, and building “intuitive theories” about physics, biology, and psychology from an early age—implying that thinking exists before language.
In this framing, if humans lose the ability to speak, thinking can still remain. But if language is removed from a large language model, “there is almost nothing behind it.” As a result, some researchers argue that simply expanding data and compute may not move the AI industry toward AGI as tech leaders promise.
For years, the tech industry has operated on a near-universal default belief: with enough data and compute, AI will eventually reach or surpass human intelligence. But a growing number of AI experts are questioning that assumption.
Yann LeCun, a Turing Award winner and a prominent skeptic of LLMs, argues that current language models simulate language rather than truly understanding the physical world. LeCun has left his position at Meta to develop “world models,” AI systems designed for long-term memory, planning, and understanding physical relationships in the real world.
Other major figures—Yoshua Bengio, Eric Schmidt, and Gary Marcus—also argue that intelligence is not a single ability but an integration of multiple cognitive capabilities. Under an AGI definition proposed by this group, AI must achieve “flexibility and cognitive competence equivalent to or surpassing a well-educated adult.”
Even if such a system is built, the article argues that a larger question remains: can AI make the kind of revolutionary cognitive leaps humans have made?
Philosopher of science Thomas Kuhn argued that scientific revolutions do not arise simply from data accumulation or repeated experiments. They occur when people conceive entirely new ways of looking at the world—new scientific models that replace old thinking. Albert Einstein is cited as a classic example. Philosopher Richard Rorty extended this view, arguing that humans make progress by constructing new metaphors to describe the world, particularly when they are dissatisfied with the current conceptual system.
In the article’s view, a model trained on all available human knowledge can predict, remix, and reuse knowledge efficiently and imitate how a smart person would respond in many situations. But it has no internal reason to “not be satisfied” with the data it is given. It lacks a natural motivation to break free from existing cognitive frameworks and create entirely new modes of thinking as humans have done in the history of science.
The result could be a powerful repository of knowledge that remains trapped within the vocabulary and conceptual systems humans created. From this perspective, AI might become an unprecedented cognitive-support tool in human history, but it may not become a form of intelligence that rises above humans.
Meanwhile, the debate remains unresolved as the tech industry continues to invest heavily in Nvidia chips, data centers, and ever-larger models. The central question persists: does increasingly better language modeling truly lead to artificial general intelligence? So far, the article says, cognitive science does not provide the answer tech CEOs want to hear.
Source: The Verge
By Lại Dịu
Original article link: https://markettimes.vn/mark-zuckerberg-sam-altman-du-doan-ai-se-thong-minh-hon-nguoi-trong-nam-2026-nhung-nghien-cuu-lai-he-lo-goc-khuat-lon-nua-cuon-soot-ai-nghin-ty-usd-117695.html