Despite widespread discussion that China is catching up to the United States in artificial intelligence, a former ByteDance engineer says the gap may be widening. Zhang Chi, a research scientist and assistant professor at Peking University, argued on the “Into Asia” podcast that Chinese models are not closing in on leading US systems and may be falling further behind.
Zhang said the gap persists despite the rapid progress often highlighted by Chinese AI startups. While models from major firms such as ByteDance, TikTok’s parent company, and Alibaba may perform well on benchmarks, he argued that strong test scores do not necessarily translate into effective real-world use.
“On paper, every big tech company in China has a good model,” Zhang said. “But I don’t think they’re good enough.”
He also criticized a tendency he described as “benchmaxxing,” where teams optimize primarily for test results rather than practical performance.
A central factor in Zhang’s view is speed of model development. He said top US companies can complete a full round of large language model training—including both pre-training and post-training—within three months. By comparison, he estimated that ByteDance could manage only one iteration in about half a year.
“Google can train or perform a full round of LLM training, both pre-training and post-training, in three months,” Zhang said. “But ByteDance — probably we can only do one iteration in half a year.”
Zhang also pointed to structural disadvantages, including access to advanced chips, weaker infrastructure, and lower-quality training data. He said the infrastructure gap between major US firms and ByteDance is significant and that China is not getting high-quality data at the same scale.
“There’s a huge difference between the infrastructure at Google and ByteDance,” he said. “I don’t think we’re getting high-quality data.”
Zhang said some companies may rely on distilling outputs from leading US models rather than building their own data pipelines. He suggested this approach could limit long-term progress.
In addition, Zhang argued that US firms benefit from stronger user feedback loops. He cited products such as ChatGPT, Claude, and Gemini, saying they improve through constant interaction with users, which helps refine models over time.
He said Chinese models risk becoming trapped in a negative cycle: if they start out less capable, fewer users rely on them for important tasks, which reduces the feedback needed to improve.
“Chinese models started not as good, so no one really uses them for really important things,” Zhang said. “And the models continue to be not that good.”