Nvidia (NVDA) is positioned at the center of the artificial intelligence (AI) infrastructure buildout, but investors also face mounting risks from rising competition and potential saturation in AI spending. The debate around the stock hinges on whether Nvidia can sustain its ecosystem advantages as hyperscalers increasingly pursue custom chips and alternative architectures.
Nvidia’s graphics processing units (GPUs) are described as the core hardware powering AI infrastructure, with the article estimating roughly a 90% market share.
The company’s moat is attributed to its ecosystem, beginning with CUDA, a software platform where early foundational AI code was written and optimized for Nvidia chips. The article also points to Nvidia’s proprietary NVLink interconnect system, which it says enables multiple chips to function as a more unified compute unit.
Beyond hardware and software, the article highlights Nvidia’s ability to anticipate market trends. It notes that Nvidia created CUDA about a decade before Advanced Micro Devices developed competing software, and that Nvidia seeded CUDA into institutions conducting early AI research. In 2020, Nvidia acquired Mellanox, which the article says became the basis for its networking segment.
More recently, the article describes Nvidia as preparing for the shift toward inference and agentic AI. It cites the acquisitions of Groq and SchedMD, which it says brought Nvidia language processing units (LPUs) designed for inference and the NemoClaw platform for deploying AI agents. It also states that Nvidia has developed its own central processing units (CPUs), allowing it to deliver complete server racks tailored for training, inference, and agentic AI and positioning the company as a broader AI infrastructure provider rather than only a chipmaker.
The article argues that AI demand still has a long runway because large companies and governments are racing to build capabilities, supporting continued growth for Nvidia.
Despite Nvidia’s dominance, the article says competition is increasing. It points to custom AI application-specific integrated circuits (ASICs), which are hardwired for specific tasks and are gaining traction—particularly in inference—because of their power-efficiency advantages.
As examples, the article notes that Anthropic announced it would expand capacity using Alphabet’s Tensor Processing Units (TPUs). It also states that Anthropic already runs a large data center on Amazon’s Trainium chips. In addition, the article says more hyperscalers are designing their own custom chips, often with partners such as Broadcom or Marvell Technology.
The article also cites AMD’s progress. It says AMD’s ROCm software platform has improved in recent years and that AMD has partnerships with OpenAI and Meta Platforms to deliver GPUs in exchange for warrants. It further argues that the shift toward newer code written on open-source platforms could help AMD gain share, especially in inference where requirements may be less demanding.
[Nvidia stock data table from the article omitted]
The article’s biggest concern is that AI infrastructure spending may be approaching peak levels. It states that the five largest hyperscalers are set to spend $700 billion on AI infrastructure this year, describing this as about 1.5% of GDP and roughly in line with where past tech investment cycles peaked.
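The share-of-GDP comparison above is a simple ratio. A minimal sketch of the calculation, where the GDP figure used below is a hypothetical placeholder rather than a value from the article:

```python
def spending_share_of_gdp(spending_b: float, gdp_b: float) -> float:
    """Return spending as a percentage of GDP (both inputs in billions of dollars)."""
    return spending_b / gdp_b * 100

# Hypothetical illustration: $700B of AI infrastructure spending
# measured against an assumed $28T economy.
share = spending_share_of_gdp(700, 28_000)
print(f"{share:.2f}% of GDP")  # prints "2.50% of GDP"
```

The exact percentage obviously depends on which GDP base (national or global, nominal or real) the comparison uses, which the article does not specify.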
The article adds that cloud providers and hyperscalers will need strong returns to justify maintaining that level of spending.
The article’s conclusion is that Nvidia will likely lose some market share, but will remain the most important player in AI infrastructure due to its ecosystem and continued evolution. It also argues that hyperscalers are seeing returns on their investments, pointing to Taiwan Semiconductor Manufacturing’s ramp-up in capital expenditures as evidence that demand is expected to persist.
Finally, the article states that Nvidia is trading at a forward price-to-earnings multiple of 21 and characterizes the stock as a buy based on the expected long runway of growth.
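A forward price-to-earnings multiple is simply the share price divided by the consensus earnings-per-share estimate for the next twelve months. A quick sketch, using hypothetical numbers chosen only to illustrate a 21x multiple (they are not Nvidia's actual price or EPS):

```python
def forward_pe(price: float, forward_eps: float) -> float:
    """Forward P/E: current share price over estimated next-12-months EPS."""
    return price / forward_eps

# Hypothetical example: a $189 share price against $9.00 of expected EPS.
print(forward_pe(189.0, 9.0))  # prints 21.0
```

A lower forward multiple relative to expected growth is the basis for the article's "long runway" argument.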
