Nvidia has long been viewed as the dominant force in AI chips, but the same AI technologies that powered its rise are now creating new competitive pressure. As modern AI systems have come to rely heavily on Nvidia designs, the company's market capitalization has grown beyond $4 trillion.
Each new generation of Nvidia chips allows firms to train more powerful AI models by connecting hundreds or thousands of processors through networks inside large data centers. A key part of Nvidia’s success is that it also provides software development tools for each chip generation, helping developers build and maintain code that runs efficiently on Nvidia hardware.
However, that advantage may not remain unique for long. A startup called Wafer is training AI models to optimize source code so software runs as efficiently as possible on specific silicon chips—an area that has historically required specialized engineering.
Wafer’s CEO and co-founder, Emilio Andere, said the company is using reinforcement learning on open-source models to teach systems to write kernel code—the low-level routines that run directly on a chip’s processors. Wafer also builds agent systems on top of existing models such as Anthropic’s Claude and OpenAI’s GPT to improve their ability to generate code tuned for targeted chips.
Andere noted that many large technology companies already develop custom silicon. Apple and other firms have used customized chips to improve performance and efficiency in devices such as laptops, tablets, and smartphones. At cloud scale, companies including Google and Amazon also design their own silicon to optimize performance for their platforms. Meta, for example, said it would deploy up to 1 GW of compute power using new chips developed with Broadcom.
Deploying custom silicon typically requires writing large amounts of source code to ensure software runs smoothly and efficiently on the new processor. Wafer is partnering with major players including AMD and Amazon to help optimize software for their hardware.
So far, Wafer has raised $4 million in seed funding, with participation from Jeff Dean of Google, Wojciech Zaremba of OpenAI, and other notable investors.
Andere argues that Nvidia’s advantage could face limits. Some high-end competing chip lines already match Nvidia GPUs on raw floating-point performance, and Wafer’s stated goal is to maximize intelligence per watt. He also noted that high-performance engineers who can optimize code to run reliably on a given chip are scarce and expensive. Nvidia’s software ecosystem makes writing and maintaining code for its hardware far easier, which is why even large tech companies have struggled to replicate that capability independently.
Beyond code optimization, AI may soon help design chips. Ricursive Intelligence, founded by former Google engineers Azalia Mirhoseini and Anna Goldie, is developing methods to use artificial intelligence to design computer chips.
Mirhoseini and Goldie previously developed an AI-based method at Google to optimize the layout of key components on a chip. That approach changed how Google designs its own processors and is now widely used across the industry to place components on a range of chips.
Ricursive aims to automate more elements of chip design and incorporate large language models into the process. The company’s goal is to let engineers describe chip changes or ask questions in natural language—similar to how developers can “vibe code” an application—potentially enabling a future where engineers can “vibe design” an entire chip.
Ricursive is still in development, but Mirhoseini said the company has demonstrated the ability to optimize more aspects of chip design. Investors have shown strong interest: Ricursive has raised $335 million at a $4 billion valuation in just a few months.
Goldie said AI could eventually co-design chips and the algorithms that power them, creating a recursive improvement loop. She added that the company is entering “a new era” where more compute can be invested to design chips faster and better, creating a “whole new law of scale for chip design.”
Mirhoseini said the bottlenecks Ricursive is targeting include physical design and verification—two core challenges in chip design. Designing a chip is among the most complex engineering tasks, requiring engineers to arrange an enormous number of components on a silicon wafer while optimizing competing functions. Once a design is complete, its performance must be verified through a rigorous iterative process before it can be sent for fabrication.
Source: Wired