Opinion Inside Business

AI chip contenders face daunting ‘moats’

Barriers to entry in an industry dominated by TSMC and Nvidia are very high

June Yoon

TSMC wafer. The chipmaker has about 90% of the world’s advanced chip market and more than three decades of experience building ‘fabs’




In the drama that has just played out in Silicon Valley over the future of OpenAI, one side plot concerned an ambitious chip venture by its chief executive Sam Altman.

Before he was ousted and reinstated to the helm of the company, Altman had sought to raise as much as $100bn from investors in the Middle East and SoftBank founder Masayoshi Son to build a rival to compete with sector giants Nvidia and Taiwan Semiconductor Manufacturing Co. This would be a vast undertaking. And one where $100bn may not go very far.

Given that the US chip designer and Taiwanese chipmaker are critical to all things generative AI, Altman is unlikely to be the only one with hopes of taking them on. But the barriers to entry — moats in Silicon Valley parlance — are formidable.

Nvidia has about 95 per cent of the market for GPUs, or graphics processing units. These processors were originally designed for graphics but have become increasingly important in areas such as machine learning. TSMC has about 90 per cent of the world’s advanced chip market.

These businesses are lucrative. TSMC runs on gross margins of nearly 60 per cent; Nvidia’s stand at 74 per cent. TSMC makes $76bn in sales a year. Such impressive figures make it seem as though there is plenty of room for more contenders.

A global shortage of Nvidia’s AI chips makes the prospect of vertical integration yet more attractive. As the number of GPUs required to develop and train advanced AI models grows rapidly, the key to profitability for AI companies lies in having stable access to GPUs.

That explains why global technology giants have been rushing to develop streamlined chips optimised for their workflows, such as those for data centre servers that train and run large language models for AI.

It is one thing for companies to design customised chips. But Nvidia’s profitability comes not from making chips cost-efficient, but from providing a one-stop solution for a wide range of tasks and industries. For example, Nvidia’s HGX H100 systems, which sell for about $300,000 each, are used to accelerate workloads for everything from financial applications to analytics.

Coming up with a viable rival for the HGX H100 system, which is made up of 35,000 parts, would take much more than just designing a new chip. Nvidia has been developing GPUs for more than two decades. That head start, which includes hardware and related software libraries, is protected by thousands of patents.

Even setting aside the difficulties of designing a new AI chip, manufacturing is where the real challenge lies. Setting up a fabrication plant is the first hurdle. TSMC is expected to take more than three years to get the plant it is building in Arizona into production, despite having more than three decades of experience building “fabs”. Its overall investment in the plant is expected to come to about $40bn.

Operating these plants requires an army of highly skilled workers with advanced degrees in electrical engineering, physics or materials science. A shortage of such workers has already delayed the start date of TSMC’s Arizona plant.

Another issue is buying the chipmaking gear that goes into the plants. Dutch maker ASML has a monopoly on the extreme ultraviolet lithography machines that are critical for the production of advanced chips. The waiting list for the machines, which cost more than $300mn each, averages around two years.

But patents remain by far the biggest hurdle. TSMC is one of the world’s largest patent holders, with more than 52,000 patents related to chipmaking. Of those, around 3,000 cover its advanced packaging, a crucial technology for AI chips that boosts performance and gives TSMC the edge over rival Samsung in contract manufacturing. More than eight years of investment in the technology have pushed the barriers to entry yet higher.

Navigating all this means painfully long lead times for new entrants, which is especially perilous in a rapidly evolving industry. At the same time, Nvidia and TSMC’s fat margins mean more cash for research and development, accelerating the pace of next-generation technology releases. TSMC alone spends more than $30bn on capital expenditure each year.

With momentum in their favour, the gap between Nvidia and TSMC and their competitors has been widening over the past year. For now, not even their biggest rivals are sufficiently equipped to close that gap, let alone new joiners.

june.yoon@ft.com


Comments


Altman’s recent hire of a chip verification engineer from Google’s TPU team to lead chip development shows he’s not serious about AI hardware. He needs hardware architects to co-design with his algorithm team. He’s vastly underestimating the challenge.


Nvidia is certainly in a good spot, but the moats are not as deep as one imagines. The reason is the demand side. Most AI chip demand should be expected to come from clouds such as Azure and from data centres that sell SaaS. This sector is dominated by well-capitalised players with considerable technical prowess and strong IP portfolios. These firms consider the high cost of AI chips an unpalatable burden and have the wherewithal to design their own chips for contract manufacture. At present, all the major players have AI chips in the works. Microsoft Azure is a leading cloud product, and the following outlines its efforts to bring AI chips in house.

At its Ignite developer conference in Seattle, Microsoft introduced a new chip, called Maia, to speed up AI computing tasks and provide a foundation for its $30-a-month "Copilot" service for business software users, as well as for developers who want to make custom AI services. The Maia chip was designed to run large language models, a type of AI software that underpins Microsoft's Azure OpenAI service and is a product of Microsoft's collaboration with ChatGPT creator OpenAI. https://www.reuters.com/technology/microsoft-introduces-its-own-chips-ai-with-eye-cost-2023-11-15/