Although everyone has a mobile phone, PC processors remain the computing center of modern life: nearly 2 billion people turn on a personal computer for work or study every day. These processors, plus the hundreds of millions of processors installed in data centers and supercomputers, calculate everything in the invisible world of data, from recommending videos and recording stock market transactions to analyzing battlefield intelligence and selecting the next bombing target.
Over the past 20 years, the competitive landscape of this important infrastructure has remained static.
After the last chip war, a few leading companies locked in their positions. For most of the time since, Intel has controlled more than two-thirds of the market, deciding whether CPU computing power rises 8% or 10% next year; Nvidia is the first choice for drawing images in virtual worlds; and Qualcomm determines how signals propagate through the air. Three years ago, Apple’s M1 chip briefly broke the calm with performance beyond expectations. But outsiders mostly attributed its success to capital strength: surely only the companies with the most money can build good chips.
This situation has been almost completely rewritten in the past week, repeating a simple truth to the world: in a purely commercial world, technology eventually advances and no monopoly lasts forever. The earlier calm in the chip market was simply waiting for technology to accumulate.
Over the past seven days, a chip war over the personal computer has taken shape in the US market. At least six companies worth hundreds of billions of dollars are involved, attacking firms, and even partners, with which they previously had no competitive relationship.
On October 25, Qualcomm released the laptop chip Snapdragon X Elite.
On the same day, Apple began promoting its next event, at which it launched the new M3 series processors on October 31. Built on an exclusive 3nm process, they reset the performance benchmark for notebook computers.
At the same time, multiple American media outlets reported on Nvidia’s and AMD’s new plans: to develop high-performance, low-power notebook chips and launch them within two years to compete with Apple and Qualcomm.
The new competition is spreading to other segments of the market. Nvidia will use its latest automotive chip, DRIVE Thor, to cover every need from in-car entertainment to autonomous driving. Tesla, like Apple, is replacing the chips in its products with its own designs, one by one.
A chip war that will determine the future of computing is breaking out, and the battlefield has returned to Silicon Valley.
A common direction: mobile phone chips push into computers, cars, and servers
Whether it is Apple’s M3 series or Qualcomm’s Snapdragon X Elite, their structure looks less like a traditional computer chip and more like a mobile phone chip, albeit larger.
In a traditional computer, parts produced by different companies, such as the CPU, graphics card, and memory sticks, are shipped to factories and soldered onto circuit boards. Both Apple’s and Qualcomm’s processors are SoCs (System on a Chip): the CPU and GPU cores, memory, and controllers are all integrated into a single chip package, and TSMC’s factories can complete most of the production work.
Similarly, Nvidia’s next-generation automotive chip Thor has also turned to SoC design. Server chips with higher performance requirements are the next breakthrough target.
The turning point came at the end of 2020, when Apple released the SoC-designed M1 chip. At first, Apple used the new processor only in entry-level computers, yet its performance caught up with the previous year’s top-end Intel-powered machines, with battery life several hours longer.
Apple had used Intel CPUs in Mac computers for the previous 14 years. Starting in 2015, Intel’s processor performance improvements dropped to single-digit percentages, which was once seen as the inevitable consequence of the demise of Moore’s Law.
“In an SoC, the distance between the CPU, GPU, memory, and other computing units is no more than 1 centimeter, and they can communicate directly through the silicon. Compared with the traditional method of routing circuits through an external PCB, information transmission efficiency is greatly improved and power consumption is reduced,” said Dr. Wang Bo, author of “A Brief History of Chips.”
If you think of a computer completing a task as cooking a meal, a traditional computer is like shopping at different supermarkets and stalls for ingredients before cooking, while an SoC cooks from a stocked refrigerator. The M1 chip also has richer “ingredients”: Apple customized dedicated computing units for specific purposes such as artificial intelligence, audio and video encoding, and encrypted storage, solving common problems faster. All of these functions need to cooperate with the CPU, which makes shortening the information transmission distance essential.
In 2021, Apple successively released the M1 Pro, M1 Max, and M1 Ultra with ever better performance. Wired magazine said these products “keep Moore’s Law alive.”
Intel realized the industry’s shift to SoCs early on, launching Atom, an SoC platform for smartphones and connected computers, in 2012. But its reliance on the x86 architecture and on Intel’s own foundries left it struggling against the Arm-plus-TSMC approach backed by Apple, Qualcomm, and others, and Intel finally gave up in 2016.
“x86 is a complex instruction set. CPUs based on it are powerful but also power-hungry, and the GPU is another high-power processor. Putting them together in an SoC makes heat dissipation a huge problem,” Wang Bo said.
Moreover, the many brands in the Windows laptop market and their ever-changing configuration requirements constrain Intel: it has to supply CPUs that meet multiple needs simultaneously, at prices as low as possible, making it hard to iterate as quickly as Apple does.
Intel CEO Pat Gelsinger is equally aware of Apple’s threat, telling employees in early 2021: “We have to deliver better products to the PC ecosystem than a lifestyle company.” But Intel faces more than just Apple. After launching the M1-equipped MacBook in 2020, Apple’s share of laptop sales doubled to 11%. The success of the M1 gave Qualcomm and other newcomers eager to enter the market a clear idea of what to do next and whom to recruit.
Lowering the technical threshold: democratizing chip design and TSMC solving manufacturing problems
Looking back, the evolution of chips across devices toward SoCs seems natural, but the process is extremely complicated. It took Apple 12 years from forming a chip design team to launching the M1.
During this period, Apple used high salaries and acquisitions to recruit talent from chip companies such as Intel, Qualcomm, Broadcom, and Imagination, then gradually replaced the computing units in its chips with self-developed ones. First it abandoned Arm’s off-the-shelf CPU core designs, then replaced Imagination’s design with its own GPU, and developed dedicated computing units for image processing, audio and video codecs, accelerating artificial intelligence algorithms, and encrypted storage. This pushed iPhone chips to a performance leap every two years, which is what made it possible for the M1 to surpass Intel’s chips.
The birth of a great product is often the end of an ultra-long marathon. After Apple’s first-generation Mac and first-generation iPhone were released, a large number of engineers quit within a short period. Both Apple founder Steve Jobs and Microsoft founder Bill Gates (Microsoft was deeply involved in software development for the first Mac) mentioned this wave of departures more than once in interviews, to illustrate the extraordinary effort their teams put in, ultimately working themselves to exhaustion.
Apple chip engineers discovered that the end of one marathon is the beginning of the next one.
According to The Information, the number of chip projects within Apple has increased from single digits to dozens over the past decade, but the number of employees has not grown at the same rate.
The October 31 press conference illustrates the growing burden on Apple’s engineers. The M1 series had four specifications, but Apple’s engineers produced only two complete designs, the M1 and the M1 Max, released nearly a year apart; the M1 Pro is a cut-down M1 Max, and the M1 Ultra is two M1 Max dies joined together. On October 31, Apple released three completely different designs at once: M3, M3 Pro, and M3 Max. This lets the M3 Pro be smaller and cheaper while the M3 Max pursues ultimate performance, so Apple’s chips serve different price ranges more precisely, but it increases the chip team’s workload.
An Apple chip engineer said in an interview that, to meet the various product lines’ demand for fast, stable, and significant chip iteration, Apple’s chip engineers work nearly 80 hours a week (for comparison, China’s “996” schedule is only 72 hours, usually including a lunch break) in order to finish on time.
According to tallies by multiple media outlets, hundreds of Apple chip engineers have resigned in the past two years, carrying their experience in building high-performance processors with them.
In 2019, Gerard Williams III, senior director of platform architecture in Apple’s chip department, left to co-found the chip company NUVIA. He had joined Apple in 2010 after 12 years at Arm. During his 9 years at Apple, he led the team that developed the CPUs for all of Apple’s SoCs and was the chief architect of the M1 Pro and M1 Max.
The other two chip experts who founded NUVIA with him, John Bruno and Manu Gulati, both had extensive chip industry experience.
According to NUVIA’s official website, the goal of this group of Apple chip veterans was to develop more powerful CPUs to handle exponentially growing data and demand. Their technical route is the same as Apple’s: design an Arm-compatible CPU core from scratch.
After the success of the M1 series, NUVIA received acquisition offers from a number of large technology companies. In 2021, Qualcomm beat out Microsoft, Intel, Meta, and others, paying US$1.4 billion for it. The three NUVIA founders stood to earn hundreds of millions of dollars from the deal, more than Apple CEO Tim Cook makes in a year.
The NUVIA team joined Qualcomm with hundreds of employees, and its founders all became senior executives there. In less than two years, Qualcomm’s new processor has surpassed the performance of Apple’s M2 series.
What once limited companies from making high-performance chips was manufacturing. For most of the chip’s 60-year history, controlling the fab was essentially equivalent to controlling the chip itself. Intel once dominated the market through its exclusive advanced fabs: even if competitors could design good chips, they could not manufacture them with advanced processes.
In 2017, cracks began to appear in the vertically integrated system Intel had established. Riding huge iPhone orders and Apple’s requirement for significant chip iterations every two years, TSMC’s manufacturing process quickly surpassed Intel’s. That year, when TSMC was producing 10nm chips, Intel was still on 14nm. In the following years TSMC kept a steady pace, pushing 7-nanometer and then 5-nanometer chips into production and maintaining its lead.
On the same process node, Intel’s x86 chips outperform the Arm architecture commonly used in SoCs, but the process gap between the two camps more than made up for the Arm solution’s performance shortfall. The M1 chip Apple released in 2020 used a 5-nanometer process, while Intel’s laptop chips that year were still stuck at 10 nanometers (roughly equivalent in transistor density to TSMC’s 7-nanometer process).
TSMC’s nature as an open foundry means any company that wants to make chips can access top-tier manufacturing without heavy capital investment. Qualcomm’s X Elite follows Apple in using the 4nm process; although it trails the latest M3’s 3nm, it has surpassed the other chips in the M series.
Not only do you have to have money to develop chips, you also have to be able to continue to make money from the chips.
Chip R&D requires continuous, huge investment, which is why it is always the giants who start these fights. They need not only senior chip executives but engineering teams of hundreds or thousands, so salaries and benefits for R&D staff make up a large share of the investment.
Starting in 2019, Qualcomm, which was originally willing to invest more than $5 billion in R&D each year, has seen its R&D expenses increase by about $1 billion per year. In the 12 months ending in the third quarter of this year, the cumulative investment in R&D was nearly US$9 billion.
The reasons supporting such intensive investment by these companies vary, but in essence they all have very stable “tax” revenue, so they have the opportunity to bring more revenue through the performance improvements brought by chip technology, forming a virtuous cycle.
Apple sells more than 200 million iPhones every year. Each self-developed chip not only improves product competitiveness, but also takes away profits originally belonging to chip suppliers. At the same time, its chips are used in computers, watches, headphones, and Vision Pro.
Qualcomm relies on its large number of patents and leadership in mobile communications to collect taxes from almost every smartphone – including Apple. According to calculations by analysts, Apple has to pay Qualcomm $13 in wireless patent licensing fees and $25 in baseband chip fees for every iPhone sold. Every year, the “tax” Qualcomm collects from Apple can almost cover the entire year’s R&D expenses. Qualcomm will use these expenses to develop more advanced Snapdragon chips, making it indispensable for more equipment manufacturers.
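The claim that Apple alone nearly covers Qualcomm’s R&D budget can be checked with the figures quoted in this article (the ~200 million annual iPhone sales and the ~$9 billion R&D figure appear elsewhere in the piece; the per-unit fees are the analysts’ estimates above):

```python
# Back-of-the-envelope check of the "Qualcomm tax", using figures from the article.
patent_fee = 13                  # USD per iPhone, wireless patent licensing (analyst estimate)
baseband_fee = 25                # USD per iPhone, baseband chip (analyst estimate)
iphones_per_year = 200_000_000   # approximate annual iPhone sales cited in the article

revenue_from_apple = (patent_fee + baseband_fee) * iphones_per_year
print(f"Estimated annual revenue from Apple: ${revenue_from_apple / 1e9:.1f}B")
# ~$7.6B from Apple alone, against Qualcomm's ~$9B annual R&D spending:
# the "tax" indeed covers most of the R&D budget.
```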
Similarly, the explosion in demand for generative AI and large models means cloud computing vendors and AI startups will buy Nvidia GPUs in large quantities over the next few years. With that reliable cash flow, Nvidia can fund self-developed CPUs and push further into the automotive and computer markets.
Without the support of such a closely related core business, even the wealthiest company has to do its sums carefully. Google wanted to develop its own SoC for Pixel phones in 2016; it later hired SoC engineer Steve Molloy from Qualcomm as chip director and recruited large numbers of chip engineers in India.
But seven years after the Pixel line launched, cumulative global shipments stand at 37.9 million units, less than what the iPhone sells in a single quarter. Google’s founders long ago handed power to the CFO (Chief Financial Officer), who will not grant unlimited resources without the prospect of returns. Google’s mass production plan for its self-developed Pixel chips has been postponed to 2025.
Meta is also in trouble. It formed a chip team called the Facebook Agile Silicon Team in 2018, hoping to work from easier chips to harder ones and eventually put self-developed chips in its Quest virtual reality devices. But Quest kept losing money, so Meta outsourced custom chip design to Samsung and MediaTek, and finally gave up on custom chips altogether, buying Qualcomm’s XR chips directly.
Meta’s Quest 2 is the best-selling XR device so far, yet it sells only about 10 million units a year. Initial sales of Apple’s upcoming Vision Pro will be no better, but the chip development costs it needs have already been amortized across the 200 million iPhones and 26 million Macs sold annually.
AI, cars, and XR: new needs, new “tax” opportunities
About 60 years ago, a string of small cities in the southern part of the San Francisco Bay Area in California began to be called “Silicon Valley.” A group of companies here have promoted the application of transistors and integrated circuits, giving birth to the chip industry. Their first customers were governments and the military.
After the 1980s, with the popularization of the computer and the birth of the Internet, consumers and enterprises replaced government agencies as Silicon Valley’s biggest customers. Technology companies such as Apple, Nvidia, Google, and Meta were born here, each entrenched in its own domain, taking most of the profits in its industry, and drifting further and further from “silicon.” For a time, America’s most important technology companies specialized in software or the Internet.
If chip demand were still limited to today’s videos, spreadsheets, and games, Apple, Qualcomm, Nvidia, and AMD might not go all out. But AI, cars, and XR have created new computing needs, while the stagnation of the consumer electronics market has sharpened the urgency: each company needs to squeeze out more profit.
AI already has practical applications. Microsoft wants to integrate an AI assistant named “Copilot” into almost all its productivity tools, including Office 365, Bing search, and Outlook email; Apple is using a Transformer model to improve its keyboard input (not yet available for Chinese); and Adobe’s AI tool Firefly will be integrated into design software such as Photoshop, Illustrator, and Premiere.
However, the computing resources and cost required to train and run inference on large models are enormous. Whether they buy GPUs themselves or rent servers from cloud providers, companies offering AI services face severe computing power shortages and expensive operating costs. The only way to make large models ubiquitous is to put the processor of every computer and every phone to work.
This is why the press conferences from Qualcomm to Apple emphasized that the new chips can better support running large models locally on mobile devices. Apple claims the M3 Max can run Transformer models with billions of parameters; Qualcomm says the first PCs equipped with the Snapdragon X Elite will support local inference of 13-billion-parameter models.
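A rough memory estimate shows why a 13-billion-parameter model strains a laptop, and why on-device inference depends on aggressive quantization. This sketch ignores activation memory and attention caches; the bytes-per-parameter values are standard numeric precisions, not figures from Qualcomm or Apple:

```python
# Minimal estimate of the RAM needed just to hold the weights of a
# 13B-parameter model at common precisions (activations and KV cache ignored).
PARAMS = 13_000_000_000

bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

for precision, nbytes in bytes_per_param.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: ~{gb:.1f} GB for the weights alone")
# fp16 already needs ~26 GB, more RAM than most laptops ship with,
# which is why local inference typically uses int8 or int4 quantization.
```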
For the foreseeable future, the personal computer will remain the most important productivity tool. Industry research firm Counterpoint predicts that AI will inject new vitality into the long-depressed PC market, with global AI PC penetration exceeding half by 2026. In this market, Apple wants its chips to retain the customers most willing to spend on computers; Qualcomm wants PC makers to sell more computers that pay it a tax; Nvidia wants to move from GPUs into CPUs and take a larger share of PC makers’ profits. The three companies collide here.
Another potential source of demand is XR. It is hard to say how big this market will be, but the Vision Pro Apple unveiled this year has shown other manufacturers the way: use the screen’s video pass-through to achieve augmented reality (AR) effects. For its visual experience to reach the “retina” standard we are used to, per-eye screen resolution needs to reach 6K.
The Vision Pro currently manages only 4K, and already requires an M2 chip worn on the head, plus an R1 chip to process sensor data in real time, a built-in fan, and an external battery. Rendering complex images in real time at 6K demands performance, within a power budget, that today’s chips cannot deliver.
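The scale of the jump from 4K to 6K per eye can be sketched with simple pixel arithmetic. The exact panel resolutions below are assumptions for illustration (generic 16:9 “4K” and “6K” panels; real headset panels differ), as is the 90 Hz refresh rate, but the scaling argument holds regardless:

```python
# Rough pixel-throughput comparison for stereo VR rendering.
# Resolutions and refresh rate are illustrative assumptions, not Vision Pro specs.
fps = 90                 # typical VR refresh rate (assumption)
eyes = 2                 # stereo rendering: one image per eye

res_4k = 3840 * 2160     # assumed pixels per eye today
res_6k = 6144 * 3456     # assumed pixels per eye at "retina" 6K

ratio = res_6k / res_4k
throughput_6k = res_6k * eyes * fps
print(f"6K/4K pixel ratio per frame: {ratio:.2f}x")
print(f"6K stereo throughput: {throughput_6k / 1e9:.2f} Gpixels/s")
# Every frame carries 2.56x the pixels, so GPU compute, memory bandwidth,
# and the power budget must all scale with it.
```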
The demand for chip computing power in automobiles is also growing. With the acceleration of electrification and intelligence, as well as the popularity of smart cockpits and autonomous driving, these “data centers on wheels” have attracted a number of chip manufacturers to enter. Automotive chips have also shifted from the original general-purpose, dispersed single-function chips to integrated multi-function SoCs.
Qualcomm has already used Snapdragon 8155 to bring its 7-nanometer advanced process into automotive chips; Nvidia’s next-generation SoC chip Thor, released last year, has a single-chip computing power of up to 2,000 TOPS, which is nearly 8 times that of its current product Orin. Qualcomm wants to participate in autonomous driving, and Nvidia wants to make the main chip for cars. Tesla does not want to rely on any of them.
The new environment is pushing these technology companies into chip battles, and those battles will likely determine who counts as a technology company in the future.