From MacroPolo:
Semiconductor chips are the engines of our digital lives. Fitted with billions of electronic transistors that constitute elaborate integrated circuits, these chips power everything from laptops and mobile phones to smart TVs and automobiles. And the next frontier in this ~$450 billion industry is chips that support artificial intelligence (AI) technologies. These “AI chips” are forecast to comprise up to 20% of the total semiconductor chip market by 2025.
AI—the ability of machines to perform non-routine tasks once thought the preserve of human brains—requires specialized AI chips that are more powerful, more efficient, and optimized for the parallel matrix computation required by advanced machine learning (ML) algorithms. Image recognition, recommendation engines, natural language processing, and autonomous vehicles (AVs) are just some of the use cases that these chips support.
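To make the “parallel matrix computation” point concrete, here is a minimal sketch in Python with NumPy (our choice for illustration; the article names no library, and all sizes below are made up). It shows that the core of a neural-network layer reduces to a matrix multiplication, which is exactly the operation AI chips parallelize across thousands of arithmetic units:

```python
import numpy as np

# A single neural-network layer boils down to one matrix multiplication
# plus an elementwise nonlinearity. AI accelerators speed up exactly this
# pattern by performing many multiply-accumulates in parallel.

batch, d_in, d_out = 64, 1024, 1024   # illustrative sizes, not from the article
x = np.random.randn(batch, d_in)      # a batch of input vectors
W = np.random.randn(d_in, d_out)      # learned weight matrix
b = np.zeros(d_out)                   # learned bias

y = np.maximum(x @ W + b, 0.0)        # matmul + ReLU: the hot loop of ML
print(y.shape)                        # (64, 1024)
```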
The diversity of AI applications means that AI chips are usually designed for specific functions. Rarely is there a one-size-fits-all solution (see “Inside an AI Chip” for more details), putting a premium not just on chip design but also on access to the sophisticated hardware and advanced manufacturing processes necessary to keep up with demand.
AI chips are crucial ground over which the competition for technology leadership will be fought. Tensions between reducing costs, mitigating risks, and investing in innovation will only deepen with time. Therefore, it is imperative for governments and businesses to understand the complex global interdependencies of semiconductor supply chains.
We begin by examining the market drivers for AI chips.
Rising Demand for AI Chips
The demand for AI chips will grow in line with demand for AI products. PwC predicts that AI will create productivity and consumption windfalls that add $15.7 trillion (or 14%) to global GDP in 2030. Insight Partners projects that this expansion will push sales of AI chips from $5.7 billion in 2018 to $83.3 billion in 2027, at a compound annual growth rate (CAGR) of 35%. This rate is nearly ten times the forecast growth in demand for non-AI chips.
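As a quick sanity check on the Insight Partners figure, the implied CAGR for growth from $5.7 billion (2018) to $83.3 billion (2027) over nine years can be computed directly:

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 5.7, 83.3, 2027 - 2018   # billions of dollars; 9 years
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                          # ~34.7%, i.e. roughly the quoted 35%
```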
The market for AI chips used to compute ML algorithms divides into two segments: training and inference. In training, enormous volumes of data are fed into AI algorithms to build and refine the powerful predictive models needed to perform complex tasks in dynamic environments. In inference, those trained models are applied to make real-time decisions based on real-world stimuli. Training was the first market segment to take off, but inference is now the more important segment as AI-enabled devices proliferate.
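The training/inference split maps onto two distinct phases of any ML workload. The toy NumPy sketch below (all names and numbers are illustrative, not from the article) shows why training is the compute-heavy, data-hungry phase, while inference is a single cheap forward pass on a new input:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))             # training data: many examples at once
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=1000)

# --- Training: repeated passes over bulk data to fit model weights ---
w = np.zeros(8)
for _ in range(500):                       # many iterations over the dataset
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= 0.05 * grad                       # gradient-descent update

# --- Inference: one cheap forward pass on a single real-time input ---
x_new = rng.normal(size=8)                 # e.g. a sensor reading at the edge
prediction = x_new @ w                     # a single dot product
print(prediction)
```

Training touches the whole dataset hundreds of times, which is why it lives in large data centers; inference runs once per input, which is why it can move into edge devices.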
Training typically occurs in the “cloud” at large data centers, while inference happens both in the cloud and increasingly at the “edge,” which means within end-use devices like smart home assistants, drones, or AVs. In 2025, cloud computing will probably still be the larger market for AI chips, but edge computing will see faster growth as inference overtakes training (see “AI in the Cloud” section)....