Each week we find a new topic for our readers to learn about in our AI Education column.
At this point in AI Education we’d like to assume that our readers have some baseline knowledge of computing, the financial services industry, and artificial intelligence—but we know that we’re in the process of gaining new regular readers here, so we’re not going to gloss over some fundamental knowledge just for the sake of ease and brevity. In that spirit, it’s time to add to our coverage of artificial intelligence hardware with an explanation of AI PCs.
AI PCs are just what they sound like—personal computers designed for artificial intelligence operations. Pretty simple, right? Well, before we move on to tell you how we got to this topic in the first place, let's explain why the AI PC is not really as simple as it may appear. For one thing, consider the relatively large amounts of computing power and energy needed to handle many of the more sophisticated AI applications used thus far by the general public. For the most part, that computing power doesn't reside on a PC; it comes from larger computers, even supercomputers, which users access via the cloud or through terminal computers.
Thus, the AI personal computer is a newer example of AI edge computing, where more operations are done as close to the end user as possible, hopefully leading to more efficient computing. If you've read up on any of our past coverage of artificial intelligence hardware, particularly regarding AI chips, you'll recall that the personal computer typically relies on a microprocessor called a central processing unit, or CPU, to handle most of its core functions, and will also likely have a graphics processing unit, or GPU, to handle more resource-intensive operations like rendering video in real time. However, in newer AI PCs, the CPU and GPU may be joined by more AI-specific chips.
How Did We Get Here?
We ran into this topic three times recently, spurring us to write an AI Education column dedicated to AI PCs. The first was when we stumbled upon discussions of artificial intelligence PCs while researching our columns on AI-specific chips. It turns out that AI PCs often rely on some of the new types of hardware we've recently discussed, including tensor processing units (TPUs) and neural processing units (NPUs).
The second was when we wrote our roundup of top AI-related stories for 2024 before New Year's Eve. One of our big stories for the year was the wave of new personal AI hardware rolled out in 2024, including Microsoft's entry into the AI PC market, the Copilot+ PC, which it billed as the "fastest, most intelligent Windows PCs ever built" in announcing the new computers back in May.
Finally, this past week in Las Vegas, AI PCs were one of the hot topics at the Consumer Electronics Show (CES), spurring a bevy of positive and negative headlines. Companies like AMD launched new chips aimed at the artificial intelligence in personal computing market, while industry watchdogs questioned whether AI chips and personal computers are worth the hype.
What Is the Copilot+?
Microsoft's AI PC runs on the customary CPU and GPU, adding a neural processing unit (NPU) to the mix. An NPU is a microprocessor built with a dense, neural-network-like architecture, enabling it to compute the high-level math functions necessary for machine learning more efficiently than conventional chips. Before we go on, we should note that the NPUs used in the Copilot+ line of computers, like the AI chips in most commercially available AI PCs, are a far cry from the ultra-powerful chips being designed by the likes of Nvidia and AMD for artificial intelligence data centers. Hence the idea of the AI PC: some, but not all, artificial intelligence functions are done on the PC itself.
Microsoft boasts that certain Copilot+ PCs greatly outperform similarly powered conventional PCs running on just a CPU and GPU, offering 20x more power and 100x more efficiency when running AI workloads. The processing power enables users to run small language models, or SLMs, locally while simultaneously accessing large language models in the cloud. This allows the computers to balance artificial intelligence functions between their onboard computing capabilities and the higher-powered computing available via the cloud, offering the end user a smooth experience while reducing the energy and infrastructure required for AI-infused personal computing.
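To make the local-versus-cloud balancing act concrete, here is a minimal sketch of how a hybrid AI PC might route requests: simple prompts go to an on-device SLM, while more demanding ones go to a cloud-hosted LLM. The function names, the stubbed model calls, and the length-based threshold are all illustrative assumptions, not Microsoft's actual implementation.

```python
def run_local_slm(prompt: str) -> str:
    # Stand-in for an on-device small language model running on the NPU.
    return f"[local SLM] response to: {prompt[:30]}"

def run_cloud_llm(prompt: str) -> str:
    # Stand-in for a call to a cloud-hosted large language model.
    return f"[cloud LLM] response to: {prompt[:30]}"

def route(prompt: str, complexity_threshold: int = 50) -> str:
    """Route short, simple prompts to the local SLM and longer,
    more demanding ones to the cloud LLM (a toy heuristic)."""
    if len(prompt) <= complexity_threshold:
        return run_local_slm(prompt)
    return run_cloud_llm(prompt)

print(route("What time is it?"))
print(route("Draft a 2,000-word market analysis of AI PC adoption trends through the end of the decade."))
```

In a real system the routing heuristic would weigh far more than prompt length (task type, privacy, battery, connectivity), but the principle is the same: keep cheap work on the device and reserve the cloud for heavy lifting.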
The Copilot+ PCs come with a set of functions that Microsoft believes will be new to most users, including a Recall capability (still being rolled out at this writing) enabling users to have a built-in “photographic memory” of everything they’ve seen and done with a computer over time. Copilot+ also comes with co-creating capabilities, which merge visual generative AI with Microsoft’s image and photo editing tools. Third-party applications, including portions of Adobe’s Creative Cloud suite of apps, can also access Copilot+’s AI processing capabilities.
Microsoft is far from alone; the Copilot+ lineup is just one of the more visible examples of an AI PC on the market today.
Is It Hype?
Journalists and technology watchdogs at CES cited surveys of business IT leaders whose reactions to AI PCs can fairly be described as tepid at best. With that in mind, it's unlikely that everyone will throw out their conventional PC in 2025 in favor of an AI PC. To paraphrase a CES review in Computer Weekly, businesses are not accelerating their technology cycles just to implement AI PCs. It's also important to note that AI PCs, for the most part, are not introducing new functionality to most end users. We can all access generative AI capabilities on our computers without an AI PC—but we probably can't do it with the speed and energy efficiency of an AI PC.
But I would also hesitate to call the hubbub around artificial intelligence personal computers just hype. There is a sense of urgency around bringing more AI processing power to the edge: there's a physical limit to the number of data centers that can be built, furnished and powered in a given period of time, and by most measures the deployment of AI across the economy is still accelerating.
Thus, I believe the deployment of AI PCs will accelerate as well, and I'm not alone. Take this from a 2024 report from market research firm Canalys: in 2024, AI PCs accounted for fewer than one in five (19%) PC shipments, and the firm projects that by 2027, AI PCs will make up 60% of all PC shipments. My guess is that, by the end of the decade, the conventional personal computer as we've known it will be essentially obsolete (after a fantastic five-decade-plus run) and the term "personal computer" will refer to AI PCs.