NPUs are slowly filtering into the desktop segment, but mobile still gets the best NPU hardware for now.
Arrow Lake-S will be the first Intel desktop architecture with a neural processing unit (NPU), but that NPU won't be as fast as some might expect. @Jaykihn on X reports that Arrow Lake-S will include an NPU that is only slightly more powerful than Meteor Lake's, delivering just 13 TOPS of AI performance.
Arrow Lake-S's NPU performance is well under Lunar Lake's 45 TOPS of NPU-only performance and nowhere near enough to qualify for Copilot+ certification. Arrow Lake-S would need at least 40 TOPS from the NPU alone to count as hardware-compatible with Copilot+ PCs (if Microsoft ever allows desktops into the Copilot+ PC ecosystem).
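For a quick sense of where these figures stand, here is a minimal sketch in Python that checks the TOPS numbers cited above against the 40 TOPS Copilot+ threshold. The values simply restate the reported figures; they are not independent measurements.

```python
# Compare the NPU TOPS figures cited in this article against Microsoft's
# 40 TOPS Copilot+ requirement. These are reported/claimed numbers only,
# not benchmark results.
COPILOT_PLUS_NPU_TOPS = 40

npu_tops = {
    "Meteor Lake": 10,    # previous-generation Intel NPU
    "Arrow Lake-S": 13,   # figure reported by @Jaykihn
    "Lunar Lake": 45,     # Intel's stated NPU-only figure
}

for chip, tops in npu_tops.items():
    verdict = "meets" if tops >= COPILOT_PLUS_NPU_TOPS else "falls short of"
    print(f"{chip}: {tops} TOPS -> {verdict} the {COPILOT_PLUS_NPU_TOPS} TOPS Copilot+ bar")
```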
The performance figures for Arrow Lake-S's NPU suggest that Intel is reusing a variant of Meteor Lake's NPU, which topped out at 10 TOPS (no pun intended), possibly running at a higher clock speed.
It is a mystery why Intel decided to include an NPU in Arrow Lake-S at all, yet not one at least on par with Lunar Lake's NPU (after all, Lunar Lake is Arrow Lake's ultra-low-power mobile counterpart, and the two are architecturally similar). However, there are some takeaways we can surmise.
Having an NPU in a desktop environment is virtually useless; the main job of an NPU is to provide power-efficient AI acceleration with minimal impact on laptop battery life. Desktops are also far more likely than laptops to be paired with discrete GPUs, which provide substantially more AI performance than the best NPUs from Intel, AMD, or Qualcomm. For instance, Nvidia's RTX 40-series graphics cards are capable of up to 1,300 TOPS of AI performance.
As a result, putting a bleeding-edge NPU inside Arrow Lake-S would be virtually pointless and a waste of resources and money. However, there are still the mobile variants of Arrow Lake to consider, which can take better advantage of an NPU.
Intel probably had to choose the best middle ground between the two use cases with Arrow Lake. A lower-performance NPU is presumably cheaper to manufacture, making it a better route than dropping Lunar Lake's NPU into Arrow Lake. Another factor worth mentioning is that many Arrow Lake-equipped laptops will inevitably ship with a discrete Nvidia GPU, which lessens the importance of NPU performance when users can run AI-specific tasks on the Nvidia GPU (which has AI-focused Tensor cores), as the sketch below illustrates. This makes Intel's decision to put a weak NPU inside Arrow Lake even more logical.
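To illustrate how an AI workload can prefer the discrete GPU when one is present, here is a minimal, hypothetical sketch using ONNX Runtime's execution-provider selection. The provider ordering, the model.onnx file, and the input shape are assumptions for illustration, not a description of how any particular application behaves.

```python
# Hypothetical sketch: run an ONNX model on the "best" available accelerator,
# preferring a discrete Nvidia GPU (CUDA) over an Intel NPU/iGPU path (OpenVINO),
# and falling back to the CPU. Provider order and "model.onnx" are assumed.
import numpy as np
import onnxruntime as ort

PREFERRED = [
    "CUDAExecutionProvider",      # discrete Nvidia GPU (Tensor cores)
    "OpenVINOExecutionProvider",  # Intel NPU/iGPU path via OpenVINO
    "CPUExecutionProvider",       # universal fallback
]

available = ort.get_available_providers()
providers = [p for p in PREFERRED if p in available] or ["CPUExecutionProvider"]

# "model.onnx" is a placeholder for whatever model the application ships.
session = ort.InferenceSession("model.onnx", providers=providers)

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumes an image-style input
outputs = session.run(None, {input_name: dummy_input})

print("Running on:", session.get_providers()[0])
```

The point of the sketch is simply that software can route inference to whichever accelerator is present, so a laptop with an RTX GPU does not need a powerful NPU to run the same workloads.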
None of this discounts the idea of Intel creating two (or more) dies for Arrow Lake: one without an NPU aimed at desktops and one with an NPU aimed at mobile systems. Regardless, we will probably get an answer once Arrow Lake arrives later this year.