
AMD’s New Laptop Chips Run AI Locally Without the Cloud — Here’s What That Actually Means

MegaOne AI · Apr 1, 2026 · Updated Apr 2, 2026 · 4 min read
Engine Score 7/10 — Important
  • AMD launched the Ryzen AI 400 series laptop processors at CES 2026, featuring up to 12 Zen 5 cores and a 60 TOPS NPU for running AI workloads locally without cloud connectivity.
  • The flagship Ryzen AI 9 HX 475 boosts to 5.2 GHz and includes 16 RDNA 3.5 GPU compute units clocked at up to 3.1 GHz.
  • Laptops from Acer, ASUS, Dell, HP, Lenovo, and MSI start at $499, with configurable power envelopes ranging from 15W to 54W.
  • AMD claims the Ryzen AI 400 series delivers up to 70% better performance than Intel’s competing Lunar Lake platform at the same thermal design power.

What Happened

AMD announced the Ryzen AI 400 series processors at CES 2026, targeting laptops and compact desktops that can run artificial intelligence workloads entirely on-device without requiring cloud connectivity. The chips, codenamed “Gorgon Point,” combine Zen 5 CPU cores with an upgraded XDNA 2 neural processing unit rated at up to 60 TOPS — a 20% increase over the previous generation’s 50 TOPS capability.

The lineup is designed to exceed Microsoft’s Copilot+ PC certification requirements, which demand a minimum of 40 TOPS from the neural processing unit. AMD’s 60 TOPS figure surpasses that threshold by 50%, providing meaningful headroom for increasingly demanding AI applications.
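The certification math above is straightforward to check. A minimal sketch using only the figures quoted in this article:

```python
# Copilot+ PC certification headroom, using the figures from the article.
COPILOT_MIN_TOPS = 40   # Microsoft's NPU minimum for Copilot+ certification
RYZEN_AI_400_TOPS = 60  # AMD's XDNA 2 rating for the Ryzen AI 400 series
PREV_GEN_TOPS = 50      # Ryzen AI 300 series rating, per the article

# Percentage above the certification floor, and generation-over-generation uplift.
headroom_pct = (RYZEN_AI_400_TOPS - COPILOT_MIN_TOPS) / COPILOT_MIN_TOPS * 100
gen_uplift_pct = (RYZEN_AI_400_TOPS - PREV_GEN_TOPS) / PREV_GEN_TOPS * 100

print(f"Headroom over Copilot+ minimum: {headroom_pct:.0f}%")  # 50%
print(f"Generational NPU uplift: {gen_uplift_pct:.0f}%")       # 20%
```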

Laptops powered by the new chips began shipping from OEM partners in the first quarter of 2026, with retail prices starting at $499.

Why It Matters

Most AI workloads today run in the cloud, which requires a persistent internet connection, introduces latency, and raises data privacy concerns for businesses handling sensitive information. On-device AI processing eliminates all three issues. Users can run large language model inference, image generation, real-time translation, and other AI tasks directly on their laptops without sending data to external servers.

The 60 TOPS NPU rating puts AMD ahead of Intel’s competing Lunar Lake platform in dedicated AI processing power. AMD claims the Ryzen AI 400 series delivers up to 70% better overall performance than Intel at the same thermal design power, a metric that matters for battery-powered devices where every watt of power consumption affects runtime.

For enterprise IT buyers, the configurable TDP range of 15W to 54W means the same chip architecture can power ultralight notebooks for traveling sales teams and high-performance workstations for engineering departments, simplifying procurement and software deployment across an organization.
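How OEMs use that configurable range can be sketched as a simple clamp: the chassis designer picks a sustained power budget, and the chip's cTDP window bounds it. The helper below is hypothetical, not an AMD API; only the 15W–54W window and 28W default come from the article.

```python
# Hypothetical sketch: mapping an OEM chassis power budget onto the
# Ryzen AI 400 series' configurable TDP window described in the article.
CTDP_MIN_W = 15      # floor: ultralight notebook designs
CTDP_MAX_W = 54      # ceiling: high-performance workstation designs
DEFAULT_TDP_W = 28   # AMD's default configuration

def configure_tdp(requested_watts: float) -> float:
    """Clamp a chassis thermal budget to the chip's supported cTDP range."""
    return max(CTDP_MIN_W, min(CTDP_MAX_W, requested_watts))

# A thin-and-light asking for 12 W is held at the 15 W floor;
# a desktop-replacement chassis asking for 65 W is capped at 54 W.
print(configure_tdp(12))            # 15
print(configure_tdp(65))            # 54
print(configure_tdp(DEFAULT_TDP_W)) # 28
```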

Technical Details

The flagship Ryzen AI 9 HX 475 features 12 cores arranged in a hybrid configuration: four high-performance Zen 5 cores and eight power-efficient Zen 5c cores, providing 24 threads total. The chip boosts to 5.2 GHz and carries 36 MB of combined cache, split between 12 MB of L2 and 24 MB of L3. Integrated graphics come from the Radeon 890M with 16 RDNA 3.5 compute units running at up to 3.1 GHz.
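The quoted figures are internally consistent if both core types support simultaneous multithreading (two threads per core) — an assumption implied by, but not stated in, the 12-core/24-thread pairing:

```python
# Tallying the Ryzen AI 9 HX 475 configuration quoted in the article.
# Assumes SMT (2 threads per core) on both Zen 5 and Zen 5c cores,
# which is what makes the 12-core / 24-thread figure add up.
zen5_cores = 4        # high-performance cores
zen5c_cores = 8       # power-efficient cores
threads_per_core = 2  # assumed SMT

total_cores = zen5_cores + zen5c_cores
total_threads = total_cores * threads_per_core
total_cache_mb = 12 + 24  # 12 MB L2 + 24 MB L3

print(total_cores, total_threads, total_cache_mb)  # 12 24 36
```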

Memory support reaches LPDDR5X at 8,533 MT/s, a step up from the Ryzen AI 300 series’ limit of 8,000 MT/s. The faster memory bandwidth benefits both AI inference tasks and integrated graphics performance. The entire lineup is manufactured on TSMC’s 4nm FinFET process node, with a default TDP of 28W and AMD’s configurable TDP allowing OEM partners to adjust between 15W and 54W based on chassis thermal design.
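The article quotes a transfer rate, not bandwidth. A rough peak-bandwidth estimate follows from the rate and the bus width — note that the 128-bit bus below is an assumption typical for this laptop class, not a figure from the article:

```python
# Rough peak-bandwidth estimate from the quoted LPDDR5X transfer rate.
# The 128-bit bus width is assumed (common in this laptop class);
# only the 8,533 and 8,000 MT/s rates come from the article.
bus_width_bits = 128
bytes_per_transfer = bus_width_bits / 8  # 16 bytes per transfer

peak_gb_s = 8533e6 * bytes_per_transfer / 1e9      # Ryzen AI 400 series
prev_gen_gb_s = 8000e6 * bytes_per_transfer / 1e9  # Ryzen AI 300 series

print(f"~{peak_gb_s:.1f} GB/s vs ~{prev_gen_gb_s:.1f} GB/s previous gen")
```

Under that assumption the uplift is roughly 136.5 GB/s versus 128 GB/s. Bandwidth matters here because token generation in local LLM inference is typically memory-bound rather than compute-bound, so faster memory translates fairly directly into throughput.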

The XDNA 2 NPU architecture handles AI inference tasks — including real-time translation, content generation, background noise removal, and live camera effects — without engaging the CPU or GPU. This dedicated processing path preserves battery life during sustained AI workloads by keeping the higher-power compute units in low-power states.

Additional models in the lineup include the Ryzen AI 9 465 and Ryzen AI 7 450, which offer lower core counts and NPU ratings for mid-range and budget laptop segments while maintaining Copilot+ PC certification.

Who’s Affected

OEM partners including Acer, ASUS, Dell, HP, Lenovo, GIGABYTE, and MSI will ship laptops and compact desktops across consumer, business, and workstation categories. The $499 starting price positions AMD’s AI-capable laptops within reach of mainstream business buyers, students, and general consumers — not just high-end professionals and early adopters.

Software developers building AI-native Windows applications benefit from the standardized XDNA 2 NPU, which provides a consistent hardware target across multiple OEM devices and form factors. Microsoft’s Copilot+ certification creates a baseline guarantee of AI functionality that developers can design against without worrying about hardware fragmentation.

What’s Next

Ryzen AI 400 series laptops are now shipping from OEM partners globally. The main constraint on the platform’s AI capabilities is now software optimization. While the NPU hardware delivers strong raw performance, the ecosystem of desktop applications designed to use local AI acceleration through AMD’s XDNA 2 SDK is still maturing. The practical impact of 60 TOPS on daily workflows will depend on how quickly application developers optimize for on-device inference rather than defaulting to cloud-based AI processing.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
