AMD: On-Chip AI Integration is the Future of PC Computing


By Staff
In January 2023, AMD unveiled select Ryzen™ 7040 Mobile Series processors with an integrated AMD Ryzen AI engine¹. This dedicated engine is built on the AMD XDNA™ architecture and is designed to offer lower latency, better battery life, and a secure environment for running AI workloads compared to sending data to the cloud. The launch of Ryzen AI was a first for x86 processors and an investment in the future of computing, but why integrate an AI engine in the first place?

There has been an explosion of interest in AI over the past year as businesses and the general public have seen what ChatGPT, Stable Diffusion, and other generative AIs can do. AI deployment is in its infancy, but the field is maturing rapidly as companies like Microsoft, Google, and Adobe enter the space.

The computing industry has seen workloads move from research labs and supercomputing settings to personal computing devices before. The inclusion of the floating-point unit (FPU) in consumer CPUs and the advent of affordable consumer 3D graphics cards are both examples of this type of migration. Historically, integration of these new technologies has led to periods of high growth, often in unexpected ways.

Before CPU manufacturers began adding on-die floating point co-processors in the early 1990s, games like Doom were lauded for their use of fixed-point math. The wider availability of FPUs made the leap to fully 3D games like Quake possible just a few years later by driving significantly better visual quality and higher frame rates.

The same step forward brought the productivity benefits of sophisticated parametric CAD applications from $50,000 workstations down to $5,000 PCs, democratizing the technology by making it affordable to more companies. Over time, the addition of AVX SIMD instructions in CPUs dramatically increased floating-point compute performance, enabling the ray-traced rendering and imagery we are now used to seeing in Hollywood movies.

Another example of how technologies have driven growth in unexpected ways is the introduction of the first consumer 3D graphics cards. These devices found immediate application in drawing shaded and textured triangles to display 3D objects. As the gaming industry drove increased performance demands on these early graphics cards, device makers began to add limited programmability before finally transforming into the general-purpose SIMD compute engines that they are today. These same GPU engines added high precision 64-bit floating point capabilities and were adopted for simulation and scientific analysis applications, before the recent shift to AI models that operate well with lower-precision datatypes.

In all these instances, both consumer and commercial, new use cases emerged as the technology adoption rose and the accelerators became more capable. AI is poised to make a similar leap. Select Ryzen 7040 Series processors are designed to support the processing needs of early adopters who prize the additional compute capability and unique features of a dedicated AI engine, but who also require support for software optimized for more conventional processing.

The Right Tool for the Right Task

Ryzen AI includes the ability to run AI workloads on a number of different compute engines in the processor. The best path for execution depends on the type of task, software support, model optimization, and the relative capability of each compute engine in the system. Ryzen 7040 Mobile Series processors are unique in their class because they offer the flexibility of leadership CPU, GPU, and AMD XDNA architecture-based computing capabilities. This provides developers and end-users unprecedented flexibility in workload execution, even as AI workloads evolve over time.
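To make the idea of routing a workload to the best engine concrete, here is a toy dispatcher sketch. The engine names and selection rules are illustrative assumptions for this article, not AMD's actual scheduling logic or any real runtime API:

```python
# Toy dispatcher: pick a compute engine for an inference request.
# Engine names and the selection heuristics below are illustrative
# assumptions, not AMD's actual scheduling logic.

def pick_engine(dtype: str, batch_size: int, has_npu: bool = True) -> str:
    """Return a compute engine label for an inference workload."""
    if has_npu and dtype in ("int8", "bf16"):
        return "XDNA"   # quantized models map well to a dedicated AI engine
    if batch_size > 1:
        return "GPU"    # batched work favors the highly parallel iGPU
    return "CPU"        # fall back to AVX-512-capable CPU cores

print(pick_engine("int8", 1))   # XDNA
print(pick_engine("fp32", 8))   # GPU
print(pick_engine("fp32", 1))   # CPU
```

In a real deployment this decision is made by the runtime and driver stack, but the sketch captures the point: the same model can land on different silicon depending on precision, batch size, and what hardware is present.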

CPU vs. AMD XDNA Architecture Execution

Ryzen 7040 processors feature "Zen 4" based CPU cores which include support for the AVX-512 ISA (Instruction Set Architecture) extensions. This specialized instruction set allows the CPUs to execute AI workloads much more quickly in supported applications. CPUs are not typically used for training an AI model, but they can perform well in certain inference workloads, and AVX-512 provides an additional advantage when supported by the application. While the "Zen 4" architecture offers some unique capabilities for processing AI workloads, there are still architectural advantages to the AMD XDNA AI engine.
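To see why a wide SIMD instruction set helps, consider that one AVX-512 register holds sixteen 32-bit floats, so a single fused multiply-add instruction does sixteen scalar operations' worth of work. The pure-Python sketch below only models the lane count, not real vector hardware:

```python
# Conceptual model of SIMD width: a 512-bit AVX-512 register holds
# sixteen float32 lanes, so one fused multiply-add (FMA) instruction
# covers 16 scalar multiply-adds. Pure-Python illustration only.

LANES = 512 // 32   # 16 float32 values per AVX-512 register

def simd_fma(acc, a, b):
    """One vector FMA across all lanes (models a single vfmadd)."""
    return [acc[i] + a[i] * b[i] for i in range(LANES)]

# Dot product of two 64-element vectors: 64 / 16 = 4 vector FMAs
# instead of 64 scalar multiply-adds.
a = [1.0] * 64
b = [2.0] * 64
acc = [0.0] * LANES
for off in range(0, 64, LANES):
    acc = simd_fma(acc, a[off:off + LANES], b[off:off + LANES])
print(sum(acc))   # 128.0
```

Inference kernels are dominated by exactly this kind of multiply-accumulate work, which is why AVX-512 support matters for CPU-side AI performance.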

[Figure 1: A conventional multi-core CPU with its cache hierarchy (left) vs. a block diagram of the AMD XDNA AI Engine (right)]


The left-hand side of the image above shows a conventional multi-core CPU with a mixture of individual and shared caches as well as the CPU's link to main memory. The right side shows a block diagram of the AMD XDNA AI Engine and its fundamentally different memory hierarchy.

Traditional CPUs rely on a mixture of private and shared caches to reduce memory access latency and improve performance. Communication between CPU cores is handled by shared interconnects or via shared caches. This arrangement works well when executing the types of workloads CPUs excel at, but it isn't the best solution for an AI engine. AI engines run optimally when they can deterministically schedule memory operations, but a typical CPU's memory latency varies depending on whether information is found within a cache or if the chip must retrieve it from main memory.
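The variable-latency point can be felt even from a high-level language. The sketch below walks the same array in sequential and shuffled order; the work is identical, but the shuffled walk defeats the cache and prefetcher, so its timing is typically worse. Absolute numbers are machine-dependent and Python overhead can mask the gap, so this is a rough illustration only:

```python
# Sketch: CPU memory latency varies with access pattern, because a
# cache hit is far cheaper than a trip to main memory. Walking the
# same data sequentially vs. in shuffled order hints at this effect;
# exact timings vary by machine and Python overhead blunts the gap.
import random
import time

N = 1_000_000
data = list(range(N))
shuffled = list(range(N))
random.shuffle(shuffled)

def walk(indices):
    """Sum data[i] over the given index order, returning (seconds, total)."""
    t0 = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - t0, total

seq_t, seq_sum = walk(range(N))
rnd_t, rnd_sum = walk(shuffled)
print(seq_sum == rnd_sum)   # True: same work either way
```

An AI engine that schedules its memory movement deterministically never pays this lottery; every transfer takes a known number of cycles, which is what the XDNA design exploits.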

GPU vs. AMD XDNA Architecture Execution

More recently, GPUs have been the compute engine of choice for executing AI workloads due to their programmable shader architecture, high degree of parallelism, and efficient floating-point compute capabilities. Ryzen 7040 Series processors also include an AMD RDNA™ 3-based graphics processor, which offers another powerful engine to execute AI workloads.

In many AI workloads, GPUs can offer higher performance than CPUs can deliver. However, even GPUs may have limitations which make them less optimized than dedicated hardware for AI processing. They often contain hardware blocks vital for 3D rendering that aren't used when running AI code, making them potentially less efficient than a dedicated accelerator. Additionally, GPUs emphasize executing operations across hundreds or thousands of cores. They use their own sophisticated memory architectures (not shown above) and hide cache misses by leveraging the intrinsic parallelism of graphics workloads. These capabilities are critical in graphics rendering, but they don't necessarily enhance the performance of an AI processor.

[Figure 2: An archetypal neural network (left) vs. the AMD XDNA adaptive dataflow architecture at the heart of the AMD Ryzen AI engine (right)]


The image above shows an archetypal neural network on the left and the AMD XDNA adaptive dataflow architecture at the heart of the AMD Ryzen AI engine on the right. The connections running from L1 to L6 simulate the way neurons are connected in the human brain. The Ryzen AI engine is flexible and can allocate resources differently depending on the underlying characteristics of the workload, but the example above works as a proof-of-concept.

Imagine a workload in which each neural layer performs a matrix multiply or convolution operation against incoming data before passing the new values to the next neuron(s) down the line. The AMD XDNA architecture is a dataflow architecture, designed to move data from compute array to compute array without the need for large, power-hungry, and expensive caches. One of the goals of a dataflow architecture is to avoid unexpected latency caused by cache misses by not needing a cache in the first place. This type of design emphasizes high performance without incurring latency penalties while fetching data from a CPU-style cache. It also avoids the increased power consumption associated with large caches.
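The layer-to-layer flow described above can be sketched as a chain of generator stages, where each stage transforms a value and hands it straight to the next with nothing buffered in between. This is a toy analogue of a dataflow array, not the actual XDNA programming model:

```python
# Toy dataflow pipeline: each "tile" pulls values from the stage
# upstream, transforms them, and streams results onward, with no
# shared cache in between. Models the flow of data only, not the
# AMD XDNA hardware or its toolchain.

def tile(stage_fn, upstream):
    """A compute tile: consume upstream values, transform, pass along."""
    for value in upstream:
        yield stage_fn(value)

def source(values):
    yield from values

# Three layers: apply weights, add bias, clamp (a ReLU-like step).
pipeline = tile(lambda x: max(x, 0),      # layer 3: ReLU
           tile(lambda x: x + 1,          # layer 2: bias
           tile(lambda x: 2 * x,          # layer 1: weights
           source([-3, 0, 4]))))

print(list(pipeline))   # [0, 1, 9]
```

Each value moves through all three stages without ever being parked in a shared store, which is the property the dataflow design uses to keep latency predictable and avoid cache power.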

Advantages of Executing AI on the AMD XDNA Architecture

High performance CPU and GPU technologies are important pillars of AMD's long-term AI strategy, but they aren't as transformative as integration of an AI engine on-die could be. Today, AI engines are already being used to offload certain processing tasks from the CPU and GPU. Tasks like background blur, facial detection, and noise cancellation can all be performed more efficiently on a dedicated AI engine, freeing CPU and GPU cycles for other work while improving power efficiency at the same time.

Integrating AI into the APU has several advantages. First, it tends to reduce latency and increase performance compared to attaching a device via the PCIe® bus. When an AI engine is integrated into the chip it can also benefit from shared access to memory and greater efficiency through more optimal data movement. Finally, integrating silicon on-die makes it easier to apply advanced power management techniques to the new processor block.

An external AI engine attached via a PCI Express® or M.2 slot is certainly possible, but integrating this capability into our most advanced "Zen 4" and AMD RDNA™ 3 silicon was a better way to make it available to customers without sacrificing the advantages above. Applications that leverage this local processor can benefit from the faster response times and more consistent performance it enables.

It's an exciting time in AI development. Today, customers, corporations, and manufacturers are evaluating AI at every level and power envelope. The only certainty in this evolving space is that if we could look ahead 5-7 years, we wouldn't "just" see models that do a better job at the same tasks that ChatGPT, Stable Diffusion, or Midjourney perform today. There will be models and applications for AI that nobody has even thought of yet. The AI performance improvements AMD has integrated into select processors in the Ryzen 7040 Mobile Series processors give developers and end-users the flexibility and support they need to experiment, evaluate, and ultimately make that future happen.

Looking at the right side of the top pic... it's about time.
They talked about making chips like this back in the '80s (Scientific American, for one).

Whether or not we can call this AI, though, is an entirely different kettle of fish.
AI is supposed to "learn" and then go off on its own... thinking for itself.

I don't see that happening just because we finally re-wired chips in a more efficient manner.

In other words... I think the AI term being bandied about is more marketing than actual AI.