What is an NPU? And why is it key to unlocking on-device generative AI?


By Staff | OnQ Blog

The generative artificial intelligence (AI) revolution is here. With the growing demand for generative AI use cases across verticals with diverse requirements and computational demands, there is a clear need for a refreshed computing architecture custom-designed for AI. It starts with a neural processing unit (NPU) designed from the ground-up for generative AI, while leveraging a heterogeneous mix of processors, such as the central processing unit (CPU) and graphics processing unit (GPU). By using an appropriate processor in conjunction with an NPU, heterogeneous computing maximizes application performance, thermal efficiency and battery life to enable new and enhanced generative AI experiences.


Figure 1: Choosing the right processor, like choosing the right tool in a toolbox, depends on many factors and enhances generative AI experiences.

Why is heterogeneous computing important?

Because of the diverse requirements and computational demands of generative AI, different processors are needed. A heterogeneous computing architecture with processing diversity makes it possible to play to each processor's strengths: an AI-centric, custom-designed NPU alongside the CPU and GPU, each excelling in a different task domain. For example, the CPU excels at sequential control and low-latency tasks, the GPU at streaming parallel data, and the NPU at core AI workloads built on scalar, vector and tensor math.
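As a toy illustration of this division of labor, the task-to-processor mapping can be sketched as a simple dispatch table. The workload categories and processor labels below are illustrative only, not an actual Qualcomm scheduling API:

```python
# Hypothetical mapping of workload categories to the processor best
# suited for them; a real scheduler would also weigh power, thermals
# and current load.
WORKLOAD_AFFINITY = {
    "sequential_control": "CPU",  # branchy, latency-sensitive logic
    "parallel_streaming": "GPU",  # data-parallel work such as pixel processing
    "neural_inference":   "NPU",  # scalar, vector and tensor math
}

def dispatch(workload_type: str) -> str:
    # Fall back to the general-purpose CPU for anything unclassified
    return WORKLOAD_AFFINITY.get(workload_type, "CPU")

print(dispatch("neural_inference"))  # → NPU
print(dispatch("unknown_task"))      # → CPU
```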

Heterogeneous computing maximizes application performance, device thermal efficiency and battery life, enabling the best generative AI end-user experiences.


Figure 2: NPUs have evolved with the changing AI use cases and models for high performance at low power.

What is an NPU?

The NPU is built from the ground up for accelerating AI inference at low power, and its architecture has evolved along with new AI algorithms, models and use cases. AI workloads primarily consist of computing neural network layers composed of scalar, vector and tensor math followed by a non-linear activation function. A superior NPU design makes the right design choices to handle these AI workloads and stays tightly aligned with the direction of the AI industry.
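As a minimal sketch of that layer math, here is a tiny fully connected layer in plain Python: a matrix-vector product (tensor math built from scalar multiply-accumulates) followed by a ReLU non-linearity. The weights, bias and sizes are made up purely for illustration:

```python
def relu(values):
    # Non-linear activation applied element-wise after the linear step
    return [max(0.0, v) for v in values]

def dense_layer(weights, bias, inputs):
    # Tensor math: one row of scalar multiply-accumulates per output
    # neuron (a matrix-vector product), followed by a bias add
    pre_activation = [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, bias)
    ]
    return relu(pre_activation)

# A 2-neuron layer over a 3-element input vector
W = [[0.5, -1.0,  2.0],
     [1.5,  0.5, -0.5]]
b = [0.5, -0.5]
print(dense_layer(W, b, [1.0, 2.0, 3.0]))  # → [5.0, 0.5]
```

An NPU accelerates exactly this pattern, executing the multiply-accumulates across wide vector and tensor units instead of one scalar at a time.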


Figure 3: The Qualcomm AI Engine consists of the Qualcomm Hexagon NPU, Qualcomm Adreno GPU, Qualcomm Kryo or Qualcomm Oryon CPU, Qualcomm Sensing Hub, and memory subsystem.

Our leading NPU and heterogeneous computing solution

Qualcomm is enabling intelligent computing everywhere. Our industry-leading Qualcomm Hexagon NPU is designed for sustained, high-performance AI inference at low power. What differentiates our NPU is our system approach, custom design and fast innovation. By custom-designing the NPU and controlling the instruction set architecture (ISA), we can quickly evolve and extend the design to address bottlenecks and optimize performance.

The Hexagon NPU is a key processor in our best-in-class heterogeneous computing architecture, the Qualcomm AI Engine, which also includes the Qualcomm Adreno GPU, Qualcomm Kryo or Qualcomm Oryon CPU, Qualcomm Sensing Hub, and memory subsystem. These processors are engineered to work together and run AI applications quickly and efficiently on device.

Our industry-leading performance in AI benchmarks and real generative AI applications exemplifies this. Read the whitepaper for a deeper dive on our NPU, our other heterogeneous processors, and our industry-leading AI performance on Snapdragon 8 Gen 3 and Snapdragon X Elite.


Figure 4: The Qualcomm AI Stack aims to help developers write once and run everywhere, achieving scale.

Enabling developers to accelerate generative AI applications

We enable developers by focusing on ease of development and deployment across the billions of devices worldwide powered by Qualcomm and Snapdragon platforms. Using the Qualcomm AI Stack, developers can create, optimize and deploy their AI applications on our hardware, writing once and deploying across different products and segments using our chipset solutions.
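The write-once, deploy-everywhere pattern can be illustrated generically: a single model definition handed to interchangeable backend targets. The class and backend names below are hypothetical and do not reflect the actual Qualcomm AI Stack API:

```python
# Hypothetical illustration of "write once, deploy everywhere":
# one model artifact, many interchangeable execution targets.
class Backend:
    def __init__(self, name):
        self.name = name

    def run(self, model, inputs):
        # A real backend would compile `model` for its processor;
        # here we simply evaluate the model function directly.
        return model(inputs)

def model(inputs):
    # The developer writes the model once...
    return [2 * x for x in inputs]

# ...and deploys it unchanged across different targets.
for backend in (Backend("cpu"), Backend("gpu"), Backend("npu")):
    print(backend.name, backend.run(model, [1, 2, 3]))
```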

The combination of technology leadership, custom silicon designs, full-stack AI optimization and ecosystem enablement sets Qualcomm Technologies apart to drive the development and adoption of on-device generative AI. Qualcomm Technologies is enabling on-device generative AI at scale.


Download the whitepaper: “Unlocking on-device generative AI with an NPU and heterogeneous computing.”

Check out "The future of AI is hybrid" Part 1

Check out "The future of AI is hybrid" Part 2

