Cervell™ NPU

C1
NPU IP Core for Edge AI

C1 is a compact, fully programmable RISC‑V AI IP core for real‑time inference  

Tight CPU–Vector–Tensor coupling eliminates bolt‑on accelerators and DMA choreography, simplifying design and reducing both latency and power

Key Benefits

Unified architecture maximizing memory efficiency, seamless integration, and optimized edge performance

Memory‑first efficiency

Gazillion™ issues many outstanding requests and sustains DRAM bandwidth to keep utilization high

Single ISA

CPU, Vector, and Tensor share one open RISC‑V software model for faster bring‑up

Right‑sized performance

Ideal for vision, speech, and sensor fusion at the edge

Architecture Highlights

Heterogeneous RISC-V compute with non-blocking memory streaming and unified Linux-ready software

64-bit RISC-V AI Accelerator

64‑bit RISC‑V CPU tightly coupled with RVV 1.0 Vector and a programmable Tensor unit (matrix ops)

Sustained Memory Delivery

Gazillion™ memory streaming to keep engines fed under DRAM pressure

Simple Linux Programming

Linux‑ready bring‑up flows and a unified programming model

Software Path

Single software stack
One RISC‑V ISA across CPU/Vector/Tensor
Linux‑ready bring‑up

Typical Deployments

Smart cameras

Optimized RISC-V IP: Low-power, real-time AI processing for your smart camera

Embedded analytics

Deploy powerful, efficient RISC-V cores for instant, local data analysis

Industrial IoT

Achieve secure, ultra-low latency control and processing in industrial devices

Compact gateways

Build highly efficient, customizable gateways for low-latency network connectivity

Find out more

Our IP is silicon-ready and proven in silicon implementations. Speak to us about reference designs
