NPU IP Core

Cervell™ NPU

All-In-One RISC-V NPU
Unified Architecture for Cutting-Edge Intelligence

Customizable Throughput  
Offers modular scaling to meet extreme processing demands, from localized devices to massive infrastructure

Seamless Hardware Fusion 
Integrates multiple compute engines into one system to remove bottlenecks and ensure instantaneous execution

Open-Source Flexibility 
Provides a fully programmable environment that eliminates proprietary restrictions for modern language models and deep learning

Cervell™ Features

Based on the Semidynamics All-In-One RISC-V architecture

Peak Performance
  • 1 GHz, INT8: --
  • 1 GHz, INT4: --
  • 2 GHz, INT8: --
  • 2 GHz, INT4: --
Standard Data Type Support
  • Activations: INT8, INT16, INT32, INT64, FP16, FP32, FP64 (*)
  • Convolutions: INT4, INT8, INT16, FP16, BF16 (*)
(*) Configuration options available
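To illustrate how FP32 activations map onto the INT8 format listed above, here is a minimal sketch of generic symmetric per-tensor quantization. This is a standard technique shown for illustration only; it is not Cervell-specific tooling, and the function names are hypothetical:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of FP32 values to INT8.

    Generic illustration of the INT8 activation format; not tied to
    any particular NPU toolchain.
    """
    scale = float(np.max(np.abs(x))) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values from the INT8 representation."""
    return q.astype(np.float32) * scale

# Quantize a small activation tensor and check the round-trip error
x = np.array([0.5, -1.2, 3.1, -0.01], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
```

Lower-precision formats such as INT4 trade accuracy for throughput in the same way, with a smaller integer range (here, ±7 instead of ±127).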

Key Benefits

Next-Gen AI Computing:
Powerful, Flexible, and Vendor-Free

All-in-One Processing

CPU, Vector, and Tensor units seamlessly combined for zero-latency AI workloads

Standard RISC-V AI Acceleration

Fully programmable, no vendor lock-in

Ideal for

LLMs, Deep Learning, Edge AI, AI Datacenters

High Efficiency

AI acceleration optimized for LLMs, Recommendation Systems, and Deep Learning

Architecture Highlights

Scalable NPU Core for AI
Configurable from 8-64 TOPS
RISC-V Based All-In-One architecture (CPU, Vector, and Tensor)
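The 8-64 TOPS range above can be related to MAC-array size and clock speed with the standard peak-throughput formula TOPS ≈ 2 × MAC units × frequency, since each MAC performs one multiply and one add per cycle. The MAC counts below are hypothetical round numbers chosen only to illustrate the arithmetic; they are not Cervell configuration data:

```python
def peak_tops(mac_units: int, freq_ghz: float) -> float:
    """Peak throughput in TOPS for a MAC array.

    Each MAC unit contributes 2 ops/cycle (one multiply + one add).
    """
    ops_per_second = 2 * mac_units * freq_ghz * 1e9
    return ops_per_second / 1e12  # convert ops/s to tera-ops/s

# Hypothetical array sizes, for illustration only
print(peak_tops(4096, 1.0))   # 4096 MACs at 1 GHz -> 8.192 TOPS
print(peak_tops(16384, 2.0))  # 16384 MACs at 2 GHz -> 65.536 TOPS
```

Note that this is a peak figure; sustained throughput also depends on memory bandwidth and utilization.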