CPU + Vector + Tensor
Get the best of general-purpose compute, flexible vector acceleration, and powerful AI-specific
performance—on one integrated platform.
Why It Matters
Integrated, Efficient, Low Power, and
Low Latency Computing

No need for separate accelerators
Everything is tightly integrated around a single ISA and easy to program

More efficient
Lower power, lower latency, and no data-copying between blocks

Scales with your needs
From edge AI to heavy-duty inferencing and training
What You Get
Powerful CPU, Flexible Vector Unit, and
Scalable Tensor Unit for AI

Powerful CPU core (Atrevido)

Flexible vector unit (V4, V8, V16, V32)

Scalable tensor unit (T1, T2, T4, T8) for AI workloads

Runs YOLO, Llama, and other AI models with ease

Simple software—no DMA, no kernel changes
Great For
Scalable AI Inference and
Advanced Applications

AI inference
(vision, language, recommendation)
