CPU + Vector + Tensor

One Platform for Complete AI Acceleration

Get the best of general-purpose compute, flexible vector acceleration, and powerful AI-specific performance—on one integrated platform.

Why It Matters

Integrated, Efficient, Low-Power, and Low-Latency Computing

No need for separate accelerators

Everything is tightly integrated and easy to program through a single RISC-V ISA (see the code sketch after these cards)

More efficient

Lower power, lower latency, and no data-copying between blocks

Scales with your needs

From edge AI to heavy-duty inferencing and training
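
As a rough illustration of what a single ISA means in practice, the sketch below writes a SAXPY loop with the standard RISC-V Vector (RVV) intrinsics from riscv_vector.h: scalar control flow and vector math live in the same C function and the same address space, with no accelerator API in between. This is a generic RVV example, not Semidynamics-specific code, and it assumes a toolchain with RVV 1.0 intrinsics support (e.g. compiled with -march=rv64gcv).

```c
#include <stddef.h>
#include <riscv_vector.h>

/* y[i] = a * x[i] + y[i] -- scalar loop control and vector math share
 * one ISA, one address space, and one compiler toolchain. */
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ) {
        size_t vl = __riscv_vsetvl_e32m8(n - i);            /* elements handled this pass */
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(x + i, vl); /* load a chunk of x */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(y + i, vl); /* load a chunk of y */
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);        /* vy += a * vx */
        __riscv_vse32_v_f32m8(y + i, vy, vl);               /* store result in place */
        i += vl;
    }
}
```

Because the vector length vl is returned by the hardware on every iteration, the same binary runs unchanged across different vector unit widths.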

What You Get

Powerful CPU, Flexible Vector Unit, and Scalable Tensor Unit for AI

Powerful CPU core (Atrevido)

Flexible vector unit (V4, V8, V16, V32)

Scalable tensor unit (T1, T2, T4, T8) for AI workloads

Runs YOLO, Llama, and LLMs with ease

Simple software—no DMA, no kernel changes
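
To make the no-DMA, no-kernel-changes point concrete, here is a hedged sketch of a dot-product inner loop, the building block of the matrix multiplies that dominate vision and language inference, written as ordinary user-space C with the standard RVV intrinsics. There are no driver calls, buffer mappings, or copy steps because the compute units address the same memory as the CPU; again, this uses the generic RISC-V intrinsics rather than any product-specific API.

```c
#include <stddef.h>
#include <riscv_vector.h>

/* Dot product: the inner loop behind convolution and attention layers.
 * Plain user-space C -- no DMA setup, no kernel driver. */
float dot_f32(size_t n, const float *a, const float *b) {
    vfloat32m1_t acc = __riscv_vfmv_v_f_f32m1(0.0f, 1);         /* running sum, one lane */
    for (size_t i = 0; i < n; ) {
        size_t vl = __riscv_vsetvl_e32m8(n - i);                 /* elements this pass */
        vfloat32m8_t va = __riscv_vle32_v_f32m8(a + i, vl);
        vfloat32m8_t vb = __riscv_vle32_v_f32m8(b + i, vl);
        vfloat32m8_t prod = __riscv_vfmul_vv_f32m8(va, vb, vl);  /* element-wise a*b */
        acc = __riscv_vfredusum_vs_f32m8_f32m1(prod, acc, vl);   /* acc[0] += sum(prod) */
        i += vl;
    }
    return __riscv_vfmv_f_s_f32m1_f32(acc);                      /* read back the scalar */
}
```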

Great For

Scalable AI Inference and Advanced Applications

AI inference
(vision, language, recommendation)

High-throughput AI applications

Explore CPU + Vector + Tensor Systems

Atrevido CPU

Out-of-Order
High-Performance RISC-V Core

More on Atrevido CPU

Vector Unit

Out-of-Order
The fastest Vector Unit

More on Vector Unit

Tensor Unit

World’s first fully programmable RISC-V Tensor Unit

More on Tensor Unit