RISC-V CPU

Atrevido A426
CPU core for demanding AI

When you need serious per-core performance, Atrevido delivers 4-wide out-of-order execution and optional tensor math, built to keep models fed and cores busy

Key Benefits

High IPC, low stall time

Wide OOO pipeline plus Gazillion Misses™ to keep issuing while data streams in

Seamless scale-up

Coherent or non-coherent integration. Available in single-core or four-core configurations (Atrevido A426 MP4)

Linux-ready and customizable

Tailor caches, widths, and memory paths without breaking the software stack

AI-ready out of the box

Vector and tensor capabilities ship with the core and its software stack, ready for AI workloads without extra integration work

Architecture Highlights

High-Throughput OOO Pipeline

4-wide OOO pipeline with register renaming, branch speculation, and deep queues for AI throughput

Unblock Memory Bottlenecks

Gazillion Misses™ memory subsystem sustaining up to 128 outstanding misses to pierce the memory wall
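To see why outstanding-miss capacity matters, consider the difference between a dependent pointer chase and independent streaming loads. This is an illustrative sketch, not Semidynamics code: in the chase, each load's address depends on the previous load, so cache misses serialize; in the streaming sum, all addresses are known up front, so an out-of-order core with deep miss queues can keep many misses in flight at once.

```c
#include <stddef.h>

/* Dependent chain: each load's address comes from the previous load,
   so at most one cache miss can be outstanding at a time. */
long chase(const long *next, long start, size_t steps) {
    long i = start;
    for (size_t s = 0; s < steps; s++)
        i = next[i];              /* serialized misses */
    return i;
}

/* Independent loads: addresses are computable in advance, so the core
   can issue many loads concurrently and overlap their miss latency. */
long stream_sum(const long *a, size_t n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) { /* four independent dependence chains */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)           /* tail elements */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
```

Both loops do similar work per element, but only the second exposes memory-level parallelism for the hardware to exploit; a large outstanding-miss budget is what turns that parallelism into sustained bandwidth.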

Integrated Tensor Power

Tensor Unit integrates seamlessly for high-utilization GEMMs

Multi-Fabric Coherency Design

Coherency-friendly design for CHI/NoC fabrics, or AXI for non-coherent deployments

Software Path

Single software stack

Single software stack from CPU-only to CPU+Vector+Tensor (All-in-One IP)

Linux-ready with RVV and TU libraries

Typical Deployments

Memory-Optimized AI Inference

LLM and vision inference where memory stalls dominate

Advanced Pipeline Acceleration

Recommendation & analytics pipelines needing wide OOO and vector/tensor throughput

Seamless Multi-core Scaling

AI kernels that benefit from coherent multi-core scale-up

Find out more

Our IP is silicon-ready and proven in silicon implementations. Speak to us about reference designs

Learn more