
By Pedro Marcuello  |  July 10th, 2025

AI runs on vectors

A key recent trend in AI has been the emphasis on vectorizing data. This is not merely a technical recommendation but a fundamental shift in enterprise data and AI strategy. To thrive in the new industrial revolution, companies must transform into AI-powered organizations, rethinking how data flows, scales, and drives decision-making.

Processing vectorized data efficiently requires hardware architectures optimized for parallel processing and high throughput. Of course, if you ask a GPU company, the answer to this problem, like every other, is a GPU; and fair enough: GPUs are designed for parallel processing and excel at handling large-scale vectorized data.

GPUs are vector-like, but they are not vector processors in the classical architectural sense. They dominate AI workloads thanks to their massive parallelism, but modern vector architectures such as the RISC-V Vector Extension provide more granular control, customizability, and efficiency for specific workloads, which is particularly valuable in inference, embedded AI, and power-sensitive environments.

While GPUs remain the dominant force in AI—especially for training large models—vector architectures are rapidly gaining traction, particularly for inference and edge computing. Their ability to deliver high-throughput, power-efficient performance with greater architectural flexibility makes them ideal for specialized AI workloads. With RISC-V vector extensions, custom chips like Semidynamics' Cervell, and growing adoption in HPC systems, vector compute is increasingly becoming a vital complement to GPU-heavy architectures. The future of AI hardware looks hybrid, with vectorization playing a critical role in optimizing performance beyond the datacenter.

Our CEO has been a vector champion for many years: from 2004 to 2014, he was instrumental in creating a vector extension for the Intel x86 Instruction Set Architecture (ISA). That extension evolved into AVX-512, a set of instructions designed to accelerate compute-intensive applications.

At Semidynamics we firmly believe that the future for AI hardware processing is not proprietary ISAs like x86 and Arm, but open and flexible ones: namely RISC-V.

Why? There is no vendor lock-in and developers can take full control over the architecture, which is crucial for AI startups building custom hardware.

Using RISC-V, you can build highly efficient vector computing that scales better for AI workloads than vector extensions grafted onto legacy-ISA CPUs.

For vectorized data processing with RISC-V, architectures like Semidynamics’ Atrevido cores offer a compelling alternative to traditional CPUs, particularly in AI, HPC, and custom accelerators.

Semidynamics specializes in fully customizable RISC-V CPUs with vector extensions, making them ideal for workloads that require high-throughput parallelism. We implement the RISC-V Vector Extension 1.0, which enables data-level parallelism for AI, scientific computing, and data processing. And unlike off-the-shelf CPUs, Semidynamics lets customers tailor vector width, cache size, and memory bandwidth to fit their exact needs, making our cores well suited to AI accelerators, edge devices, and HPC workloads.

Another of our innovations is Gazzillion™ memory streaming, which allows ultra-fast, non-blocking memory accesses for workloads that need to process vast amounts of data. This complements vector or tensor processing in AI inference and real-time signal processing, where memory bottlenecks can slow down performance.

If the only answer for you is a GPU… well, that’s fine, because our all-in-one architectural approach combines the best features of a GPU, tensor core, CPU, vector processor, and more.

Cervell, the latest innovation from Semidynamics, expands on this framework. It is an AI-focused processor with a RISC-V foundation that combines tensor, vector, and scalar processing into a single, fully programmable architecture. Pairing adjustable vector units with Gazzillion™ memory streaming and native support for quantization and sparsity, Cervell is built for edge AI and inference-heavy workloads. This lets developers optimize not just for raw performance but also for power efficiency and silicon area, which is crucial for applications ranging from embedded AI to driverless cars. Cervell advances the vectorization story by enabling genuinely heterogeneous compute within a single ISA and toolchain, giving system designers flexibility beyond what GPUs and fixed-function AI devices can provide.

Today vector acceleration is the must-have at the heart of AI processing, and we’ve been getting into position since we formed the company in 2016. The AI era will be written in vectors, and with RISC-V, we’re writing the future on our own terms.