Quick Witted

Target Markets

Machine Learning

The Avispado core, with its small area and low power consumption, is ideal for energy-conscious SoCs targeting Machine Learning.

If your next SoC targets the Machine Learning market, Avispado's small footprint, combined with its ability to talk to a RISC-V Vector Unit (1.0), makes it the perfect fit.

Combined with our Gazzillion technology, Avispado can handle very high sparsity in tensor weights, resulting in excellent energy per operation.

Recommendation Systems

Gazzillion technology is specifically designed for Recommendation Systems, a key part of data-center Machine Learning.

By supporting hundreds of outstanding misses per Avispado core, you can build an SoC that smoothly delivers highly sparse data to the compute engines without a large silicon investment.
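Why many in-flight misses matter can be sketched with Little's law: sustained memory bandwidth is the number of outstanding misses times the cache-line size, divided by memory latency. The numbers below (64-byte lines, 100 ns latency) are illustrative assumptions, not Semidynamics figures:

```python
# Back-of-the-envelope model (illustrative numbers, not vendor data):
# by Little's law, sustained bandwidth = in-flight misses x line size / latency.
def sustained_bandwidth_gb_s(outstanding_misses, line_bytes=64, latency_ns=100):
    """Bandwidth a core can sustain with this many cache misses in flight."""
    return outstanding_misses * line_bytes / latency_ns  # bytes/ns == GB/s

# A core that blocks on a single miss at a time:
print(sustained_bandwidth_gb_s(1))    # 0.64 GB/s
# A core keeping hundreds of misses in flight:
print(sustained_bandwidth_gb_s(128))  # 81.92 GB/s
```

The point of the model: with sparse, cache-unfriendly access patterns, throughput scales with how many misses the core can overlap, not with its compute width.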

64-bit Core

Ready for the most demanding workloads, Avispado supports large memory capacities with its 64-bit native data path.

With complete MMU support, Avispado is also Linux-ready, including multiprocessing.

Vector Ready

Avispado supports the upcoming RISC-V Vector Specification 1.0 as well as Semidynamics' Open Vector Interface, giving you freedom of choice between your own custom vector unit and Semidynamics' offerings.

Vector instructions densely encode many computations per instruction, reducing the energy spent per operation.

Vector gather instructions handle sparse tensor weights efficiently, accelerating machine learning workloads.
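A gather operation lets a sparse layer store only its nonzero weights plus their indices, and fetch the matching activations in one vector operation. The NumPy sketch below is a hypothetical illustration of the idea (the function name and data are invented, not Semidynamics code); on hardware, the indexed load would be a single vector-gather instruction:

```python
import numpy as np

def sparse_dot(weights_nz, indices, activations):
    """Dot product of one sparse weight row with a dense activation vector."""
    gathered = activations[indices]      # the "gather": fetch only the needed activations
    return np.dot(weights_nz, gathered)  # multiply-accumulate over nonzeros only

activations = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
weights_nz  = np.array([0.5, 0.25])   # the two nonzero weights of a mostly-zero row
indices     = np.array([1, 4])        # their positions in the dense row
print(sparse_dot(weights_nz, indices, activations))  # 0.5*2.0 + 0.25*5.0 = 2.25
```

No cycles or energy are spent multiplying by zeros, which is where the efficiency gain on highly sparse tensors comes from.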

Multiprocessor Ready

Avispado supports cache-coherent multiprocessing environments. Its native CHI interface can be tailored down to ACE or AXI, depending on your needs.

Be it 2, 4, or hundreds of cores, Avispado is ready for your next SoC.

Customizable 2-wide in-order pipeline

- Decodes 2 instructions/cycle
- In-order issue
- Gazzillion Misses™
- SV48
- Direct hardware support for unaligned accesses
- Customizable I$ from 8KB to 32KB
- Customizable D$ from 8KB to 32KB
- Branch predictor