The Avispado core, with its small area and power footprint, is ideal for energy-conscious SoCs targeting Machine Learning.
If your next SoC targets the Machine Learning market, Avispado’s small footprint, combined with its ability to talk to a RISC-V Vector Unit (specification 1.0), is a perfect fit.
Combined with our Gazzillion technology, Avispado can deal with very high sparsity in tensor weights, resulting in excellent energy per operation.
The Gazzillion technology is specifically designed for Recommendation Systems, a key part of data-center Machine Learning.
By supporting hundreds of outstanding misses per core, Avispado lets you build an SoC that smoothly delivers highly sparse data to the compute engines without a large silicon investment.
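To see why so many in-flight misses matter, consider the access pattern of recommendation-system inference. The sketch below is illustrative Python, not Semidynamics code: each sparse feature indexes a large embedding table at an essentially random row, so lookups rarely hit in cache, and a core that keeps hundreds of misses in flight can overlap their latency instead of stalling on each one.

```python
def embedding_pool(table, feature_ids):
    """Gather one embedding row per feature id, then sum-pool the rows."""
    # Every table[fid] read is independent of the others, so a
    # non-blocking memory pipeline can issue all of these loads
    # concurrently -- this is the pattern Gazzillion targets.
    rows = [table[fid] for fid in feature_ids]
    return [sum(col) for col in zip(*rows)]

# Toy table: 2-wide embedding vectors keyed by feature id.
table = {3: [1.0, 0.5], 17: [0.25, 2.0]}
print(embedding_pool(table, [3, 17]))  # [1.25, 2.5]
```

In a real deployment the table holds millions of rows and spans gigabytes, which is why the lookups are effectively random DRAM accesses rather than cache hits.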
Ready for the most demanding workloads, Avispado supports large memory capacities with its native 64-bit data path.
With complete MMU support, Avispado is also Linux-ready, including multiprocessing configurations.
Avispado supports the upcoming RISC-V Vector Specification 1.0 as well as Semidynamics’ Open Vector Interface, giving you freedom of choice between integrating your own custom vector unit and using Semidynamics’ offerings.
Vector instructions densely encode many computations, thereby reducing energy per operation.
Vector gather instructions handle sparse tensor weights efficiently, benefiting machine learning workloads.
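The gather idea can be sketched in a few lines. This is a minimal illustration in plain Python, not Semidynamics code: the non-zero weights are stored as (index, value) pairs, a gather step fetches the matching activations via indexed loads (what RVV 1.0 expresses with indexed-load instructions such as vluxei), and the multiply-accumulate then runs on dense data.

```python
def sparse_dot(activations, weight_indices, weight_values):
    """Dot product of a dense activation vector with a sparse weight vector.

    weight_indices / weight_values hold only the non-zero weights.
    A vector unit performs the indexed loads below with a single
    gather instruction instead of one scalar load per element.
    """
    # Gather: indexed loads from the dense activation vector.
    gathered = [activations[i] for i in weight_indices]
    # Dense multiply-accumulate over the gathered elements.
    return sum(a * w for a, w in zip(gathered, weight_values))

acts = [0.5, 1.0, 2.0, 4.0, 8.0]
idx = [0, 2, 4]          # positions of the non-zero weights
vals = [2.0, 3.0, 0.5]   # the non-zero weights themselves
print(sparse_dot(acts, idx, vals))  # 0.5*2.0 + 2.0*3.0 + 8.0*0.5 = 11.0
```

Because only the non-zero weights are stored and computed on, both memory traffic and arithmetic scale with the number of non-zeros rather than the full tensor size.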
Avispado supports cache-coherent Multiprocessing environments. Its native CHI interface can be tailored down to ACE or AXI, depending on your needs.
Be it 2, 4, or hundreds of cores, Avispado is ready for your next SoC.