Division of Labor
On one side of the OVI, the Atrevido/Avispado cores implement all the logic to support vector load and vector store operations (and all their numerous variants). All connections to memory, via AXI or CHI, are managed by the Semidynamics core.
On the other side of the OVI, you implement the arithmetic part of the Vector Unit to your exact needs and desired PPA (power, performance, area).
Between the two pieces, OVI defines how data is exchanged between both parties and how instructions are communicated from the core side to your Vector Unit.
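As a rough illustration of this split, the behavioral sketch below models the core side forwarding arithmetic instructions across the interface while keeping memory operations to itself. All class, field, and opcode-routing names here are illustrative assumptions, not the actual OVI signal definitions.

```python
# Hypothetical behavioral model of the OVI division of labor.
# Names and fields are illustrative, not taken from the OVI specification.
from dataclasses import dataclass, field

@dataclass
class VectorInstr:
    opcode: str                 # e.g. "vadd.vv"; arithmetic ops are forwarded, not executed core-side
    vl: int                     # active vector length communicated by the core
    operands: dict = field(default_factory=dict)

class CoreSide:
    """Semidynamics core: owns memory access, forwards arithmetic over OVI."""
    def __init__(self, vector_unit):
        self.vu = vector_unit
        self.mem = {}           # stand-in for the AXI/CHI-attached memory system

    def issue(self, instr: VectorInstr):
        if instr.opcode == "vle32.v":       # loads/stores are handled core-side
            base = instr.operands["base"]
            return [self.mem.get(base + 4 * i, 0) for i in range(instr.vl)]
        return self.vu.execute(instr)       # arithmetic goes to your Vector Unit

class YourVectorUnit:
    """Subordinate side: you implement only the arithmetic datapath."""
    def execute(self, instr: VectorInstr):
        if instr.opcode == "vadd.vv":
            a, b = instr.operands["vs1"], instr.operands["vs2"]
            return [x + y for x, y in zip(a[:instr.vl], b[:instr.vl])]
        raise NotImplementedError(instr.opcode)
```

In this sketch, only `YourVectorUnit.execute` would be custom work; everything reachable from `CoreSide.issue` stands in for logic Semidynamics provides.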
The Semidynamics core implements the following logic for you:
- Vector Loads
- Vector Strided Loads
- Vector Indexed Loads
- Vector Segmented Loads
- Vector Stores
- Vector-to-Scalar Memory Coherency
- Vector Strided Stores
- Vector Indexed Stores
- Vector Segmented Stores
- Scalar-to-Vector Memory Coherency
- Optional PMP (Physical Memory Protection), if required
- Virtual Memory (SV48 or SV39 MMU, your choice)
- Decoding of vector arithmetic instructions sent to your Vector Unit
You implement:
- The subordinate side of the OVI interface.
- The arithmetic units inside the Vector Unit.
- The vector logic for issuing vector instructions to your vector arithmetic units.
The OVI.fifo Protocol
If you would like to leverage the RISC-V vector toolchain and ecosystem while connecting a custom computation engine to a RISC-V Vector Unit, Semidynamics has developed OVI.fifo, a simplification of the OVI interface, illustrated in the figure below.
As can be seen, you now only need to implement the custom extensions of your interest, while Semidynamics provides both the core and a fully compliant RISC-V Vector Unit. A very simple FIFO-style push/pop interface connects the Vector Unit and your custom engine. You will need to implement the “subordinate” side of the OVI.fifo interface and your own custom logic.
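The push/pop style can be sketched as a pair of bounded queues with back-pressure: the Vector Unit pushes elements toward your engine, and your engine pops them, computes, and pushes results back. The class names, FIFO depth, and placeholder computation below are assumptions for illustration only, not the OVI.fifo signal set.

```python
# Minimal model of a FIFO-style push/pop interface between a Vector Unit
# and a custom engine. Names and depths are illustrative assumptions.
from collections import deque

class OviFifo:
    def __init__(self, depth=16):
        self.depth = depth
        self.q = deque()

    def can_push(self):
        # A full FIFO back-pressures the producer side.
        return len(self.q) < self.depth

    def push(self, element):            # producer side (e.g. the Vector Unit)
        if not self.can_push():
            return False                # producer must retry later
        self.q.append(element)
        return True

    def pop(self):                      # consumer side (e.g. your engine)
        return self.q.popleft() if self.q else None

class CustomEngine:
    """Your logic on the subordinate side: pop operands, push results back."""
    def __init__(self, to_engine: OviFifo, to_vu: OviFifo):
        self.inq, self.outq = to_engine, to_vu

    def step(self):
        elem = self.inq.pop()
        if elem is not None:
            # Placeholder for your custom computation.
            self.outq.push([x * 2 for x in elem])
```

A design note on the style this models: because the interface is just push/pop with back-pressure, your engine can run at its own pace and with its own internal microarchitecture, which is exactly what makes OVI.fifo simpler to implement than the full OVI interface.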