AI Software
Develop AI models using the Aliado RISC-V SDK and ONNX Runtime
Aliado RISC-V SDK
All-in-one toolkit with a full compile/debug toolchain, high-speed emulators for functional testing, and optimized libraries—unified in an Eclipse-based IDE
Aliado IDE
The Aliado RISC-V SDK is an all-in-one development environment so you can go from fresh install to running code quickly on Semidynamics hardware
High-velocity workflow
Compile, run, and debug C/C++ with an integrated cross-compile toolchain and Aliado emulators
Deep visibility
Debugger, memory and register viewers, disassembler/assembly views, serial terminal, and emulation
Customizable
Extend with the full ecosystem of Eclipse plugins to match your workflow
Ready where you are
Works on Linux and Windows Subsystem for Linux (WSL)
Aliado Toolchain
Compile smarter. Debug easier
Dual compilers
GCC and LLVM/Clang with a full suite (compiler, linker, assembler, debugger)
Optimization built in
Modern compiler with auto-vectorization for high-performance C/C++
Use any IDE
Or go seamless inside the Aliado IDE
Runs where you do
Linux and Windows Subsystem for Linux (WSL)
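As a concrete sketch of what the auto-vectorizer targets, here is a SAXPY-style loop with no cross-iteration dependencies, the pattern such compilers handle best. The cross-compiler name and `-march=rv64gcv` flag in the comment are assumptions for a vector-capable RISC-V target, not taken from the Aliado documentation.

```cpp
#include <cstddef>
#include <vector>

// A SAXPY-style loop: y[i] = a * x[i] + y[i]. There are no
// cross-iteration dependencies, so at -O3 the compiler's
// auto-vectorizer can map the loop onto RISC-V vector instructions.
// A typical (assumed) cross-compile invocation might look like:
//   riscv64-unknown-linux-gnu-g++ -O3 -march=rv64gcv saxpy.cpp
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}
```

The same source builds unchanged for a scalar target; only the flags decide whether the loop is vectorized.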
Aliado Kernel Library
Power your AI on RISC-V. Accelerate AI math on Semidynamics RISC-V
Hardware-tuned
Kernels optimized for our Vector and Tensor Units
Fast primitives
Matrix multiplication, transposition, activations, and data transformations for multi-dimensional tensors
AI-first
Building blocks for neural networks and high-performance compute, ready to drop into your C/C++ pipelines
Runs where you do
Linux and Windows Subsystem for Linux (WSL)
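To make the primitive list concrete, here are scalar reference versions of the kinds of operations the library accelerates on the Vector and Tensor Units. The function names and signatures are illustrative only; they are not the Aliado Kernel Library's actual API.

```cpp
#include <cstddef>

// Scalar reference implementations of typical AI primitives.
// Hypothetical names -- NOT the Aliado Kernel Library API.

// C[M x N] = A[M x K] * B[K x N], row-major.
void matmul(const float* A, const float* B, float* C,
            std::size_t M, std::size_t K, std::size_t N) {
    for (std::size_t m = 0; m < M; ++m)
        for (std::size_t n = 0; n < N; ++n) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < K; ++k)
                acc += A[m * K + k] * B[k * N + n];
            C[m * N + n] = acc;
        }
}

// In-place ReLU activation over a flat tensor.
void relu(float* t, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (t[i] < 0.0f) t[i] = 0.0f;
}

// Out-of-place 2-D transpose: dst[j][i] = src[i][j].
void transpose(const float* src, float* dst,
               std::size_t rows, std::size_t cols) {
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            dst[j * rows + i] = src[i * cols + j];
}
```

The hardware-tuned versions replace these loops with Vector/Tensor Unit instructions while keeping the same mathematical contract.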
Aliado Emulator
Run and validate without silicon, fast.
Integrated QEMU and Spike let you execute Semidynamics RISC-V code on your Linux workstation, from user-space apps to true bare metal, so you can iterate quickly, catch issues early, and de-risk bring-up
Fast iteration
Up to 1B instructions/sec (QEMU) and 100M instructions/sec (Spike)
Flexible targets
Linux user mode, bare-metal, and semi-hosting for lightweight I/O without a full OS
Accurate coverage
Full RISC-V extensions plus Semidynamics instructions
Seamless workflow
Integrated with the Aliado IDE and toolchain for compile-run-debug loops
Semidynamics ONNX-Runtime
By leveraging ONNX, Semidynamics hardware can run thousands of models available in common repositories such as Hugging Face and the ONNX Model Zoo. With the Semidynamics ONNX-Runtime, these models integrate out of the box on Semidynamics hardware
Run ONNX models on Semidynamics—out of the box
Plug-and-play
Execute thousands of ONNX models (e.g., from Hugging Face and ONNX Model Zoo) with no conversion
Optimized path
Uses Semidynamics Vector and Tensor Units via the Aliado Kernel Library for efficient inference
Straightforward integration
Drop into your ONNX Runtime workflow and target Semidynamics hardware directly
Semidynamics Inferencing Tools
Semidynamics Inferencing Tools add an application layer on top of our ONNX Runtime Execution Provider for Cervell™, so you can go from trained model to running product quickly
Ship AI apps faster on Cervell™
Validated by Semidynamics across a wide range of ONNX models, the tools help you prototype in hours and harden for production with clean, maintainable code
Built for developers. Optimized for Cervell™. Ready for real workloads
High-level library
One API to set up sessions, manage tensors, and run inference
Working examples
Production-grade samples for chatbots (Llama, Qwen), object detection (YOLO family), and image classification (ResNet, MobileNet, AlexNet)
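The "one API" flow above (set up a session, manage tensors, run inference) can be sketched as below. Every name here is hypothetical, a stand-in shape rather than the Semidynamics Inferencing Tools API; the "model" is a placeholder callable instead of a loaded ONNX graph.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch of a session/tensor/run flow.
// None of these types come from the Semidynamics tools.
struct Tensor {
    std::vector<float> data;  // flat buffer; real tensors carry shape/dtype
};

class Session {
public:
    // A real session would load an ONNX model from disk; here any
    // callable stands in for the compiled graph.
    explicit Session(std::function<Tensor(const Tensor&)> model)
        : model_(std::move(model)) {}

    Tensor run(const Tensor& input) { return model_(input); }

private:
    std::function<Tensor(const Tensor&)> model_;
};
```

The point of such a layer is that application code only touches sessions and tensors, while model loading, memory management, and hardware dispatch stay behind the API.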
Semidynamics Model Zoo
Pick from validated LLMs and vision models used as our tech demos. Drop them into your workflow with the Semidynamics ONNX-Runtime and Inferencing Tools: no conversion, no fuss
Curated ONNX models, ready to run on Semidynamics
AI Research Focus
Continuous innovation driving advanced AI model development at Semidynamics
Seamless Integration
Effortlessly deploy ONNX models from repositories like Hugging Face or the ONNX Model Zoo
Cervell™ Model Zoo
Semidynamics has already ported popular AI models to Cervell™. All models are tested and verified
| Name | Parameters | Category | Data Type |
|---|---|---|---|
| DeepSeek-R1-Distill Llama 3.1 | 8B | LLM | FP16 |
| Llama v2 | 7B | LLM | FP16 |
| Llama v3 | 8B | LLM | FP16 |
| Qwen2 | 1.5B | LLM | FP16 |
| BERT | | LLM | FP16 |
| Phi3 | 1B | LLM | INT4 |
| YOLOv3 | | Object Detection | INT8, FP16 |
| YOLOv10 | 2.3M | Object Detection | FP16 |
| ViT (Vision Transformer) | 86M | Image Classification | FP16 |
| MobileNet | 400K | Image Classification | FP16 |
| ResNet-50 | | CNN | INT8, FP16 |
| AlexNet (*) | 60M | CNN | FP32 |
| Stable Diffusion | | Text-to-image | FP16 |
(*) Cervell™ can run any network with any data type
Aliado Quantization Recommender
Drop in any ONNX model. Get a ranked recommendation for the best bit-level quantization (e.g., INT4) with clear rationale and scores
Takes the guesswork out of quantization selection
The Aliado Quantization Recommender helps you choose the right scheme for your model and target data type
Sensitivity-aware
Respects your model’s tolerances and goals
Calibration-savvy
Uses calibration data (when available) to propose specific techniques
Transparent
Score-based, hierarchical view so you can see why a choice wins
Broad support
Works across model families, including Transformers
Focused
Tells you what to try first so you don’t waste cycles
Built for developers
Integrated with ONNX workflows. Ready for production
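As background for what a quantization recommender scores, here is a sketch of symmetric per-tensor quantization at a chosen bit width, plus the round-trip error that one simple ranking signal could be built on. This is illustrative only, not the Aliado Quantization Recommender's actual algorithm.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Symmetric per-tensor quantization sketch (hypothetical, not the
// Aliado tool's method). Scale is derived from the tensor's max
// magnitude; bits = 8 or 4 gives an INT8 or INT4 grid.
struct QResult {
    std::vector<int8_t> q;  // quantized values
    float scale;            // dequantize with: real = q * scale
};

QResult quantize(const std::vector<float>& w, int bits) {
    const float qmax = static_cast<float>((1 << (bits - 1)) - 1);  // 127 or 7
    float amax = 0.0f;
    for (float v : w) amax = std::max(amax, std::fabs(v));
    const float scale = (amax > 0.0f) ? amax / qmax : 1.0f;
    QResult r{{}, scale};
    for (float v : w) {
        float q = std::round(v / scale);
        r.q.push_back(static_cast<int8_t>(std::clamp(q, -qmax, qmax)));
    }
    return r;
}

// Mean squared error after a quantize/dequantize round trip -- one
// simple signal a recommender can rank candidate schemes by.
float quant_mse(const std::vector<float>& w, int bits) {
    const QResult r = quantize(w, bits);
    float mse = 0.0f;
    for (std::size_t i = 0; i < w.size(); ++i) {
        const float d = w[i] - r.q[i] * r.scale;
        mse += d * d;
    }
    return mse / static_cast<float>(w.size());
}
```

On the same tensor, INT4 generally incurs more round-trip error than INT8; a sensitivity-aware recommender trades that error against memory and throughput per layer.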