Press Releases
Find our latest news, announcements, and updates
Semidynamics Unveils 3nm AI Inference Silicon and Full-Stack Systems
Company announces breakthroughs in memory efficiency and silicon readiness for next-generation AI data centers, including a 3nm chip tape-out
Barcelona, Spain — February 3rd, 2026 — Semidynamics today announced its expansion into full-stack AI infrastructure, unveiling a strategic roadmap to deliver high-performance inference silicon and vertically integrated systems. Building on its proprietary architecture, the company is developing chips, boards, and rack-level systems designed for the most demanding AI workloads in next-generation data centers.
Breaking the Memory Wall and 3nm Milestone
Semidynamics’ entry into the silicon market is built on years of architectural innovation. The company’s silicon architecture features a new memory subsystem designed to overcome bandwidth bottlenecks and mitigate supply constraints tied to high-end memory. By optimizing data flow and memory access, Semidynamics enables large-scale AI inference models to run more efficiently—supporting high-concurrency applications while reducing total cost of ownership.
In December 2025, Semidynamics achieved silicon readiness at the 3nm node—one of the most advanced process technologies in the world. This ‘tape-out’ milestone with TSMC marks a major step toward delivering production-grade chips and full-stack systems.
Semidynamics will offer a vertically integrated stack—chips, boards, and racks—targeting leading-edge, multi-accelerator AI platforms.
“Semidynamics has spent years mastering high-performance architecture at the fundamental level,” said Roger Espasa, CEO of Semidynamics. “We are now leveraging that expertise to solve the AI industry’s most critical challenges in memory and efficiency. Our successful 3nm tape-out is a vital technical validation as we execute a rigorous multi-stage roadmap toward delivering product-ready silicon and rack-scale systems. We are building a world-class, European-designed AI inference platform for the long term.”
Partner and Ecosystem Statements
Semidynamics’ approach is supported by partners across the AI and HPC ecosystems:
EuroHPC Joint Undertaking (EuroHPC JU) — Anders Jensen, Executive Director of the EuroHPC Joint Undertaking, commented: “I very much welcome the addition of a European-grown provider of advanced GPU chips to our AI ecosystem. This is a strategic milestone in strengthening Europe’s digital sovereignty and technological autonomy. Semidynamics has already been a key partner of the EuroHPC JU through the European Processor Initiative. We look forward to continuing our collaboration with innovative European suppliers like Semidynamics to power our supercomputers and AI Factories across Europe.”
The EuroHPC JU is the legal and funding entity that brings together the European Union and participating countries to coordinate efforts and pool resources to position Europe as a world leader in supercomputing. To this end, the EuroHPC JU has already procured 12 supercomputers across Europe including JUPITER and Alice Recoque, Europe’s first exascale systems. In parallel, the EuroHPC JU is overseeing the deployment of 19 AI factories across Europe, supported by 13 AI Factory Antennas.
HPE — Robert Wisniewski, Fellow, Chief Architect and VP of HPC and AI Solutions, HPE, said: “We are looking forward to working with Semidynamics on their compelling future technology.”
Bull, Atos Group — Bruno Lecointe, VP, global head of HPC, HPC-AI and Quantum Computing at Bull, Atos Group, commented: "Silicon technologies like those showcased by Semidynamics are critical for Europe, enabling AI factories, data centers, and high-performance computing with sovereignty at heart. Bull, as a leading AI and HPC system designer and manufacturer, delivering European sovereign IP and end-to-end solutions, is enthusiastic about such developments advancing Europe’s technological ecosystem, next-generation systems, and strategic autonomy."
Telefónica — Lorena Senador-Gómez, Global Partnerships and Devices Director at Telefónica, one of Europe’s largest telecom groups ($42.1B revenue in 2024, with an approximately $22B market capitalization as of mid-January 2026), said: "By advancing AI capabilities developed in Europe, Telefónica strengthens choice and resilience for our customers, complementing and integrating the best from our partners worldwide.”
Multiverse Computing — Enrique Lizaso, Co-founder and CEO of Multiverse Computing, said: “We are pleased to collaborate with Semidynamics to co-optimize Multiverse Computing's AI models for Semidynamics’ full-stack solution. By combining Semidynamics’ silicon roadmap with our compressed AI models and AI solutions we aim to help deliver faster, more cost- and energy-efficient deployments, enabling larger working sets and high-concurrency AI services in data center environments.”
E4 Computer Engineering — E4, a designer, builder, and maintainer of next-generation digital infrastructure for demanding HPC and AI applications, has been working closely with Semidynamics on a strategic and technical cooperation targeting the development of advanced AI and HPC racks using Semidynamics technology. In full alignment with Semidynamics, E4 looks forward to bringing this inference solution to market and to its customers. "We believe Semidynamics positions E4 for significant growth in the HPC and AI markets, serving the ever-growing requirements of its customers,” said Cosimo Damiano Gianfreda, CEO of E4 Computer Engineering. “Together, we’ll create a co-branded campaign to drive engagement and attract new customers for both sides.”
Megware — Dr. Axel Auweter, Managing Director of MEGWARE, said: “We’re incredibly excited to collaborate with the Semidynamics team and look forward to supporting the deployment of their technologies in both existing data centers as well as future AI gigafactories. As a European HPC and AI systems specialist, we develop sustainable and sovereign AI platforms in Germany, delivered as fully integrated, production-ready systems. We will dedicate all of our system engineering and deployment experience to this collaboration in order to jointly translate Semidynamics’ silicon into boards, servers, and rack-scale infrastructures.”
Jon Peddie Research — “Most AI silicon today focuses primarily on raw compute, but performance gains are increasingly gated by memory architecture,” said Dr. Jon Peddie, president of Jon Peddie Research (JPR). “Semidynamics stands out by rethinking the memory subsystem from first principles and emphasizing system-level efficiency. That is an important differentiator. There is an increasing demand for inference solutions that can scale efficiently across real-world workloads and deployment environments.”
About Semidynamics
Founded in 2016 and headquartered in Barcelona, Semidynamics is an advanced computing company with a team of over 150 employees. The company is dedicated to the productization of high-performance silicon, system engineering, and software enablement to meet the global demand for scalable AI infrastructure. Semidynamics serves a global customer base in compliance with all applicable export controls and trade regulations.
Semidynamics Welcomes Iakovos Stamoulis as Chief Technology Officer
Barcelona, Spain – 2nd December 2025. Semidynamics today announced the appointment of Iakovos Stamoulis as Chief Technology Officer (CTO).
Iakovos brings more than 25 years of experience in computer graphics and semiconductors, with a strong record of converting advanced concepts into commercial products. His deep expertise in graphics processing units (GPUs)—the foundational architecture of modern AI acceleration—makes him uniquely qualified to lead Semidynamics' technical roadmap.
He began his career at Advanced Rendering Technology in the UK and USA, where he co‑engineered the first Ray Tracing Graphics Engine chip, demonstrating an early ability to push the boundaries of parallel processing. He later co-founded Think Silicon and served as its CTO until the company was acquired by Applied Materials. There, he led multidisciplinary teams in the co-design of hardware and software, delivering ultra-low-power graphics processors and AI solutions deployed in hundreds of millions of devices worldwide. This experience in optimizing performance-per-watt is directly aligned with Semidynamics’ mission to build efficient, high-bandwidth AI cores.
Iakovos has also played a key role in the broader technology community. He served as Chair of the Graphics SIG for RISC-V International, contributing to the advancement of open specifications for processors. He holds a D.Phil. from the Centre for VLSI and Computer Graphics at the University of Sussex, UK.
“We are very pleased to welcome Iakovos to Semidynamics,” said Roger Espasa, CEO of Semidynamics. “The path from graphics to AI is well-trodden for a reason: the parallel processing principles are very similar. His expertise, creativity, and leadership will be invaluable as we continue to design European processor technology for the AI age. More than anything, we are proud that he is joining our team, and I am confident that with Iakovos as CTO, Semidynamics will enter a new era of innovation and growth.”
Iakovos will lead Semidynamics’ technology strategy as the company continues to develop processor IP for next-generation AI and compute systems.
Semidynamics Inferencing Tools Accelerate AI App Deployment on Cervell NPU
From trained model to running product—faster, with ONNX Runtime integration and production-grade samples
Santa Clara, USA — October 22nd, 2025 — Semidynamics today announced the Semidynamics Inferencing Tools, a new software suite that lets developers deploy AI applications on the Cervell RISC-V NPU in a fraction of the time.
Sitting above the Aliado SDK and leveraging Semidynamics’ ONNX Runtime Execution Provider, the suite streamlines everything from session setup to tensor management, so teams can move from a trained model to a running product quickly and confidently.
“Developers want results,” said Pedro Almada, lead software developer, Semidynamics. “With the Inferencing Tools, you point to an ONNX model, choose your configuration, and you’re running on Cervell—prototype in hours, then harden for production.”
Organizations building AI features—assistants, agents, vision pipelines—can now target Cervell with less integration overhead, shorter development cycles, and a clearer path from prototype to production. The Inferencing Tools focus teams on application logic and user value while maintaining a clean, maintainable codebase.
Highlights
- Faster time-to-production: High-level library on top of Semidynamics’ ONNX Runtime Execution Provider for Cervell—no model conversion required.
- Built-in guidance: Clean APIs handle session setup, tensor management, and inference orchestration, reducing repetitive code and integration risk.
- Production-grade examples: Ready-to-adapt samples for LLM chatbots (e.g., Llama, Qwen), object detection (YOLO family), and image classification (ResNet, MobileNet, AlexNet).
- Validated at scale: Tested by Semidynamics across a wide range of ONNX models to help ensure predictable performance and robust deployment.
- One ecosystem, two lanes:
- Aliado SDK for kernel-level control and peak performance.
- Inferencing Tools for rapid iteration, cleaner app code, and faster shipping.
The Semidynamics Inferencing Tools are available today in the Software Centre to Semidynamics customers and partners.
About Semidynamics
Semidynamics is the only company offering fully customizable RISC-V processor IP. With expertise in high-bandwidth architectures, vector/tensor extensions, and groundbreaking memory systems, Semidynamics enables customers to design exactly the core they need for AI, HPC, and other performance-critical workloads. Based in Barcelona, Semidynamics is redefining what’s possible with RISC-V.
Semidynamics Announces Cervell™ All-in-One RISC-V NPU Delivering Scalable AI Compute for Edge and Datacenter Applications
New fully programmable Neural Processing Unit (NPU) combines CPU, Vector, and Tensor processing to deliver up to 256 TOPS for LLMs, Deep Learning, and Recommendation Systems.
Barcelona, Spain – 6 May 2025 – Semidynamics, the only provider of fully customizable RISC-V processor IP, announces Cervell™, a scalable and fully programmable Neural Processing Unit (NPU) built on RISC-V. Cervell combines CPU, vector, and tensor capabilities in a single, unified all-in-one architecture, unlocking zero-latency AI compute across applications from edge AI to datacenter-scale LLMs.
Delivering up to 256 TOPS (Tera Operations Per Second) at 2GHz, Cervell scales from C8 to C64 configurations, allowing designers to tune performance to application needs — from 8 TOPS INT8 at 1GHz in compact edge deployments to 256 TOPS INT4 at 2GHz in high-end AI inference.
Says Roger Espasa, CEO of Semidynamics:
“Cervell is designed for a new era of AI compute — where off-the-shelf solutions aren’t enough. As an NPU, it delivers the scalable performance needed for everything from edge inference to large language models. But what really sets it apart is how it’s built: fully programmable, with no lock-in thanks to the open RISC-V ISA, and deeply customizable down to the instruction level. Combined with our Gazillion Misses™ memory subsystem, Cervell removes traditional data bottlenecks and gives chip designers a powerful foundation to build differentiated, high-performance AI solutions.”
Why NPUs Matter
AI is rapidly becoming a core differentiator across industries — but traditional compute architectures weren’t built for its demands. NPUs are purpose-designed to accelerate the types of operations AI relies on most, enabling faster insights, lower latency, and greater energy efficiency. For companies deploying large models or scaling edge intelligence, NPUs are the key to unlocking performance without compromise.
Cervell NPUs are purpose-built to accelerate matrix-heavy operations, enabling higher throughput, lower power consumption, and real-time response. By integrating NPU capabilities with standard CPU and vector processing in a unified architecture, designers can eliminate latency and maximize performance across diverse AI tasks, from recommendation systems to deep learning pipelines.
Unlocking High-Bandwidth AI Performance
Cervell is tightly integrated with Gazillion Misses™, Semidynamics’ breakthrough memory management subsystem. This enables:
- Up to 128 simultaneous memory requests, eliminating latency stalls
- Over 60 bytes/cycle of sustained data streaming
- Massively parallel access to off-chip memory, essential for large model inference and sparse data processing
The result is an NPU architecture that maintains full pipeline saturation, even in bandwidth-heavy applications like recommendation systems and deep learning.
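As a rough illustration of what the quoted streaming figure implies, the sustained bandwidth can be estimated from bytes per cycle and clock rate. This sketch assumes the 2GHz clock cited for Cervell elsewhere in this release; shipping configurations may run at other frequencies.

```python
# Back-of-the-envelope bandwidth implied by the quoted figures.
# Assumption: the 2 GHz clock cited for Cervell; actual clocks may differ.
bytes_per_cycle = 60          # "over 60 bytes/cycle of sustained data streaming"
clock_hz = 2e9                # 2 GHz (from the Cervell announcement)
outstanding_requests = 128    # "up to 128 simultaneous memory requests"

# Sustained bandwidth = bytes/cycle * cycles/second, reported in GB/s.
sustained_gbps = bytes_per_cycle * clock_hz / 1e9
print(f"Sustained streaming: {sustained_gbps:.0f} GB/s "
      f"with up to {outstanding_requests} requests in flight")
```

At 2GHz the quoted 60 bytes/cycle corresponds to roughly 120 GB/s of sustained streaming, which is the kind of headroom needed to keep the pipeline saturated in the workloads described above.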
Built to Customer Specifications
Like all Semidynamics cores, Cervell is fully customizable and customers may:
- Add scalar or vector instructions
- Configure scratchpad memories and custom I/O FIFOs
- Define memory interfaces and synchronization schemes
- Request bespoke features to suit their application
As demand grows for differentiated AI hardware, chip designers are increasingly looking for ways to embed proprietary features directly into their processor cores. While many IP providers offer limited configurability from fixed option sets, Semidynamics takes a different approach — enabling deep customization at the RTL level, including the insertion of customer-defined instructions. This allows companies to integrate their unique “secret sauce” directly into the solution, protecting their ASIC investment from imitation and ensuring the design is fully optimized for power, performance, and area. With a flexible development model that includes early FPGA drops and parallel verification, Semidynamics helps customers accelerate time-to-market while reducing project risk.
This flexibility, combined with RISC-V openness, ensures customers are never locked in — and always in control.
Cervell At-a-Glance
| Configuration | INT8 @ 1GHz | INT4 @ 1GHz | INT8 @ 2GHz | INT4 @ 2GHz |
|---|---|---|---|---|
| C8 | 8 TOPS | 16 TOPS | 16 TOPS | 32 TOPS |
| C16 | 16 TOPS | 32 TOPS | 32 TOPS | 64 TOPS |
| C32 | 32 TOPS | 64 TOPS | 64 TOPS | 128 TOPS |
| C64 | 64 TOPS | 128 TOPS | 128 TOPS | 256 TOPS |
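The table follows a simple scaling rule: throughput grows linearly with configuration size and clock frequency, and doubles when moving from INT8 to INT4. A small sketch can reproduce every cell from the C8 INT8 @ 1GHz anchor of 8 TOPS (the helper name below is ours, for illustration):

```python
# Reproduce the Cervell TOPS table from its scaling rule:
# linear in configuration size (C8..C64) and clock, 2x for INT4 vs INT8.
def cervell_tops(config: int, freq_ghz: float, int4: bool = False) -> int:
    """TOPS for a CN configuration; C8 @ 1GHz INT8 = 8 TOPS anchors the scale."""
    base = config * freq_ghz
    return int(base * (2 if int4 else 1))

for config in (8, 16, 32, 64):
    row = [cervell_tops(config, f, q) for f in (1, 2) for q in (False, True)]
    print(f"C{config}: INT8@1GHz={row[0]}, INT4@1GHz={row[1]}, "
          f"INT8@2GHz={row[2]}, INT4@2GHz={row[3]} TOPS")
```

Running this prints the same values as the table, e.g. C64 at 2GHz reaches 128 TOPS INT8 and 256 TOPS INT4.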
Semidynamics’ Aliado SDK Accelerates AI Development for RISC-V with Seamless ONNX Integration
Barcelona, Spain – March 4, 2025. Semidynamics, the leading IP company for high-performance, AI-enabled RISC-V processors, has announced its support for ONNX Runtime and the availability of its RISC-V Software Development Kit (SDK), Aliado.
Semidynamics has integrated support for its hardware into ONNX Runtime, enabling end-users to seamlessly integrate AI into their applications. The ONNX project, originally co-developed by Facebook and Microsoft, defines a common standard for AI models, and all major AI frameworks support importing or exporting ONNX-format models. Most open-source AI model repositories, such as HuggingFace, already provide models in the ONNX format, enabling ONNX users to import practically any model currently available without any model compilation step.
Building on the ONNX format, ONNX Runtime can take ONNX models and execute them on a variety of hardware. It also provides a suite of tools for model optimization and even quantization. With the newly integrated support for Semidynamics hardware, end-users will be ready to develop their AI applications from day one.
For end-users developing RISC-V applications in general, the Aliado SDK enables quick and seamless development, debugging and fine-tuning of applications for Semidynamics hardware. It provides a complete software development solution including a compilation and debugging toolchain, emulators for functional testing of applications and a highly optimized library of common routines, all integrated into a single development environment.
Users can easily get started with a fully integrated development environment (IDE) that runs on Linux or on Windows via WSL. The IDE is a complete, Eclipse-based solution integrated with all of the Aliado SDK features, enabling users to write C and C++ applications with as little friction as possible. Backing the IDE is a comprehensive toolchain for compiling and debugging code targeting Semidynamics hardware, based on both the GNU Compiler Collection (GCC) and LLVM — two of the most common and familiar environments for users — including code optimization and auto-vectorization. The toolchain is integrated with the IDE but is compatible with any IDE of choice.
Any compiled RISC-V code can then be functionally validated from x86 workstations using the QEMU or Spike emulators, modified to support Semidynamics’ custom instructions. QEMU provides fast functional validation and supports both system- and user-mode emulation, whereas Spike is RISC-V International’s official simulator. The SDK also facilitates bare-metal development for Spike, for end-users targeting embedded systems.
Finally, the SDK includes the Kernel Library, the core of Semidynamics’ AI support and the foundation of its integrated ONNX Runtime support. Created to leverage the performance of Semidynamics RISC-V hardware as efficiently as possible, it is a collection of functions operating on multi-dimensional data with a particular focus on AI. A large number of crucial operations, such as matrix multiplications, transpositions, and activation functions, have been optimized for Semidynamics hardware, enabling quick development of efficient AI applications.
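To make the operation types concrete, here is a minimal NumPy sketch of the reference semantics for the kinds of kernels described — a matrix multiply with bias and an activation function, the core pattern of AI inference. This is illustrative only; it is not the Kernel Library’s API, and the function names are ours.

```python
import numpy as np

# Reference (NumPy) semantics for the operation types the Kernel Library
# accelerates on Semidynamics hardware. Illustrative only -- not the library's API.
def relu(x: np.ndarray) -> np.ndarray:
    """A common activation function: max(0, x) elementwise."""
    return np.maximum(x, 0.0)

def dense_layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Matrix multiply + bias + activation: the core pattern of AI inference."""
    return relu(x @ w + b)

x = np.array([[1.0, 2.0]])                # a batch of 1 with 2 features
w = np.array([[0.5, -1.0], [1.0, 0.25]])  # layer weights
b = np.array([0.1, 0.0])                  # layer bias
y = dense_layer(x, w, b)
print(y)  # optimized kernels compute the same result, just faster
```

An optimized kernel library produces the same numerical results as this reference code; the value it adds is in exploiting the hardware’s vector units and memory bandwidth to do so efficiently.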
Everything will be available for download at semidynamics.com/software.
Roger Espasa, CEO of Semidynamics, added, “Our philosophy is to always make it very easy for end-users to use our products. Our support for ONNX-RT and Aliado SDK will enable them to rapidly develop and test their software on their PCs to see how well it will perform on our hardware. This really speeds up time to market as it can all be perfected in simulation before any hardware is actually created.”