Press Release
September 09, 2024
Semidynamics on major recruitment drive for RISC-V software engineers
Also recruiting for engineers to produce test chips

Barcelona, Spain – 9 September 2024. Semidynamics, the European RISC-V custom core AI specialist, is on a major recruitment drive for a wide range of engineers, from junior to senior, at its Barcelona HQ. The company has grown from 40 employees at the beginning of this year to 90 and aims to reach 120 by the beginning of 2025.

 

Roger Espasa, Semidynamics’ CEO, explained, “Our innovative All-In-One AI IP has brought in a large number of enquiries and, because we offer a customisation service, we are bringing on RISC-V software engineers at a rate of three to four per month to ensure this runs smoothly, and we are looking for more. In addition, as an exciting next step in the evolution of the company, we will be making test chips using our All-In-One AI IP, so we are now recruiting a new set of engineers for Verification, Front End, DfT, NoC design, etc.”


Pedro Marcuello, Semidynamics’ IP Director, added, “Full details of all vacancies can be found on our website https://semidynamics.com/en/hiring. In addition, it is a vital part of our company philosophy to support learning, so we help undergraduates gain valuable hands-on experience with us through our Student Internship Program whilst studying for their degree.
We also have a Masters Program that enables graduates to do their Master’s Thesis whilst learning on real projects and being paid at the same time! There are few things more rewarding than sharing knowledge, mentoring people and seeing them grow into first-class engineers.”

June 25, 2024
Semidynamics releases Tensor Unit efficiency data for its new All-In-One AI IP

Barcelona, Spain – 25 June 2024. Semidynamics, the European RISC-V custom core AI specialist, has announced Tensor Unit efficiency data for its ‘All-In-One’ AI IP running a Llama-2 7B-parameter Large Language Model (LLM).

Roger Espasa, Semidynamics’ CEO, explained, “The traditional AI design uses three separate computing elements: a CPU, a GPU (Graphical Processor Unit) and an NPU (Neural Processor Unit) connected through a bus. This traditional architecture requires DMA-intensive programming, which is error-prone, slow and energy-hungry, and poses the challenge of having to integrate three different software stacks and architectures. In addition, NPUs are fixed-function hardware that cannot adapt to future, yet-to-be-invented AI algorithms.

“In contrast, Semidynamics has re-invented AI architecture and integrates the three elements into a single, scalable processing element. We combine a RISC-V core, a Tensor Unit that handles matrix multiplication (playing the role of the NPU) and a Vector Unit that handles activation-like computations (playing the role of the GPU) into a fully integrated, all-in-one compute element, as shown in Figure 1. Our new architecture is DMA-free, uses a single software stack based on ONNX and RISC-V and offers direct, zero-latency connectivity between the three elements. The result is higher performance, lower power, better area and a much easier-to-program environment, lowering overall development costs. In addition, because the Tensor and Vector Units are under the direct control of a flexible CPU, we can deploy any existing or future AI algorithm, providing great protection to our customers’ investments.”

Figure 1 Comparison of traditional AI architecture to Semidynamics’ new All-In-One integrated solution
Figure 2 Attention Layer in LLM

Large Language Models (LLMs) have emerged as a key element of AI applications. LLMs are computationally dominated by self-attention layers, shown in detail in Figure 2. These layers consist of five matrix multiplications (MatMul), a matrix Transpose and a SoftMax activation function. In Semidynamics’ All-In-One solution, the Tensor Unit (TU) takes care of matrix multiplication, whereas the Vector Unit (VU) efficiently handles Transpose and SoftMax. Since the Tensor and Vector Units share the vector registers, expensive memory copies can be largely avoided. Hence, there is zero latency and zero energy spent in transferring data from the MatMul layers to the activation layers and vice versa. To keep the TU and the VU continuously busy, weights and inputs must be efficiently fetched from memory into the vector registers. To this end, Semidynamics’ Gazzillion™ Misses technology provides unprecedented ability to move data. By supporting a large number of in-flight cache misses, data can be fetched ahead of time, yielding high resource utilization. Furthermore, Semidynamics’ custom tensor extension includes new vector instructions optimized for fetching and transposing 2D tiles, greatly improving tensor processing.
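
For readers who want to map the figure onto code, the short NumPy sketch below is a minimal, illustrative single attention head (not Semidynamics code; the shapes are hypothetical): the five matrix multiplications are the work the Tensor Unit takes on, while the transpose and SoftMax are the activation-style work handled by the Vector Unit.

import numpy as np

def softmax(x, axis=-1):
    # SoftMax activation: the part played by the Vector Unit
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(x, Wq, Wk, Wv):
    # MatMuls 1-3: Q, K and V projections (Tensor Unit's role)
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    # Transpose of K (Vector Unit) feeding MatMul 4 (Tensor Unit)
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])
    probs = softmax(scores)            # SoftMax (Vector Unit)
    return probs @ V                   # MatMul 5 (Tensor Unit)

# Hypothetical sizes, for illustration only
seq, d = 16, 64
x = np.random.randn(seq, d).astype(np.float32)
Wq, Wk, Wv = (np.random.randn(d, d).astype(np.float32) for _ in range(3))
print(attention_head(x, Wq, Wk, Wv).shape)   # (16, 64)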

Figure 3 Llama-2 Tensor Unit efficiency organized by Tensor-A shape

Semidynamics has run the full Llama-2 7B-parameter model (BF16 weights) on its All-In-One element, using Semidynamics’ ONNX Runtime Execution Provider, and calculated the utilization of the Tensor Unit for all the MatMul layers in the model. The results are shown in Figure 3, aggregated and organized by the A-tensor shape. There are a total of 6 different shapes in Llama-2, as shown in the x-axis labels in Figure 3. As can be seen, utilization is above 80% for most shapes, in sharp contrast with other architectures. Results are collected in the most challenging conditions, i.e., with a batch of 1 and for the first-token computation. To complement this data, Figure 4 presents the Tensor Unit efficiency for large matrix sizes, to demonstrate the combined efficiency of the Tensor Unit and the Gazzillion™ technology. Figure 4 is annotated with the A+B matrix size. One can see that as the number of elements in the N, M, P dimensions of the matrices increases, the total size in MBs quickly exceeds any possible cache/scratchpad available. The noteworthy aspect of the chart is that the performance is stable slightly above 70%, irrespective of the total size of the matrices. This perhaps surprising result is thanks to the Gazzillion technology being capable of sustaining a high streaming data rate between main memory and the Tensor Unit.

Figure 4 Tensor Unit utilization for 8-bit (left side) and 16-bit matrices (right side) for different matrix sizes
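
As a rough, illustrative calculation (plain Python, with hypothetical N, M, P values rather than the ones plotted in Figure 4), the combined footprint of the A and B matrices of a single BF16 MatMul grows as follows, quickly exceeding any realistic cache or scratchpad:

# A is N x M, B is M x P; BF16 elements are 2 bytes (INT8 would be 1 byte).
def matmul_footprint_mb(N, M, P, bytes_per_elem=2):
    return (N * M + M * P) * bytes_per_elem / 2**20

for N, M, P in [(512, 512, 512), (2048, 2048, 2048), (8192, 8192, 8192)]:
    print(f"{N}x{M} * {M}x{P}: {matmul_footprint_mb(N, M, P):.0f} MB in BF16")
# Prints 1 MB, 16 MB and 256 MB respectively: only the smallest case
# could plausibly stay resident on-chip.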

Espasa concluded, “Our new All-In-One AI IP not only delivers outstanding AI performance but is also much easier to program as there is now just one software stack instead of three. Developers can use the RISC-V stack they already know and they do not have to worry about software-managed local SRAMs or DMAs. Furthermore, Semidynamics provides an ONNX runtime optimized for the All-In-One AI IP, which allows programmers to easily run their ML models. Therefore, our solution represents a big step forward in programmer friendliness and ease of integration into new SoC designs. Our customers using All-In-One will be able to pass on to their customers, developers, and users all these benefits in the form of better and easier-to-program silicon.

“Moreover, our All-In-One design is completely resilient to future changes in AI/ML algorithms and workloads. This is a huge risk protection for customers starting a silicon project that will not hit the market for several years. Knowing that your AI IP will still be relevant when your silicon enters volume production is a unique advantage of our technology.”

April 05, 2024
Semidynamics announces All-In-One AI IP for super powerful, next generation AI chips

Integrated, RISC-V AI IP processing element meets the increasing needs of AI chips

Launch at Embedded World 2024 on booth 5-337

 

Barcelona, Spain – 5 April 2024. Semidynamics, the European RISC-V custom core AI specialist, has announced its ‘All-In-One AI’ IP that is designed for super powerful, next generation AI chips and algorithms such as transformers. Currently, AI chip designers use the approach of integrating separate IP blocks next to the system CPU to handle the ever-increasing demands of AI. Semidynamics has taken a revolutionary approach of a unified solution combining RISC-V, vector, tensor and Gazzillion technology so that AI chips are now easy to program and scale to whatever processing power is required.

The data volume and processing demand of AI is constantly increasing and the current solution is, essentially, to integrate more individual functional blocks. The CPU distributes dedicated partial workloads to gpGPUs (general purpose Graphical Processor Units) and NPUs (Neural Processor Units), and manages the communication between these units. But this has a major issue as moving the data between the blocks creates high latency. It is also hard to program with three different types of IP blocks each with their own instruction set and tool chains. Finally, non-programmable, fixed-function NPU blocks today can become obsolete even before reaching silicon due to the constant introduction of new AI algorithms. An AI chip being designed today could easily be out of date by the time it is silicon in 2027 as software is always evolving faster than hardware.

Roger Espasa, CEO of Semidynamics, said, “The current AI chip configuration is inelegant, with typically three different IP vendors and three tool chains, with poor PPA (Power Performance Area), and it is increasingly hard to adapt to new algorithms. For example, they cannot handle an AI algorithm called a transformer, but our All-In-One AI IP is ideal for this. We have created a completely new approach that is easy to program as there is just the RISC-V instruction set and a single development environment. Integrating the various blocks into one RISC-V AI processing element means that new AI algorithms can easily be deployed without worrying about where to distribute which workload. The data is in the vector registers and can be used by the vector unit or the tensor unit, with each part simply waiting in turn to access the same location as needed. Thus, there is zero communication latency and minimized caches that lead to optimized PPA but, most importantly, it easily scales to meet greater processing and data handling requirements.”

Semidynamics has combined four of its innovative IPs to form one fully integrated solution called the ‘All-In-One AI’ IP processing element. This has a fully customisable RISC-V 64-bit core, a Vector Unit (playing the role of the gpGPU), a Tensor Unit (playing the role of the NPU) and the Gazzillion® Unit to ensure huge amounts of data can be handled from anywhere in memory without suffering from cache misses. As a result, there is just one IP supplier, one RISC-V instruction set and one tool chain, making implementation significantly easier and faster with reduced risk. As many of these new processing elements as required to meet the application’s needs can be put together on a single chip to create a next-generation, ultra-powerful AI chip.

Semidynamics’ new All-In-One AI IP processing element

Roger Espasa concluded, “We have established a completely new way to architect ever more powerful chips that we believe will enable AI to overcome the shortcomings of the current state-of-the-art designs. Our revolutionary, integrated, All-In-One AI processing elements create a scalable solution that will be at the heart of a whole new generation of ultra-powerful AI chips which will be accessible to everyone. By using our new Configurator tool, customers can create the appropriate balance of Tensor and Vector units with RISC-V control capabilities in the processing element.

“The RISC-V core inside our All-In-One AI IP provides the ‘intelligence’ to adapt to today’s most complex AI algorithms and even to algorithms that have not been invented yet. The Tensor Unit provides the sheer matrix-multiply capability for convolutions, while the Vector Unit, with its fully general programmability, can tackle any of today’s activation layers as well as anything the AI software community can dream of in the future. Having an All-In-One processing element that is simple and yet repeatable solves the scalability problem, so our customers can scale from a quarter of a TOPS to hundreds of TOPS by using as many processing elements as needed on the chip. In addition, our IP remains fully customisable to enable companies to create unique solutions rather than using standard off-the-shelf chips.”

March 25, 2024
Industry veteran Volker Politz joins Semidynamics as Chief Sales Officer

Barcelona, Spain – 25 March 2024. Semidynamics, the European RISC-V custom core AI specialist, has appointed industry veteran Volker Politz as its Chief Sales Officer. He has worked in the semiconductor industry for companies such as Imagination Technologies, MIPS, Verisilicon, Synopsys and Renesas.

Roger Espasa, CEO of Semidynamics, said, “We are delighted that Volker has joined us as his wealth of over 30 years of experience in IP sales for major companies will be invaluable.”

Volker Politz added, “At every IP company that I have worked for, customers have always wanted to modify the IP to precisely meet their application and were disappointed that only minor adjustments were possible at best. Semidynamics’ strategy of having fully customisable RISC-V processor cores is a breakthrough and just what customers want. Semidynamics’ new Configurator tool enables them to create a custom core configuration in a couple of hours, plus Semidynamics can open up the core and insert custom instructions. Semidynamics’ vision of combining RISC-V, vector units and tensor units with customization is truly unique in the industry. Together, this is a world-beating combination and is why I joined Semidynamics. Plus, Roger is a world-class expert in RISC-V with a speciality in vector units, which are key to enabling the next generation of AI chip solutions to be created. He is Mr Vector!”

March 05, 2024
Semidynamics puts the power of full core customisation into hands of customers

Free Configurator tool enables customers to specify exactly what they want their new RISC-V core to be

Barcelona, Spain – 5 March 2024. Semidynamics, the European RISC-V custom core specialist, has released its new tool called ‘Configurator’ that puts the power of Semidynamics’ full customisation of a RISC-V processor core in the hands of the customer. It uses dozens of blocks that have already been verified by Semidynamics, so the final core is also verified. This gives customers an incredibly fast time to a workable core design, in a matter of a few hours, from the thousands of possible variants.

Semidynamics’ unique ‘Open Core Surgery’ allows the customer to tailor the Semidynamics IP to their needs, including adding new instructions, new interfaces and the customer’s ‘secret sauce’ deep inside. This process entails two steps. The first step is to configure the base parameters of the architecture, which is what Configurator enables. The second step is to describe the special features required by the customer; the Semidynamics engineering team will take those requests and implement them according to the customer’s requirements. The newly released Configurator helps in both steps. First, it provides an easy way for the customer to specify the configuration parameters for the IP. Second, it allows the customer to describe the additional changes required beyond configuration.

Roger Espasa, Semidynamics’ CEO, explained, “We are the only company offering fully customisable RISC-V IP cores, so there are many choices to be made by customers when specifying their requirements. Our new Configurator tool makes this process extremely easy for the customer to do on their own computer screen. The tool has a sequential set of options that logically works through the thousands of possible variants. As each option is selected, the resulting core configuration is immediately displayed on the screen so that the customer can see how the layout of blocks builds up. The customer can go back and change any of the options to see the effect on the core layout. Naturally, we are available to help and advise customers in their choices to ensure the best possible core design for their application.

“When the customer has finalised the design, it is sent to us for a PPA and licensing quote. Once this is agreed and the contract signed, the RTL is sent to them immediately and, as it has already been verified by us, they do not have that time-consuming stage to do. The whole aim of Configurator is to empower the customer with an easy-to-use way to have the exact custom core that they need in the shortest possible time so that they are fastest to market with their innovative product.”

Some of the choices offered by the Configurator tool include instruction and data cache sizes, main memory bus size and type, and eight optional extensions. Additional options include Semidynamics’ state-of-the-art Tensor Unit and RISC-V Vector 1.0-compliant Vector Unit, with a choice of the number of vector cores and the data configuration. On top of these, Gazzillion® technology ensures constant data flow from memory.

The Semidynamics Configurator tool is web-based so it can be run on the customer’s own computer system once they have registered, which can be done at www.semidynamics.com/configurator.

January 17, 2024
YorChip, Inc. announces its first Chiplet for Edge AI applications with IP licensed from Semidynamics, the leader in RISC-V IP based in Barcelona

YorChip Edge AI Compute Chiplet with support for UCIe and 10-100+ Int8-TOPS

SAN RAMON, CA, 94582 -- January 17, 2024 -- YorChip, Inc. announces its first Chiplet for Edge AI applications with IP licensed from Semidynamics, the leader in RISC-V IP based in Barcelona. The Semidynamics high-performance, high-bandwidth core IP, with four Atrevido 423 cores, V8 SMD VPUs and T16 SMD Tensor Units, can provide 10 Int8-TOPS per chiplet. Edge AI requires high performance, high bandwidth and low cost; the target technology is 12nm with a target die size under 25mm², delivering scalable performance at low cost.

Chiplets represent multi-billion-dollar market potential – according to Transparency Market Research, the Chiplet market is expected to reach more than US$47 Billion by 2031, representing one of the fastest growing segments of the semiconductor industry at more than 40% CAGR from 2021 to 2031. This growth is expected to be enabled by the considerable cost reduction and improved yields Chiplets will enable as compared to traditional system-on-chip (SoC) designs.

YorChip’s CEO and founder, Kash Johal, said, “We chose to work with the Semidynamics team due to their long-term focus on fully customizable 64-bit RISC-V processor IP and expertise to rapidly support AI/ML workloads. Coupled with our UCIe PHY and low-latency switching fabric, customers can cluster up to 16 Compute Chiplets to support 100 Int8-TOPS.”

Semidynamics’ CEO and founder, Roger Espasa, said, “Targeted at Automotive, Robotic, Drone and other high-performance edge AI markets, our RISC-V Core IP paired with our Vector Processor Unit and Tensor Processor Unit, ensures greater efficiency in big data applications and high-bandwidth memory systems. We are excited to be partnering with YorChip as it will give us the opportunity to showcase our latest IP in a Chiplet application allowing customers to try and discover our technology.”

QuickLogic CEO, Brian Faith, said, “There is an insatiable demand for design flexibility, higher performance and bandwidth and this Chiplet will be of great interest to our FPGA Chiplet customers as they both feature built-in UCIe interfaces.”

Visit Chiplet Summit Feb 6-8 to learn about YorChip’s building block Chiplets.
@ QuickLogic and YorChip Booth, Santa Clara Convention Center

Availability

Chiplets will sample in Q2 2025 with volume production in early 2026.

About YorChip

YorChip is a Silicon Valley start-up focused on Chiplets for mass markets. We are leveraging proven partner IP and our novel die-to-die technology to deliver off-the-shelf, low-cost, secure chiplets at scale. We are developing a complete ecosystem of off-the-shelf Chiplets. www.yorchip.com

About Semidynamics

Founded in 2016 and based in Barcelona, Spain, Semidynamics™ is the only provider of fully customizable RISC-V processor IP and specializes in high-bandwidth, high-performance cores with Vector Units, Tensor Units and Gazzillion technology, targeted at machine learning and AI applications. The company is privately owned and is a strategic member of RISC-V International.

November 02, 2023
Semidynamics and Arteris Partner To Accelerate AI RISC-V System-on-Chip Development

Highlights:

  • Arteris and Semidynamics partnership enhances the flexibility and highly configurable interoperability of RISC-V processor IP with system IP.
  • Integrated and optimized solutions will focus on accelerating artificial intelligence, machine learning and high-performance computing applications.
  • The partnership will result in a demonstrator platform in 2024.

CAMPBELL, Calif. – November 2, 2023 – Arteris, Inc. (Nasdaq: AIP), a leading provider of system IP which accelerates system-on-chip (SoC) creation and Semidynamics, a provider of fully customizable high bandwidth and high-performance RISC-V processor IP, today announced a partnership to accelerate electronic product innovation for artificial intelligence (AI), machine learning (ML) and high-performance computing (HPC) applications.

The partnership supports the interoperability between Semidynamics' Atrevido™ and Avispado™ 64-bit RISC-V processor IP cores and Arteris’ Ncore cache coherent network-on-chip (NoC) system IP. The combined solution delivers interoperability to speed up the development of AI/ML and HPC designs.

"For markets like machine learning, key-value stores and recommendation systems, we optimize our customizable RISC-V processors and supporting technologies, such as Vector Units, Tensors Units and Gazzillion™, to deal with the computing of highly sparse data, with long memory latencies, and high-bandwidth memory systems," said Roger Espasa, CEO of Semidynamics. "Efficient data transport within our cores and between chips and chiplets is vital for overall system performance. Partnering to pre-integrate with Arteris' Ncore cache coherent technology will result in accelerated project schedules for our mutual customers."

"Our goal is to support our customers’ choices on processor IP while providing the SoC connectivity backbone for the emerging RISC-V ecosystem and its use in combination with other processor architectures," said Michal Siwinski, CMO at Arteris. "Our collaboration with Semidynamics supports our mission to catalyze SoC innovation so our shared customers can focus on dreaming up what comes next and creating leading-edge products, including those supporting the rapid evolution of AI."

The partnership currently focuses on a demonstrator design integrating a Semidynamics’ four-core RISC-V cluster using Arteris’ Ncore cache coherent NoC technology. This collaboration is expected to be available for customer demonstrations in Q1 2024. For more information, contact info@arteris.com and info@semidynamics.com.

October 24, 2023
Semidynamics launches first fully-coherent RISC-V Tensor unit to supercharge AI applications

Optimised for its 64-bit fully customisable RISC-V cores

 

Barcelona, Spain – 24 October, 2023. Semidynamics has just announced a RISC-V Tensor Unit that is designed for ultra-fast AI solutions and is based on its fully customisable 64-bit cores.

State-of-the-art Machine Learning models, such as LLaMa-2 or ChatGPT, consist of billions of parameters and require large computation power, of the order of several trillion operations per second. Delivering such massive performance while keeping energy consumption low poses a significant challenge for hardware design. The solution to this problem is the Tensor Unit, which provides unprecedented computation power for performance-hungry AI applications. The bulk of computations in Large Language Models (LLMs) is in fully-connected layers that can be efficiently implemented as matrix multiplication. The Tensor Unit provides hardware specifically tailored to matrix multiplication workloads, resulting in a huge performance boost for AI.

 

Figure 1 Semidynamics Tensor Unit

The Tensor Unit is built on top of the Semidynamics RVV1.0 Vector Processing Unit and leverages the existing vector registers to store matrices, as shown in Figure 1. This enables the Tensor Unit to be used for layers that require matrix multiply capabilities, such as Fully Connected and Convolution, and the Vector Unit to be used for the activation function layers (ReLU, Sigmoid, Softmax, etc.), which is a big improvement over stand-alone NPUs that usually have trouble dealing with activation layers.
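
As an illustration of that division of labour (a minimal NumPy sketch with hypothetical sizes, not Semidynamics code), a convolution layer can be lowered via im2col into the matrix multiplication the Tensor Unit accelerates, while the following ReLU is exactly the kind of activation work left to the Vector Unit:

import numpy as np

def im2col(x, kh, kw):
    # Unfold every kh x kw patch of a single-channel image into a row,
    # so the convolution below becomes one matrix multiplication.
    H, W = x.shape
    rows = [x[i:i+kh, j:j+kw].ravel()
            for i in range(H - kh + 1) for j in range(W - kw + 1)]
    return np.stack(rows)                            # (out_h*out_w, kh*kw)

x = np.random.randn(8, 8).astype(np.float32)         # hypothetical input
k = np.random.randn(3, 3).astype(np.float32)         # hypothetical kernel
conv = (im2col(x, 3, 3) @ k.ravel()).reshape(6, 6)   # MatMul: Tensor Unit's role
act = np.maximum(conv, 0.0)                          # ReLU: Vector Unit's role
print(act.shape)                                     # (6, 6)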

The Tensor Unit leverages both the Vector Unit capabilities as well as the Atrevido-423 Gazzillion™ capabilities to fetch the data it needs from memory. Tensor Units consume data at an astounding rate and, without Gazzillion, a normal core would not keep up with the Tensor Unit’s demands. Other solutions rely on difficult-to-program DMAs to solve this problem. Instead, Semidynamics seamlessly integrates the Tensor Unit into its cache-coherent subsystem, opening a new era of programming simplicity for AI software.

In addition, because the Tensor Unit uses the vector registers to store its data and does not include new, architecturally-visible state, it seamlessly works under any RISC-V vector-enabled Linux without any changes. 

Figure 2 The overall ensemble with the Atrevido-423 core, the Gazzillion Unit, the Vector Unit and the Tensor Unit

Semidynamics’ CEO and founder, Roger Espasa, said, “This new Tensor Unit is designed to fully integrate with our other innovative technologies to provide solutions with outstanding AI performance. First, at the heart, is our 64-bit fully customisable RISC-V core. Then our Vector Unit which is constantly fed data by our Gazzillion technology so that there are no data misses. And then the Tensor Unit that does the matrix multiplications required by AI. Every stage of this solution has been designed to be fully integrated with the others for optimal AI performance and very easy programming. The result is a performance increase of 128x compared to just running the AI software on the scalar core. The world wants super-fast AI solutions and that is what our unique set of technologies can now provide.”

Further details on the Tensor Unit will be disclosed at the RISC-V North America Summit in Santa Clara on November 7th 2023.

October 03, 2023
Semidynamics and SignatureIP create a fully tested RISC-V multi-core environment and CHI interconnect

Advanced multi-core RISC-V chips can now easily be created for applications such as AI and ML

Barcelona, Spain – 3 October, 2023. There is an ever-increasing demand for more powerful chip designs for advanced applications, such as AI and ML, that require many cores on one chip. To facilitate this, Semidynamics and SignatureIP have partnered to integrate their respective IPs to provide a fully-tested RISC-V, multi-core environment and CHI interconnect for the development of state-of-the-art chip designs. 

Semidynamics’ CEO and founder, Roger Espasa, said, “Working closely together with other members of the RISC-V community is one of the driving forces of RISC-V’s rapidly growing success. There is a natural synergy between the two companies that has resulted in a solution that enables cutting-edge, multi-core chips to be created. SignatureIP’s C-NoC CHI interconnect solution makes it very straightforward to lay out the Network on Chip (NoC) for multiple cores on a chip using our mature, proven technologies which minimizes risks and accelerates time to market.”

SignatureIP’s Coherent NoC is architected for performance and scalability across chiplets. It supports a transport layer for chiplet communication. The C-NoC IP is a directory-based architecture with distributed home-node support and optional system level caches for high performance. SignatureIP’s state-of-the-art inoculator.ai tool supports automation to generate a physically-aware NoC for a system. Combined with the automation tool and a simple licensing model, the process of evaluation, licensing, and implementation becomes an easy task for SignatureIP’s customers. 

Kishore Mishra, SignatureIP’s CTO, added, “Semidynamics revolutionized the 64-bit RISC-V processor with cores that are fully customizable using its ‘Open Core Surgery’ approach. This goes deep into the core and is not the tweakable approach typically found in IPs. Combining our technologies now enables multi-core chip designs to be created on this fully coherent RISC-V/CHI platform and then prototyping on an FPGA to demonstrate the integrated performance. We have fully tested them together to ensure compatibility and minimization of verification time”. 

September 20, 2023
Semidynamics shortlisted for Semiconductor Product of the Year (Digital) in Elektra Awards 2023

Barcelona, Spain – 20 September, 2023. Semidynamics is a finalist in this year’s Elektra Awards in the category of Semiconductor Product of the Year (Digital). This is for the company’s new Atrevido 64-bit RISC-V IP processor core. The core is unique in that it is fully customisable to precisely meet the customer’s specifications. This includes Semidynamics’ ability to open up the core to insert a customer’s specific instructions, which it calls Open Core Surgery™.

Semidynamics’ CEO and founder, Roger Espasa, said, “We have been in stealth mode until this year, perfecting our cores and supporting technologies. The Vector Unit can process unprecedented amounts of data bits and, to fetch all this data from memory, we have our Gazzillion™ technology that can handle up to 128 simultaneous requests for data and track them back to the correct place in whatever order they are returned. Together our technologies take RISC-V to a whole new level with the fastest handling of big data currently available that will open up opportunities in many application areas of High-Performance Computing such as video processing, AI and ML. We are delighted and honoured that our innovative technology has been recognised as a finalist by the judges of these prestigious international Awards.”

Details of the awards can be found at Elektra Awards 2023 and the winners will be announced at the awards ceremony on Wednesday 29 November at the Grosvenor House Hotel, Park Lane, London.

July 20, 2023
Semidynamics announces fully customisable, 4-way Atrevido 423 RISC-V core for big data applications

Launches at RISC-V Summit China on booth B2, August 23-25, 2023

Barcelona, Spain – 20 July, 2023. Semidynamics, the only provider of fully customisable RISC-V processor IP, has launched the next member of its Atrevido family of 64-bit cores. The Atrevido 423 has a wider, 4-way pipeline, allowing it to decode and retire up to twice as many instructions as its recently launched, 2-way Atrevido 223 core. It is also coupled with more functional units, which significantly increases the IPC (instructions-per-cycle).
 
Roger Espasa, Semidynamics’ CEO, said, “The Atrevido 423 is particularly well suited for applications that require massive amounts of data. It shines when the data required cannot fit in the memory hierarchy levels that are closer to the core (such as L1, L2 or even L3), tolerating very large latencies without compromising on throughput thanks to our Gazzillion™ Misses technology. This can handle up to 128 simultaneous requests for data and track them back to the correct place in whatever order they are returned. Gazzillion™ allows the core to access memory hierarchy levels far away from the core without an impact on bandwidth or throughput. Effectively, Gazzillion™ technology removes the latency issues that can occur when using CXL technology, enabling far-away memory to be accessed at the supercharged rates that it was designed to deliver. This makes Atrevido very well positioned to handle AI and HPC workloads, which typically need to rapidly access very large amounts of data from main memory.”


 
 

Atrevido can be configured as a coherent core with a CHI NoC or as a simpler, incoherent core connected via an AXI interface. Furthermore, with an improved TLB and MMU and support for SV39/48/57, the core is well suited for running applications with large memory footprints using Linux. The Out-Of-Order core comes with a large menu of RISC-V extensions that can be added. Most notably, it can be configured with the in-house Vector Unit, which fully supports the latest RISC-V vector spec. Other important extensions are bit manipulation, crypto, single-precision FP, double-precision FP and half-precision FP, and bfloat16. Customers can also optionally choose to protect the Data cache with ECC and the Instruction cache with parity, if required for their target markets. Furthermore, the Atrevido core is fully compliant with the latest RVA22 RISC-V profile. The cores are process agnostic with versions already being supplied down to 5nm.
 
Roger Espasa added, “Semidynamics has the fastest cores on the market for moving large amounts of data, with a cache line per clock at high frequencies even when the data does not fit in the cache. And this can be done at frequencies up to 2.4 GHz on the right node. The rest of the market averages about a cache line every many, many cycles, which is nowhere near Semidynamics’ one every cycle.”
 
Crypto-Enabled
The scalar crypto extension implemented follows the latest specification (Zks and Zk) and provides high performance encryption for algorithms such as SHA2-256, SHA2-512, ShangMi 3, ShangMi 4, AES-128, AES-192, and AES-256. The Atrevido 423 constant-time implementation provides security against side-channel attacks while still delivering a high-performance crypto solution. 
 
Open Core Surgery for full customisation
“Customers for these kinds of state-of-the-art cores want to have unique solutions with their own special secret sauce built in,” explained Espasa. “We are unique in offering Open Core Surgery™ where we open up the core to insert custom instructions within it. Other companies’ cores are only configurable from a set of predetermined options. This completely protects the customer’s ASIC from copying and protects its multi-million-dollar investment in the new ASIC. It also means that it is optimised for Power, Performance and Area with no unnecessary overheads or compromises.”
 
Semidynamics can implement a customer’s ‘secret sauce’ features into the RTL in a matter of weeks, which is something that no-one else offers. Semidynamics also enables customers to achieve a fast time to market for their customised core as a first drop can be delivered that will run on an FPGA. This enables the customer to check functionality and run software on it while Semidynamics does the core verification. By doing these actions in parallel, the product can be brought to market faster and with reduced risk.
 
Vector Unit
Key to this is Semidynamics’ Vector Unit that is the largest, fully customisable Vector Unit in the RISC-V market, delivering up to 2048b of computation per cycle for unprecedented data handling. The Vector Unit is composed of several 'vector cores', roughly equivalent to a GPU core, that perform multiple calculations in parallel. Each vector core has arithmetic units capable of performing addition, subtraction, fused multiply-add, division, square root, and logic operations. Semidynamics' vector core can be tailored to support different data types: FP64, FP32, FP16, BF16, INT64, INT32, INT16 or INT8, depending on the customer’s target application domain. The largest data type size in bits defines the vector core width or ELEN. Customers then select the number of vector cores to be implemented within the Vector Unit, either 4, 8, 16 or 32 cores, catering for a very wide range of power-performance-area trade-off options. Once these choices are made, the total Vector Unit data path width or DLEN is ELEN x number of vector cores. Semidynamics supports DLEN configurations from 128b to 2048b.
 
Uniquely, Semidynamics offers a second key choice in the Vector Unit: the number of bits of each vector register (known as VLEN) can also be tailored to customers’ needs. While most other vendors assume that VLEN is equal to DLEN (i.e., a 1X ratio), Semidynamics offers 2X, 4X and 8X ratios. When the VLEN is larger than the DLEN, a vector operation uses multiple cycles to execute. For example, when VLEN=2048 and DLEN=512, each vector arithmetic operation will take 4 clocks to execute. This is a great feature for tolerating large memory latencies and for reducing power. It unleashes the ability of the Vector Unit to process unprecedented amounts of data, which are continuously fed to it by Gazzillion™.
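
The arithmetic behind these choices is simple, and the sketch below (plain Python, with one hypothetical configuration chosen so that it reproduces the VLEN=2048 / DLEN=512 example above) ties the parameters together:

# DLEN = ELEN x number of vector cores; a vector op takes VLEN / DLEN cycles.
def vector_unit_config(elen_bits, n_vector_cores, vlen_ratio):
    dlen = elen_bits * n_vector_cores
    vlen = dlen * vlen_ratio
    return dlen, vlen, vlen // dlen

# Hypothetical choice: ELEN=64 (e.g. FP64 support), 8 vector cores, 4X VLEN ratio
dlen, vlen, cycles = vector_unit_config(64, 8, 4)
print(dlen, vlen, cycles)   # 512 2048 4 -> each vector arithmetic op takes 4 clocks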
 

June 01, 2023
Semidynamics announces largest, fully customisable Vector Unit in the RISC-V market, delivering up to 2048b of computation per cycle for unprecedented data handling

Launches at RISC-V Summit Europe 2023 (booth 6)

Barcelona, Spain – 1 June, 2023. Semidynamics has announced its new, entirely customisable Vector Unit to go with its innovative range of fully customisable 64-bit RISC-V cores. The Vector Unit is totally compliant with the RISC-V Vector Specification 1.0, with many additional customisable features to provide enhanced data handling capabilities. Together they set a new standard for data handling, both in terms of unprecedented speed and volume.

 



Semidynamics’ CEO and founder, Roger Espasa, explained, “Our recently announced Atrevido™ core is unique in that we can do ‘Open Core Surgery’ on it. This means that, unlike other vendors’ cores that are just configurable from a set of options, we actually open up the core and change the inner workings to add features or special instructions to create a totally bespoke solution. We have taken the same approach with our new Vector Unit to perfectly complement the ability of our cores to rapidly process massive amounts of data.”   

A Vector Unit is composed of several 'vector cores', roughly equivalent to a GPU core, that perform multiple calculations in parallel. Each vector core has arithmetic units capable of performing addition, subtraction, fused multiply-add, division, square root, and logic operations. Semidynamics' vector core can be tailored to support different data types: FP64, FP32, FP16, BF16, INT64, INT32, INT16 or INT8, depending on the customer’s target application domain. The largest data type size in bits defines the vector core width or ELEN. Customers then select the number of vector cores to be implemented within the Vector Unit, either 4, 8, 16 or 32 cores, catering for a very wide range of power-performance-area trade-off options. Once these choices are made, the total Vector Unit data path width or DLEN is ELEN x number of vector cores. Semidynamics supports DLEN configurations from 128b to 2048b.

Semidynamics has equipped its Vector Unit with a high-performance, cross-vector-core network that provides all-to-all connectivity between the vector cores at high bandwidth, even for the very large, 32-vector core option. The cross-vector-core unit is used for specific instructions in the RISC-V standard that shuffle data between the different vector cores, such as vrgather, vslide, etc.

Uniquely, Semidynamics offers a second key choice in the Vector Unit: the number of bits of each vector register (known as VLEN) can also be tailored to customers’ needs. While most other vendors assume that VLEN is equal to DLEN (i.e., a 1X ratio), Semidynamics offers 2X, 4X and 8X ratios. When the VLEN is larger than the DLEN, a vector operation uses multiple cycles to execute. For example, when VLEN=2048 and DLEN=512, each vector arithmetic operation will take 4 clocks to execute. This is a great feature for tolerating large memory latencies and for reducing power.

“This unleashes the ability for the Vector Unit to process unprecedented amounts of data bits,” added Espasa. “And to fetch all this data from memory, we have our Gazzillion™ technology that can handle up to 128 simultaneous requests for data and track them back to the correct place in whatever order they are returned. Together our technologies take RISC-V to a whole new level with the fastest handling of big data currently available that will open up opportunities in many application areas of High-Performance Computing such as video processing, AI and ML.”

The new Vector Unit is Out-Of-Order and pairs with Semidynamics’ Out-Of-Order Atrevido core and upcoming In-Order cores. If required, Semidynamics can do Open Core Surgery™ on cores and Vector Units to provide special interfaces and protocols to a customer’s proprietary IP block.
 

April 17, 2023
Semidynamics launches world’s first fully customisable RISC-V IP cores

Ideal for handling large amounts of data
 
Barcelona, Spain – 17 April, 2023. Semidynamics, the only provider of fully configurable RISC-V processor IP, has announced the world’s first, fully customisable, 64-bit RISC-V family of cores that are ideal for handling large amounts of data for applications such as AI, Machine Learning (ML) and High-Performance Computing (HPC). The cores are process agnostic with versions already being supplied down to 5nm. 

Semidynamics CEO and founder, Roger Espasa, explained, “Until now, RISC-V processor cores had configurations that were fixed by the vendor or had a very limited number of configurable options such as cache size, address bus size, interfaces and a few other control parameters. Our new IP cores enable the customer to have total control over the configuration, be it new instructions, separate address spaces, new memory accessing capabilities, etc. This means that we can precisely tailor a core to meet each project’s needs so there are no unrequired overheads or compromises. Even more importantly, we can implement a customer’s ‘secret sauce’ features into the RTL in a matter of weeks, which is something that no-one else offers. Every designer using RISC-V wants to have the perfect set of Power, Performance and Area along with unique differentiating features and now, for the first time, they can have just that from us.”
 
The first in the family, which is available for licensing now, is the Atrevido™ core. This has Out-of-Order scheduling combined with the company’s proprietary Gazzillion™ technology so that it can handle highly sparse data, long latencies and the high-bandwidth memory systems that are typical of current machine learning applications. Effectively, Gazzillion technology removes the latency issues that can occur when using CXL technology, enabling far-away memory to be accessed at the supercharged rates that it was designed to deliver.
 
The Gazzillion technology is specifically designed for Recommendation Systems that are a key part of Data Centre Machine Learning. By supporting over a hundred misses per core, an SoC can be designed that delivers highly sparse data to the compute engines without a large silicon investment. In addition, the core can be configured from 2-way up to 4-way to help accelerate the not-so-parallel portions of Recommendation Systems.
 
For the most demanding workloads, such as HPC, the Atrevido core supports large memory capacities with its 64-bit native data path and 48-bit physical address paths. Espasa added, “We have the fastest cores on the market for moving large amounts of data, with a cache line per clock at high frequencies even when the data does not fit in the cache. And we can do that at frequencies up to 2.4 GHz on the right node. The rest of the market averages about a cache line every many, many cycles, which is nowhere near our one every cycle. So, if the application streams a lot of data and/or the application touches very large data that does not fit in cache, we have the best RISC-V cores on the market for your use case.”
 
With its complete MMU support, Atrevido is also Linux-ready, including support for cache-coherent, multi-processing environments from two up to hundreds of cores. It is vector-ready, supporting both the RISC-V Vector Specification 1.0 as well as the upcoming Semidynamics Open Vector Interface. Vector instructions densely encode large numbers of computations to reduce the energy used by each operation. Vector Gather instructions support sparse tensor weights efficiently to help with machine learning workloads.
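
To give a flavour of what Vector Gather buys for sparse weights (a minimal NumPy sketch of the access pattern, not Semidynamics code), only the non-zero weights and their indices are stored, and a single gather pulls the matching activations together:

import numpy as np

w_dense = np.array([0, 0.7, 0, 0, -1.2, 0, 0, 0.3], dtype=np.float32)  # mostly-zero weights
x = np.random.randn(8).astype(np.float32)                              # dense activations

idx = np.nonzero(w_dense)[0]      # positions of the useful weights: [1, 4, 7]
w_nz = w_dense[idx]               # compressed weight storage

# A vector gather loads x at these scattered indices in one vector operation,
# so the dot product only touches the elements that matter.
sparse_dot = np.dot(w_nz, x[idx])
assert np.isclose(sparse_dot, np.dot(w_dense, x))
print(sparse_dot)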
 
He concluded, “We have been in stealth mode while we created the core architecture that the RISC-V community really wants – one with full customisability, not just a few tweakable settings. No-one else has such a complex RISC-V core that can be totally configured to perfectly meet the specific needs of each project rather than having to use an off-the-shelf core and compromise.”
 

September 22, 2021
We are proud to have participated in the release of the new EPAC1.0 RISC-V Test Chip with our Avispado RISC-V core

The test chip contains four vector processing micro-tiles (VPU) composed of an Avispado RISC-V core designed by SemiDynamics and a vector processing unit designed by Barcelona Supercomputing Center and the University of Zagreb. Each tile also contains a Home Node and L2 cache, designed respectively by Chalmers and FORTH, that provide a coherent view of the memory subsystem. The tiles are connected by a very high-speed network on chip and SERDES technology from EXTOLL.

August 24, 2021
Congratulations to Dave Ditzel from Esperanto Technologies on his presentation at Hot Chips 33 Conference

Esperanto Technologies unveiled its energy-efficient, RISC-V-based machine learning accelerator chip at the Hot Chips 33 Conference. SemiDynamics contributed to their product with the overall architecture, the vector instruction set, the tensor extensions and the minion RTL implementation. Looking forward to seeing the product coming out soon!

June 01, 2021
The European Processor Initiative (EPI), a project Semidynamics is proud to be involved in, announced the release of the EPI EPAC1.0 RISC-V Test Chip for fabrication

EPAC combines several accelerator technologies specialized for different application areas. The test chip contains, among other features, four vector processing micro-tiles (VPU) composed of an Avispado RISC-V core designed by SemiDynamics and a vector processing unit designed by Barcelona Supercomputing Center and the University of Zagreb.

December 11, 2020
Congratulations to Esperanto on their announcement of the ET-SoC-1!

SemiDynamics contributed to their product with the overall architecture, the vector instruction set, the tensor extensions and the minion RTL implementation. Looking forward to seeing the product coming out soon!