Grenoble, France – June 29th, 2020

Because of growing costs, data privacy, power consumption and latency concerns, AI and complex data analytics algorithms will increasingly be executed closer to the sensors, where the data is generated. The need for energy-efficient computing solutions at the edge of the network is therefore rising rapidly. Deloitte predicts that the edge AI chip market will grow much more quickly than the overall chip market. To adapt to these new market requirements, we are currently developing two new computing platforms.
- PANTHER DSP platform:
PANTHER is based on an innovative low-latency interconnect offering up to 240 Gbps of bandwidth, enabling up to 16 cacheless DSP cores to be linked together for high processing performance.
It performs 64 MAC operations per cycle with an efficiency of 120 GOPS/W (@2.2 GOPS) in 40 nm LP technology.
PANTHER benefits from enhanced SIMD DSP, NN, and Audio instructions that maximize the number of MAC operations per cycle.
It will be available in Q1 2021.
- RAPTOR Neural Network Accelerator platform:
RAPTOR is a programmable hardware accelerator specialized in Neural Network inference and vision processing. It includes a host core, a DMA engine, and up to 128 MAC units, and supports a wide range of image processing operations.
RAPTOR performs more than 128 MAC operations per cycle with an efficiency of 2200 GOPS/W (@16 GOPS) in 28 nm FD-SOI technology.
It will be available in Q2 2021.
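As a back-of-the-envelope check, the quoted efficiency figures imply an operating power at the stated throughputs. The sketch below is purely illustrative arithmetic (power = throughput ÷ efficiency); actual power depends on workload, voltage, and operating conditions not given here.

```python
def implied_power_mw(throughput_gops: float, efficiency_gops_per_w: float) -> float:
    """Power in mW implied by a throughput (GOPS) and an efficiency (GOPS/W)."""
    return throughput_gops / efficiency_gops_per_w * 1000.0

# PANTHER: 120 GOPS/W at 2.2 GOPS -> roughly 18.3 mW
panther_mw = implied_power_mw(2.2, 120)

# RAPTOR: 2200 GOPS/W at 16 GOPS -> roughly 7.3 mW
raptor_mw = implied_power_mw(16, 2200)
```

These single-digit-milliwatt figures are what make the platforms relevant for always-on edge inference.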
PANTHER and RAPTOR are delivered as plug-and-play platforms with all necessary compilers and RTL configuration tools, allowing front-end designers to accelerate their design cycles.
The two platforms will come pre-verified and silicon-proven on various process nodes. They will be fully interoperable with Dolphin Design's CHAMELEON MCU subsystem to achieve the highest possible level of energy efficiency.