RAPTOR - Vision AI accelerator IP

Effortlessly deploy sub-mW vision tasks at the very edge

Get actionable insights quickly with minimal energy, without compromising performance

Addressing a variety of AI tasks at mW levels

Support popular AI networks with best-in-class energy efficiency

(*) Power numbers account for accelerators and memory in the GF22FDX technology node

Bringing AI computing into energy-constrained sensors

Go beyond tinyML and turn rich AI endpoints into reality

TinyML & Edge AI

Creating a power-efficient, yet AI-rich environment is a challenge.
We solved it so that you don't have to.

Meet RAPTOR, a new computing approach

Raptor is a Neural Processing Unit co-designed with its deployment tool suite from day one, accelerating the development of on-sensor data analytics at ultra-low power.

A near-memory computing architecture, coupled with a data re-use approach, saves significant energy at the system level.
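To give an intuition for why data re-use saves energy, the sketch below counts external-memory fetches for a 1D convolution with and without weight re-use. This is an illustrative model only, not Raptor's actual architecture: the function names and the fetch-counting scheme are assumptions made for the example.

```python
# Hypothetical illustration (not Raptor's actual design): count memory
# fetches for a 1D convolution, with and without data re-use.

def fetches_naive(n_outputs: int, kernel_size: int) -> int:
    # Every output re-fetches its kernel weights and its input window
    # from memory: kernel_size weight reads + kernel_size input reads.
    return n_outputs * kernel_size * 2

def fetches_with_reuse(n_outputs: int, kernel_size: int) -> int:
    # Weights are fetched once and kept near the compute units; each
    # input element is fetched once and re-used across the overlapping
    # windows it contributes to.
    n_inputs = n_outputs + kernel_size - 1
    return kernel_size + n_inputs

if __name__ == "__main__":
    naive = fetches_naive(1000, 9)   # 18000 fetches
    reuse = fetches_with_reuse(1000, 9)  # 1017 fetches
    print(naive, reuse)
```

Under this toy model, memory traffic drops by roughly a factor of the kernel size; since off-chip memory accesses typically dominate the energy budget of DNN inference, this is where the system-level savings come from.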

A deployment framework efficiently maps your deep neural network (DNN) models onto the Raptor NPU.

Try the Raptor deployment framework

The Raptor tool suite enables rapid exploration and implementation of DNNs on Raptor NPU devices.
See the key performance indicators for your own use case.

The deployment tool and the application binaries integrate seamlessly with common frameworks, allowing smooth integration into existing development ecosystems and closing the gap between data science and embedded intelligence engineering.

The private beta version of the Raptor deployment framework is now available.

Silicon proven

The Raptor Neural Network Processor is both silicon and software proven.