Tiny RAPTOR is a complete neural processor solution for deploying AI at the very edge. It combines software and hardware approaches and is the result of more than three years of development with CEA-List in a joint laboratory.
Starting from standard deep learning frameworks, you can seamlessly quantize your networks (with both byte and sub-byte quantization) and deploy them to our dedicated hardware.
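The Tiny RAPTOR toolchain itself is not shown here; as a rough illustration of the framework-side step such a flow starts from, the sketch below uses TensorFlow Lite's standard post-training int8 quantization. The toy model, calibration generator, and output file name are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

# Hypothetical trained model (stand-in for your own network).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # A small calibration set drives the activation-range estimation.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full integer (8-bit) quantization of weights and activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Sub-byte (e.g. 4-bit) quantization and the actual deployment to the Tiny RAPTOR hardware are handled by the dedicated toolchain rather than by this generic flow.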
This tight software/hardware co-design, built on a unique near-memory computing approach, achieves high compute efficiency (>50%) and data reuse (>90%), delivering unprecedented energy-efficiency results on TinyML benchmarks.
On the MLPerf Tiny inference benchmarks from MLCommons, this translates into an energy of 32 µJ at a latency of 10 ms for Visual Wake Words and 12 µJ at a latency of 3.5 ms for keyword spotting. These results were measured on our VEP demonstrator, fabricated in GlobalFoundries' 22FDX process.
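For context, dividing energy by latency gives the average power drawn during an inference (leaving idle power and duty cycling aside):

32 µJ / 10 ms = 3.2 mW (Visual Wake Words)
12 µJ / 3.5 ms ≈ 3.4 mW (keyword spotting)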
Tiny RAPTOR is particularly well suited to both sound and vision use cases such as speech recognition, noise cancellation, sound recognition, face identification, object detection, and image classification. It enables popular AI applications including surveillance, smart cameras, wearables, TWS earbuds, smart speakers, and IoT sensor fusion.
To learn more, contact us or stop by our booth (3A-225) at Embedded World to see demos of our toolchain and the TinyML benchmarks running on silicon. We will also take part in the pitch contest on June 22, from 10:00 to 11:00 (GMT+2), at the Exhibitor Forum in Hall 2, Booth 2-520.