Voice is the new touch. From industrial machines to consumer appliances, voice is fast becoming a dominant Human Machine Interface. Ignitarium’s Septra platform implements auditory Deep Learning algorithms to deliver ultra-optimized voice and sound analytics solutions on highly constrained edge devices. Let Septra convert your devices into highly attentive listeners.
Custom Deep Neural Network
Works for stationary & non-stationary noise
Low latency (<25 ms)
Scalable from MCUs to FPGAs to SoCs
(Smartphones, walkie-talkies, VoIP devices, wearables)
Human to Machine Communication
Works well in noisy environments. Coupled with our noise suppression engine, recognition rates above 95% are consistently achieved.
Requires Minimal Voice Samples
Our unique audio data preparation technology expands a minimal set of original voice samples into a synthetic dataset that is orders of magnitude larger. This data preparation tool ships as part of the user software and enables in-field training.
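The internals of our data preparation tool are proprietary, but the idea of expanding a few voice samples into a much larger synthetic set can be sketched with standard augmentation transforms. The minimal example below (random gain, additive noise, and a circular time shift, all illustrative stand-ins) turns one waveform into 32 variants:

```python
import numpy as np

def augment(wave, rng, n_variants=8):
    """Expand one voice sample into many synthetic variants.

    Illustrative only: a production pipeline would use far richer
    transforms (pitch shift, room simulation, codec artifacts, ...).
    """
    variants = []
    for _ in range(n_variants):
        v = wave.copy()
        v *= rng.uniform(0.6, 1.4)               # random gain
        v += rng.normal(0, 0.01, size=v.shape)   # synthetic background noise
        v = np.roll(v, rng.integers(0, len(v)))  # circular time shift
        variants.append(v.astype(np.float32))
    return variants

rng = np.random.default_rng(0)
# One second of a 440 Hz tone at 16 kHz stands in for a recorded keyword.
sample = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000).astype(np.float32)
dataset = augment(sample, rng, n_variants=32)
print(len(dataset))  # 32 synthetic variants from a single original
```

Because each transform is cheap and parameter-free to apply, the same expansion can run on-device, which is what makes in-field training with only a handful of user recordings practical.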
Enabling “Tiny ML” class of applications
Our AI solutions are designed specifically for low-cost, low-power edge devices built on MCUs, DSPs and FPGAs. With an ultra-low memory footprint, customer applications retain more RAM for their own use.
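The document does not disclose how Septra achieves its small footprint, but a standard TinyML technique for fitting models onto MCU-class devices is 8-bit weight quantization, which cuts model memory by 4x versus float32. A minimal symmetric-quantization sketch:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization (illustrative; not
    necessarily the scheme used by Septra)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor for accuracy checks."""
    return q.astype(np.float32) * scale

# A toy 64x64 weight matrix standing in for one layer of a model.
w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
print(w.nbytes, q.nbytes)  # 16384 vs 4096 bytes: 4x smaller
err = np.abs(dequantize(q, s) - w).max()
```

The worst-case rounding error is bounded by half the scale step, which is why quantized networks typically lose little accuracy while freeing most of the RAM for the customer application.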
Speakers & Headphones
Sound Type Identification
IGN-SEC enables the classification of ambient sound, allowing precise identification of various sound types. The underlying algorithms are accurate enough to discriminate between very similar sound types (e.g., two different sirens, or the barks of two different dog breeds).
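IGN-SEC's models are not public, but the core idea of sound-type identification can be sketched: extract a spectral signature from a clip and match it against reference signatures for known classes. The toy example below uses mean FFT band energies and a nearest-centroid match (a crude stand-in for the mel features and deep networks a real classifier would use), with two synthetic "siren" tones as the classes:

```python
import numpy as np

def band_energies(wave, n_bands=16):
    """Crude spectral signature: mean magnitude in equal FFT bands."""
    mag = np.abs(np.fft.rfft(wave))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

def classify(wave, centroids):
    """Return the label whose reference signature is closest."""
    feat = band_energies(wave)
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

# Two toy "sound types": a low tone and a high tone (1 s at 8 kHz).
t = np.arange(8000) / 8000.0
low = np.sin(2 * np.pi * 200 * t)
high = np.sin(2 * np.pi * 3000 * t)
centroids = {"siren_A": band_energies(low), "siren_B": band_energies(high)}

print(classify(np.sin(2 * np.pi * 210 * t), centroids))  # siren_A
```

A 210 Hz tone lands in the same spectral band as the 200 Hz reference, so it matches siren_A; distinguishing genuinely similar real-world sounds is what requires the learned features of a deep network.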
Anomalous Sound Detection
Anomalies in the operation of equipment and infrastructure can be caught early by analysing the sounds picked up by microphones installed on or near the equipment. IGN-SEC categorizes the captured audio as normal or abnormal, enabling early failure prediction for these machines.
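The normal-versus-abnormal decision can be sketched as a simple novelty detector: learn the spectral signature of a healthy machine, then flag clips that drift beyond a threshold. The sketch below uses raw FFT band energies and a mean-plus-k-sigma threshold as stand-ins for the learned features and models IGN-SEC would actually use:

```python
import numpy as np

def signature(wave, n_bands=8):
    """Mean magnitude per FFT band: a crude acoustic fingerprint."""
    mag = np.abs(np.fft.rfft(wave))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

class AnomalyDetector:
    """Fit on recordings of a healthy machine; flag clips whose
    spectral signature drifts beyond a distance threshold."""
    def fit(self, normal_clips, k=4.0):
        sigs = np.stack([signature(c) for c in normal_clips])
        self.mean = sigs.mean(axis=0)
        dists = np.linalg.norm(sigs - self.mean, axis=1)
        self.threshold = dists.mean() + k * (dists.std() + 1e-9)
        return self
    def is_anomalous(self, clip):
        return bool(np.linalg.norm(signature(clip) - self.mean) > self.threshold)

rng = np.random.default_rng(1)
t = np.arange(8000) / 8000.0
# Healthy machine: steady 120 Hz hum plus mild sensor noise.
normal = [np.sin(2*np.pi*120*t) + 0.02*rng.normal(size=t.size) for _ in range(20)]
det = AnomalyDetector().fit(normal)
# Failing bearing: a new strong high-frequency component appears.
bearing_rattle = np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*2500*t)
print(det.is_anomalous(normal[0]), det.is_anomalous(bearing_rattle))
```

Because only normal data is needed for fitting, this style of detector suits real deployments, where recordings of actual failures are rare; the anomalous sound itself never has to be seen in advance.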