- Admin
- February 24, 2025

Case Study
Custom DNN Deployment on AI Hardware Accelerator

Client
A U.S.-based semiconductor company that designs ASICs and platform SoCs approached Ignitarium to benchmark an AI accelerator, customize and port deep neural networks, and host the resulting application on an emulator toolbox and subsequently on an FPGA prototype. The AI SoC targets a consumer electronics application requiring object detection across more than 80 classes.
Scope
Customizing and training a neural network, debugging the related source code, and implementing it on the AI accelerator
Challenges
While designing the solution, Ignitarium had to address the following challenges:
● Constraints imposed by deployment on low-memory devices
● Handling layers in the custom neural network that were not supported by the toolbox
● Balancing the expectations of the customer's business and engineering teams
Solution
● The target network had to detect a large number of object classes with reasonable accuracy. Since existing widely used networks such as YOLOv3 have limitations on the number of classes, a custom neural network was used. Ignitarium’s engineering team implemented the pre-processing and post-processing source code in C for the custom network (a representative post-processing sketch follows this list). The network then had to be tested on the hardware accelerator running on FPGA.
● Devised effective workarounds for handling the layers not supported by the toolbox
● Successfully ran a few real-world applications on the emulator and validated the capabilities of the customer’s product
● In the next stage, the custom neural network was benchmarked for accuracy. The measured accuracy came close to the published benchmark figures of state-of-the-art neural networks, and recommendations were made for follow-on activities such as power and speed benchmarking.
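
For illustration, a minimal post-processing sketch in C is shown below: it applies a sigmoid to the raw detection-head logits of one candidate box and keeps the highest-scoring class above a confidence threshold. The tensor layout, class count (80) and threshold are assumptions made only for this example; they are not the customer's actual implementation.

/*
 * Illustrative post-processing sketch (not the customer's production code).
 * Assumes each candidate box yields an objectness logit followed by one
 * logit per class; real layouts, class counts and thresholds depend on
 * the network and toolbox.
 */
#include <math.h>
#include <stdio.h>

#define NUM_CLASSES 80       /* assumed class count for illustration */
#define CONF_THRESHOLD 0.5f  /* assumed confidence cut-off */

static float sigmoidf(float x)
{
    return 1.0f / (1.0f + expf(-x));
}

/* Scan one box's raw logits and report its best class, if confident enough. */
static int decode_box(const float *logits, int *best_class, float *best_conf)
{
    float objectness = sigmoidf(logits[0]);
    *best_class = -1;
    *best_conf = 0.0f;

    for (int c = 0; c < NUM_CLASSES; ++c) {
        float conf = objectness * sigmoidf(logits[1 + c]);
        if (conf > *best_conf) {
            *best_conf = conf;
            *best_class = c;
        }
    }
    return (*best_conf >= CONF_THRESHOLD);
}

int main(void)
{
    /* Dummy logits for one candidate box: high objectness,
       one dominant class logit. */
    float logits[1 + NUM_CLASSES] = { 4.0f };
    logits[1 + 17] = 5.0f;

    int cls;
    float conf;
    if (decode_box(logits, &cls, &conf))
        printf("class %d, confidence %.2f\n", cls, conf);
    else
        printf("no detection above threshold\n");
    return 0;
}

In practice this decode step would run over every candidate box produced by the accelerator and would typically be followed by non-maximum suppression; those details are omitted here for brevity.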
Business Impact

With Ignitarium’s help, the customer demonstrated a complex object detection neural network working on both the emulator and the FPGA setup, which increased the end customer’s confidence in the performance of the AI SoC under design.

The customer saved more than 75% of their effort by leveraging the deep experience of Ignitarium’s core AI team.

With the resulting time savings (~25% reduction), the customer was able to showcase the solution demo with additional features, thereby exceeding the expectations of key stakeholders.