FPGA-Based Acceleration of a Custom Deep Neural Network Model Inference | Technical Paper
- Team Marketing
- October 3, 2024
We are in an era where Artificial Intelligence (AI) makes real-time decisions in applications such as Advanced Driver Assistance Systems (ADAS), robotics, autonomous vehicles, industrial automation, aerospace, and defense. These applications rely on deep neural networks (DNNs) for accurate predictions.
As neural network architectures grow deeper, their demand for computational power increases, creating a need for hardware accelerators that outperform general-purpose processors. In recent years, Field Programmable Gate Arrays (FPGAs) have emerged as promising hardware accelerators alongside Graphics Processing Units (GPUs) and Application Specific Integrated Circuits (ASICs), owing to their power efficiency and reconfigurability.
Historically, one barrier to FPGA adoption was the specialized programming skill set it demanded of developers. Today, FPGA vendors provide acceleration frameworks with direct support for popular neural network frameworks. These tools considerably reduce development effort and time to market for deep learning model implementations on FPGAs.