Performance Improvement of Neural Networks in Edge Computing with Heterogeneous Devices
Defence

Executing deep learning models on edge devices is challenging due to their high computational complexity. The simultaneous use of Central Processing Units (CPUs) and Graphics Processing Units (GPUs) in edge devices is an emerging approach to overcoming this challenge. By leveraging the strengths of both platforms, this method offers advantages such as increased efficiency, reduced power consumption, and improved flexibility. Despite challenges such as programming complexity and data management, solutions including parallel programming frameworks, algorithm optimization, and performance analysis tools pave the way to overcoming these obstacles and realizing the significant benefits of this approach. In this project, we aimed to improve the runtime efficiency of these models by proposing a method for appropriately partitioning neural network models. The method involves a detailed analysis of the computational requirements of each part of the model and the assignment of each part to the CPU or the GPU accordingly. The results show that applying this method yields a significant improvement in runtime and resource consumption: specifically, a 20% improvement for the SqueezeNet model, 14% for the MobileNet model, and 13% for the ResNet model.
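The partitioning idea described above can be sketched as a small dynamic program over per-layer cost estimates: each layer is assigned to the CPU or the GPU so that the sum of layer runtimes plus device-switch transfer costs is minimized. This is only an illustrative sketch; the function name, the per-layer cost inputs, and the uniform transfer penalty are assumptions for the example, not the actual profiling and assignment procedure used in the project.

```python
def partition_layers(cpu_times, gpu_times, transfer_cost):
    """Assign each layer to CPU (0) or GPU (1) to minimize total runtime.

    cpu_times[i] / gpu_times[i] are estimated runtimes of layer i on each
    device; transfer_cost is a flat penalty paid whenever consecutive
    layers run on different devices. Returns (assignment, total_time).
    """
    n = len(cpu_times)
    INF = float("inf")
    dp = [0.0, 0.0]          # best total time so far, ending on device d
    backptrs = []            # backptrs[i][d] = best previous device
    for i in range(n):
        layer_cost = (cpu_times[i], gpu_times[i])
        new_dp, back = [INF, INF], [0, 0]
        for d in (0, 1):
            for prev in (0, 1):
                t = dp[prev] + layer_cost[d]
                if i > 0 and prev != d:
                    t += transfer_cost  # pay for moving activations
                if t < new_dp[d]:
                    new_dp[d], back[d] = t, prev
        backptrs.append(back)
        dp = new_dp
    # Backtrack from the cheaper final device to recover the assignment.
    d = 0 if dp[0] <= dp[1] else 1
    assignment = [d]
    for i in range(n - 1, 0, -1):
        d = backptrs[i][d]
        assignment.append(d)
    assignment.reverse()
    return assignment, min(dp)
```

For example, with `cpu_times=[1, 5, 1]`, `gpu_times=[3, 1, 3]`, and `transfer_cost=0.5`, the optimal plan offloads only the middle (GPU-friendly) layer, which is the kind of mixed CPU/GPU schedule the project exploits.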
