Kernel-Level Pruning for CNN

Date

2023

Publisher

Springer Science and Business Media Deutschland GmbH

Abstract

Deep learning solves many real-life problems with excellent accuracy, but designing a model from scratch faces two challenges. The first is that the dataset for many applications is relatively small, which leads to overfitting. The second is that the computational cost of training is very high when the dataset is huge. Most developers therefore prefer transfer learning, choosing a standard pre-trained model such as VGGNet, ResNet, or GoogLeNet. These pre-trained models are trained on a similar problem with a huge dataset. For example, for the image classification problem, most developers choose a model trained on the ImageNet dataset, which has 1000 images for each of 1000 classes, i.e., 1000 × 1000 images of size 224 × 224 each. Pre-trained models are large and computationally expensive at inference time, making them challenging to deploy in real-life applications. A recent trend in research is to compress deep neural networks to reduce computational cost and memory requirements. In this paper, we focus on kernel-level pruning. We achieve a pruning sparsity of 30 to 40% with a nominal drop in accuracy of 7%. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
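
As a rough illustration of the kernel-level pruning described in the abstract, the sketch below zeroes out individual 2D convolution kernels ranked by L1-norm magnitude in a PyTorch model. This is a minimal sketch under stated assumptions: the PyTorch framework, the L1-norm ranking criterion, and the prune_kernels helper are illustrative choices, since the paper's exact pruning criterion and schedule are not given on this page.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_kernels(model: nn.Module, sparsity: float = 0.3) -> None:
    """Zero out the lowest-L1-norm 2D kernels in every Conv2d layer.

    Kernel-level pruning removes individual (out_channel, in_channel)
    kernel slices, a finer granularity than whole-filter pruning.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight                      # (out_ch, in_ch, kH, kW)
            norms = w.abs().sum(dim=(2, 3))        # L1 norm of each 2D kernel
            k = int(sparsity * norms.numel())      # number of kernels to prune
            if k == 0:
                continue
            threshold = norms.flatten().kthvalue(k).values  # k-th smallest norm
            mask = (norms > threshold).to(w.dtype)          # 1 = keep, 0 = prune
            w.mul_(mask[:, :, None, None])         # zero pruned kernels in place

# Hypothetical usage: prune a pre-trained ImageNet model at 30% kernel
# sparsity, then fine-tune to recover accuracy (fine-tuning loop omitted).
# from torchvision import models
# model = models.vgg16(weights="IMAGENET1K_V1")
# prune_kernels(model, sparsity=0.3)
```

After pruning, the network is typically fine-tuned for a few epochs so that the remaining kernels compensate, which is how the nominal accuracy drop reported in the abstract would be kept small.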

Keywords

Channel Pruning, Kernel-Level Pruning, Neural Network Compression, Structured Pruning

Citation

Lecture Notes in Electrical Engineering, 2023, Vol. 997 LNEE, pp. 71-78
