Comparing Different Sequences of Pruning Algorithms for Hybrid Pruning

dc.contributor.authorPragnesh, T.
dc.contributor.authorMohan, B.R.
dc.date.accessioned2026-02-06T06:34:38Z
dc.date.issued2023
dc.description.abstractMost developers face two significant issues while designing the architecture of a neural network. First, the available dataset for many real-life problems is relatively small, leading to overfitting. Second, when a dataset is large enough, the computational cost of training a model on it is enormous. Thus, most developers use transfer learning with a standard model such as VGGNet, ResNet, or GoogLeNet. These standard models are memory- and computation-intensive during inference, making them infeasible to deploy on resource-constrained devices. A recent research trend is to compress the standard model used for transfer learning to reduce memory and computing costs. In a CNN, approximately 10% of the parameters reside in the convolution layers yet contribute about 90% of the computational cost, while roughly 90% of the parameters reside in the dense layers and contribute only about 10% of the cost. This paper therefore focuses on structured pruning of the parameters in the convolution layers to reduce computational cost. We explore and compare the following pruning techniques: 1) channel pruning with a quantitative score, 2) kernel pruning with a quantitative score, 3) channel pruning with a similarity score, and 4) kernel pruning with a similarity score. Finally, we try several combinations of these pruning techniques to form a hybrid pruning scheme. © 2023 IEEE.
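The abstract mentions channel pruning driven by a quantitative score. A common instance of such a score (an assumption here for illustration, not necessarily the exact criterion used in the paper) is the L1 norm of each output channel's filter weights: low-norm channels are judged least important and removed, shrinking the layer in a structured way. A minimal pure-Python sketch:

```python
def prune_channels_l1(weights, keep_ratio=0.5):
    """Illustrative structured channel pruning by L1-norm score.

    weights: list of filters, one per output channel; each filter is a
             flat list of its weight values (in_channels * kH * kW floats).
    keep_ratio: fraction of output channels to retain.
    Returns (pruned_weights, kept_indices).
    """
    # Quantitative score per channel: sum of absolute weight values (L1 norm).
    scores = [sum(abs(w) for w in filt) for filt in weights]
    n_keep = max(1, round(keep_ratio * len(weights)))
    # Rank channels by score, keep the top-scoring ones in original order.
    ranked = sorted(range(len(weights)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:n_keep])
    return [weights[i] for i in keep], keep

# Example: four output channels, prune half by L1 score.
w = [[1.0, -1.0], [0.1, 0.1], [2.0, 2.0], [0.5, -0.5]]
pruned, kept = prune_channels_l1(w, keep_ratio=0.5)
print(kept)  # [0, 2]
```

In a real CNN the kept indices must also be propagated to the next layer's input channels so that tensor shapes stay consistent; kernel pruning applies the same scoring idea at the finer granularity of individual kernels within a filter.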
dc.identifier.citation2023 14th International Conference on Computing Communication and Networking Technologies, ICCCNT 2023, 2023.
dc.identifier.urihttps://doi.org/10.1109/ICCCNT56998.2023.10307846
dc.identifier.urihttps://idr.nitk.ac.in/handle/123456789/29357
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.subjectPruning for Computational Speed-up
dc.subjectCNN Compression
dc.subjectDeep Compression of CNN
dc.subjectKernel Level Pruning
dc.subjectPruning Channels
dc.titleComparing Different Sequences of Pruning Algorithms for Hybrid Pruning