Channel Pruning of Transfer Learning Models Using Novel Techniques

dc.contributor.author: Pragnesh, P.
dc.contributor.author: Mohan, B.R.
dc.date.accessioned: 2026-02-04T12:25:29Z
dc.date.issued: 2024
dc.description.abstract: This research paper addresses the challenges of deploying deep learning models, with a focus on transfer learning. Despite the effectiveness of widely used models such as VGGNet, ResNet, and GoogLeNet, their deployment on resource-constrained devices is impeded by high memory bandwidth and computational costs; to overcome these limitations, the study proposes pruning as a viable solution. Because the numerous parameters in fully connected layers contribute minimally to computational cost, the pruning effort targets the convolutional layers. The research explores and evaluates three pruning methods: Max3 Saliency pruning, the K-Means clustering algorithm, and a Singular Value Decomposition (SVD) approach. Max3 Saliency pruning introduces a slight variation on magnitude-based saliency by using the three maximum values of each kernel, instead of all nine, to compute the saliency score. This method is the most effective, substantially reducing parameters and floating-point operations (FLOPs) for both VGG16 and ResNet56: notably, VGG16 demonstrates a 46.19% reduction in parameters and a 61.91% reduction in FLOPs, while ResNet56 shows a 35.15% reduction in both parameters and FLOPs. The K-Means pruning algorithm is also successful, yielding a 40.00% reduction in parameters and a 49.20% reduction in FLOPs for VGG16, and a 31.01% reduction in both parameters and FLOPs for ResNet56. While the SVD approach produces a new set of values for the condensed channels, its overall pruning ratio is smaller than that of the Max3 Saliency and K-Means methods: a 20.07% parameter reduction and a 24.64% FLOPs reduction for VGG16, and a 16.94% reduction in both parameters and FLOPs for ResNet56. Compared with state-of-the-art methods, the Max3 Saliency and K-Means pruning methods perform better on FLOPs-reduction metrics. © 2024 The Authors.
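The abstract describes Max3 Saliency pruning as scoring each 3x3 kernel by its three largest values rather than all nine. A minimal NumPy sketch of that idea is shown below; the function names, the use of absolute magnitudes, the summation over input channels, and the keep-ratio selection are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def max3_saliency(weights):
    """Saliency per output channel (illustrative): for each 3x3 kernel,
    sum the 3 largest absolute values (instead of all nine), then sum
    over input channels.

    weights: array of shape (out_channels, in_channels, 3, 3)
    """
    out_ch, in_ch = weights.shape[:2]
    abs_w = np.abs(weights).reshape(out_ch, in_ch, -1)
    # keep only the top-3 magnitudes of each flattened 3x3 kernel
    top3 = np.sort(abs_w, axis=-1)[..., -3:]
    return top3.sum(axis=(1, 2))

def channels_to_keep(weights, prune_ratio=0.5):
    """Indices of output channels retained after pruning the lowest-
    saliency fraction `prune_ratio` (assumed selection rule)."""
    scores = max3_saliency(weights)
    n_keep = max(1, int(round(len(scores) * (1 - prune_ratio))))
    keep = np.argsort(scores)[-n_keep:]  # highest-saliency channels
    return np.sort(keep)

# Usage: four output channels with clearly ordered magnitudes;
# pruning half keeps the two most salient channels (0 and 1).
w = np.zeros((4, 1, 3, 3))
w[0] += 1.0
w[1] += 0.5
w[2] += 0.1
w[3] += 0.01
print(channels_to_keep(w, prune_ratio=0.5))  # [0 1]
```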
dc.identifier.citation: IEEE Access, 2024, 12, pp. 94914-94925
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2024.3416997
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/21397
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject: Clustering algorithms
dc.subject: Deep learning
dc.subject: Digital arithmetic
dc.subject: Parameter estimation
dc.subject: Reduction
dc.subject: Singular value decomposition
dc.subject: Accuracy
dc.subject: Channel pruning
dc.subject: Computational modelling
dc.subject: Deep compression of CNN
dc.subject: Filtering algorithm
dc.subject: Kernel
dc.subject: Network compression
dc.subject: Neural network compression
dc.subject: Neural networks
dc.subject: Structured pruning
dc.subject: Transfer learning
dc.subject: Convolution
dc.title: Channel Pruning of Transfer Learning Models Using Novel Techniques
