FedPruNet: Federated Learning Using Pruning Neural Network

dc.contributor.author: Gowtham, L.
dc.contributor.author: Annappa, A.
dc.contributor.author: Sachin, D.N.
dc.date.accessioned: 2026-02-06T06:35:29Z
dc.date.issued: 2022
dc.description.abstract: Federated Learning (FL) is a distributed approach to training machine learning and deep learning models on data spread across heterogeneous edge devices. The global model at the server learns by aggregating the local models sent by the edge devices, preserving data privacy and lowering communication costs by communicating only model updates. The edge devices on which the model is trained usually have limited power, storage, and computational resources. This paper addresses the computation overhead on edge devices by presenting a new method, FedPruNet, which trains the model on edge devices using neural network model pruning. The proposed method reduces the computation overhead on edge devices by pruning the model. Experimental results show that, for a fixed number of communication rounds, the model parameters are pruned by up to 41.35% and 65% on the MNIST and CIFAR-10 datasets, respectively, without compromising accuracy compared to training FL edge devices without pruning. © 2022 IEEE.
dc.identifier.citation: 2022 IEEE Region 10 Symposium (TENSYMP 2022), 2022.
dc.identifier.uri: https://doi.org/10.1109/TENSYMP54529.2022.9864565
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/29885
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject: deep learning
dc.subject: edge computing
dc.subject: Federated Learning
dc.subject: model pruning
dc.subject: neural networks
dc.title: FedPruNet: Federated Learning Using Pruning Neural Network
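The abstract describes clients pruning their local models before the server aggregates the updates. A minimal sketch of that general idea, assuming magnitude-based unstructured pruning and plain FedAvg averaging (the record does not specify FedPruNet's actual pruning criterion or aggregation rule, so both are illustrative assumptions):

```python
# Illustrative sketch only: magnitude pruning + FedAvg, not the FedPruNet algorithm itself.

def prune_by_magnitude(weights, prune_fraction):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    n_prune = int(len(weights) * prune_fraction)
    if n_prune == 0:
        return list(weights)
    # Indices of the n_prune weights with the smallest absolute value.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    cut = set(order[:n_prune])
    return [0.0 if i in cut else w for i, w in enumerate(weights)]

def fedavg(client_weights):
    """Server-side FedAvg: element-wise mean of the client model updates."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Each client prunes locally, then sends only its (sparser) update to the server.
clients = [[0.9, -0.05, 0.4, 0.01], [0.8, 0.02, -0.5, 0.03]]
pruned = [prune_by_magnitude(w, 0.5) for w in clients]
global_model = fedavg(pruned)
```

Pruning before communication is what yields the reduced on-device computation and parameter counts (41.35% on MNIST, 65% on CIFAR-10) reported in the abstract; the `prune_fraction` parameter here is a hypothetical stand-in for whatever schedule the paper uses.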