Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (3 items)
Item: Representation Learning in Continuous-Time Dynamic Signed Networks (Association for Computing Machinery, 2023)
Sharma, K.; Raghavendra, M.; Lee, Y.-C.; Anand Kumar, M.A.; Kumar, S.

Signed networks allow us to model conflicting relationships and interactions, such as friend/enemy and support/oppose. These signed interactions happen in real time. Modeling the dynamics of signed networks is crucial to understanding the evolution of polarization in the network and to enabling effective prediction of the signed structure (i.e., link signs) in the future. However, existing works have modeled either (static) signed networks or dynamic (unsigned) networks, but not dynamic signed networks. Since both sign and dynamics inform the graph structure in different ways, it is non-trivial to determine how to combine the two features. In this work, we propose a new Graph Neural Network (GNN)-based approach to model dynamic signed networks, named SEMBA: Signed link's Evolution using Memory modules and Balanced Aggregation. The idea is to incorporate the signs of temporal interactions using separate modules guided by balance theory and to evolve the embeddings from a higher-order neighborhood. Experiments on 4 real-world datasets and 3 different tasks demonstrate that SEMBA consistently and significantly outperforms the baselines by up to 80% on the task of predicting the signs of future links, while matching state-of-the-art performance on predicting the existence of these links. We find that this improvement is due specifically to the superior performance of SEMBA on the minority negative class. Code is available at https://github.com/claws-lab/semba. © 2023 Copyright held by the owner/author(s).
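The balance-theory guidance mentioned in the abstract ("the friend of my friend is my friend; the enemy of my friend is my enemy") can be illustrated with a minimal sketch: the inferred sign of a two-hop path is the product of its edge signs. This toy edge list and the `two_hop_sign` helper are hypothetical illustrations, not the SEMBA implementation.

```python
# Toy signed edge list: (u, v, sign), with +1 = friend, -1 = enemy.
# Hypothetical example for illustrating balance theory, not the SEMBA code.
edges = [(0, 1, +1), (1, 2, +1), (0, 2, -1)]

# Build an undirected signed adjacency as a dict of dicts.
adj = {}
for u, v, s in edges:
    adj.setdefault(u, {})[v] = s
    adj.setdefault(v, {})[u] = s

def two_hop_sign(adj, u, w):
    """Infer a balanced sign between u and w from two-hop paths:
    balance theory says the sign of a path is the product of its edge signs."""
    votes = [adj[u][v] * adj[v][w]
             for v in adj.get(u, {})
             if w in adj.get(v, {}) and v not in (u, w)]
    if not votes:
        return 0  # no two-hop evidence
    total = sum(votes)
    return (total > 0) - (total < 0)  # majority vote over paths

# friend-of-friend -> balance theory predicts a positive sign for (0, 2),
# even though the observed direct edge is negative (an "unbalanced" triangle).
print(two_hop_sign(adj, 0, 2))  # 1
```

SEMBA goes well beyond this static rule: it maintains separate positive and negative memory modules per node and updates them as timestamped interactions arrive, but the sign-product intuition above is the core of balanced aggregation.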
ACM ISBN 979-8-4007-0124-5/23/10.

Item: Neural Pooling for Graph Neural Networks (Springer Science and Business Media Deutschland GmbH, 2024)
Harsha, S.S.; Mishra, D.

Tasks such as graph classification require graph pooling to learn graph-level representations from constituent node representations. In this work, we propose two novel methods using fully connected neural network layers for graph pooling, namely Neural Pooling Methods 1 and 2. Our proposed methods can handle a variable number of nodes across graphs and are invariant to isomorphic graph structures. In addition, compared to existing graph pooling methods, our methods capture information from all nodes, collect second-order statistics, and leverage the ability of neural networks to learn relationships among node representations, making them more powerful. We perform experiments on graph classification tasks in the bioinformatics and social network domains to determine the effectiveness of our proposed methods. Experimental results show that our methods increase graph classification accuracy over previous works, with a general decrease in standard deviation across multiple runs, indicating greater reliability. The improvement in performance is consistent across several datasets. © Springer Nature Switzerland AG 2024.

Item: FedLSF: Federated Local Graph Learning via Specformers (Institute of Electrical and Electronics Engineers Inc., 2024)
Ram Samarth, B.B.; Annappa, B.; Sachin, D.N.

The abundance of graph data and the associated privacy concerns in real-world scenarios highlight the need for a secure and distributed methodology that applies Federated Learning (FL) to Graph Neural Networks (GNNs). While spatial GNNs have been explored in FL, spectral GNNs, which capture rich spectral information, remain relatively unexplored.
Although attention-based mechanisms enhance the expressiveness of GNNs, challenges persist in the spatial approach for FL due to cross-client edges. This work introduces two information-capture methods for spectral GNNs in FL settings, Global Information Capture and Local Information Capture, which address cross-client edges. Federated Local Specformer (FedLSF) is proposed as a novel methodology that combines local information capture with the state-of-the-art (SOTA) Specformer, enabling local graph learning on clients. FedLSF leverages Specformers, which combine spectral and attention approaches by integrating Eigen Encoding, the Transformer architecture, and graph convolution. This enables capturing rich information from eigen spectra and addresses concerns related to cross-client edges through fully connected eigen-spaces. Experimental results demonstrate FedLSF's efficacy on both homophily and heterophily datasets, showing significant accuracy improvements (2-50%) in highly non-independent and identically distributed (non-IID) scenarios compared to the present SOTA. This research advances attention-based spectral mechanisms in FL for GNNs, providing a promising solution for preserving privacy in non-IID graph data environments. The implementation can be found at https://github.com/achiverram28/FedLSF-DCOSS. © 2024 IEEE.
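As a rough illustration of the federated side of such a setup, the sketch below shows a FedAvg-style parameter average, where a server combines locally trained model weights in proportion to each client's data size. The client dictionary, the `fed_avg` helper, and the weighting by graph count are assumptions for illustration only; FedLSF's actual aggregation is defined in the paper and its repository.

```python
# Hypothetical client states after a round of local training.
# "W" stands in for a flattened parameter vector of a local spectral GNN;
# "n_graphs" is the size of the client's local dataset.
client_weights = {
    "client_a": {"W": [0.2, 0.4], "n_graphs": 30},
    "client_b": {"W": [0.6, 0.0], "n_graphs": 10},
}

def fed_avg(clients):
    """Server-side FedAvg: average parameters weighted by local data size.
    Raw graphs never leave the clients; only parameters are shared."""
    total = sum(c["n_graphs"] for c in clients.values())
    dim = len(next(iter(clients.values()))["W"])
    avg = [0.0] * dim
    for c in clients.values():
        for i, w in enumerate(c["W"]):
            avg[i] += w * c["n_graphs"] / total
    return avg

print(fed_avg(client_weights))
```

The privacy benefit claimed by FL approaches like this is that each client's (possibly non-IID) graph data stays local, and only model parameters are exchanged with the server.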
