Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (3 items)
Item
Cloud Computing Enabled Big Multi-Omics Data Analytics (SAGE Publications Inc., 2021)
Koppad, S.; B, A.; Gkoutos, G.V.; Acharjee, A.
High-throughput experiments enable researchers to explore complex multifactorial diseases through large-scale analysis of omics data. Challenges for such high-dimensional data sets include storage, analysis, and sharing. Recent innovations in computational technologies and approaches, especially in cloud computing, offer a promising, low-cost, and highly flexible solution for the bioinformatics domain. Cloud computing is proving increasingly useful in molecular modeling, omics data analytics (eg, RNA sequencing, metabolomics, or proteomics data sets), and in the integration, analysis, and interpretation of phenotypic data. We review the adoption of advanced cloud-based and big data technologies for processing and analyzing omics data and provide insights into state-of-the-art cloud bioinformatics applications. © The Author(s) 2021.

Item
Clumped-MCEM: Inference for multistep transcriptional processes (Elsevier Ltd, 2019)
Shetty, K.S.; B, A.
Many biochemical events involve multistep reactions; an important biological process among them is transcription. A widely used approach for simplifying multistep reactions is the delayed reaction method. In this work, we devise a model reduction strategy that represents several OFF states by a single state, accompanied by a specified time delay for the burst frequency. Using this model reduction, we develop Clumped-MCEM, which enables both simulation and parameter inference. We apply the method to time-series data from the endogenous mouse glutaminase promoter to validate the model assumptions and infer the kinetic parameters. Further, we compare the efficiency of Clumped-MCEM with the state-of-the-art methods Bursty MCEM2 and delay Bursty MCEM.
Simulation results show that Clumped-MCEM inference is more efficient for time-series data and achieves numerical accuracy similar to that of Bursty MCEM2 and delay Bursty MCEM in less time, reducing computational cost by 57.58% compared with Bursty MCEM2 and by 32.19% compared with delay Bursty MCEM. © 2019 Elsevier Ltd

Item
Pod Scheduling and Proactive Resource Management in an Edge Cluster using MCDM and Federated Learning (Springer Science and Business Media B.V., 2025)
Kumar, N.K.; B, A.; J, H.; Srinivasan, S.; Sand, S.S.
Edge computing, which locates computational resources closer to the data sources, has become crucial in meeting the demands of applications that need high bandwidth and low latency. To cater to edge computing scenarios, KubeEdge, an extension of Kubernetes (K8s), expands its capabilities to meet edge-specific requirements such as limited resources, irregular connections, and heterogeneous environments. Edge trace data cannot be shared between cloud providers because of privacy issues, which makes generic distributed training ineffective. However, despite edge computing's potential advantages, the built-in scheduling algorithms have several drawbacks. A significant problem is the lack of efficient resource management and allocation mechanisms at the edge, which leaves edge nodes underutilized or overloaded, leading to Quality of Service (QoS) violations, and whose inefficient resource utilization results in Service Level Agreement (SLA) violations. In this regard, a VIKOR- and ELECTRE III-based pod scheduling strategy is proposed in this paper and evaluated using Wikipedia and NASA server workloads. The experimental results show a 50% reduction in standard deviation for ELECTRE III and a 40% reduction for VIKOR against the default Kubernetes scheduler. Average response times of 30.6593 ms and 31.8803 ms are achieved for ELECTRE III and VIKOR, respectively, on the Wikipedia dataset.
A proactive resource management system is also proposed for KubeEdge containerized services; it incorporates a federated learning framework to predict future workloads using Bidirectional Long Short-Term Memory (Bi-LSTM) and Gated Recurrent Unit (GRU) models. The experimental comparison shows that, relative to centralized learning, federated learning reduces MSE by 99.65% and 98.64% for CPU utilization and by 89.72% and 76.57% for memory utilization with the GRU and Bi-LSTM models, respectively. The effectiveness of the proposed approach is evaluated using statistical techniques and found to be statistically significant. © The Author(s), under exclusive licence to Springer Nature B.V. 2025.
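The model reduction described in the Clumped-MCEM abstract above — several OFF promoter states collapsed into one state plus a fixed time delay before each transcriptional burst — can be sketched as a small stochastic simulation. This is an illustrative sketch only, not the authors' code; the parameter names (`k_on`, `tau_delay`, `burst_mean`, `gamma`) and the geometric burst-size choice are assumptions for the example.

```python
import math
import random

def simulate_bursty_mrna(k_on, tau_delay, burst_mean, gamma, t_end, seed=0):
    """Simulate mRNA counts under a reduced bursty-transcription model.

    Successive OFF states are "clumped": the waiting time to the next
    burst is an exponential draw (rate k_on) plus a fixed delay
    tau_delay standing in for the skipped intermediate states.
    Each burst adds a geometric number of mRNAs (mean burst_mean);
    each mRNA degrades independently at rate gamma.
    """
    rng = random.Random(seed)
    t, m = 0.0, 0
    times, counts = [0.0], [0]
    next_burst = rng.expovariate(k_on) + tau_delay  # first burst time
    while t < t_end:
        decay_rate = gamma * m
        # Time to next degradation event (infinite if no mRNA present).
        t_decay = rng.expovariate(decay_rate) if decay_rate > 0 else math.inf
        if t + t_decay < next_burst:
            t += t_decay
            m -= 1                                  # one mRNA degrades
        else:
            t = next_burst
            p = 1.0 / (1.0 + burst_mean)            # geometric burst size
            b = 0
            while rng.random() > p:
                b += 1
            m += b
            # Schedule the next burst: exponential wait plus clumped delay.
            next_burst = t + rng.expovariate(k_on) + tau_delay
        times.append(t)
        counts.append(m)
    return times, counts
```

The fixed `tau_delay` term is what makes inter-burst intervals non-exponential, mimicking the multistep OFF cascade with a single delayed reaction.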
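The pod-scheduling item above ranks candidate placements with the VIKOR multi-criteria decision method. A minimal sketch of the standard VIKOR ranking step is shown below; it is not the paper's scheduler, and the example criteria (free CPU as a benefit, latency as a cost) are assumptions for illustration.

```python
def vikor_rank(matrix, weights, benefit, v=0.5):
    """Rank alternatives (e.g. candidate edge nodes) with VIKOR.

    matrix: rows = alternatives, columns = criteria values.
    weights: criterion weights. benefit[j]: True if larger is better.
    v: weight of the group-utility term (0.5 = consensus).
    Returns alternative indices, best first (ascending Q).
    """
    n_crit = len(weights)
    # Best (f*) and worst (f-) value per criterion.
    best = [max(r[j] for r in matrix) if benefit[j] else min(r[j] for r in matrix)
            for j in range(n_crit)]
    worst = [min(r[j] for r in matrix) if benefit[j] else max(r[j] for r in matrix)
             for j in range(n_crit)]
    S, R = [], []
    for row in matrix:
        terms = []
        for j in range(n_crit):
            span = best[j] - worst[j]
            # Weighted normalized distance from the ideal value.
            terms.append(0.0 if span == 0 else weights[j] * (best[j] - row[j]) / span)
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = []
    for i in range(len(matrix)):
        qs = 0.0 if s_minus == s_star else (S[i] - s_star) / (s_minus - s_star)
        qr = 0.0 if r_minus == r_star else (R[i] - r_star) / (r_minus - r_star)
        Q.append(v * qs + (1 - v) * qr)
    return sorted(range(len(matrix)), key=lambda i: Q[i])
```

For example, with three nodes scored on free CPU fraction (benefit) and latency in ms (cost), `vikor_rank([[0.9, 10], [0.2, 50], [0.5, 30]], [0.5, 0.5], [True, False])` puts the dominant node (high CPU, low latency) first.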

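The federated learning framework in the last item lets each edge site train a workload predictor locally so that raw trace data never leaves the site; only model parameters are shared and aggregated. A minimal sketch of one federated-averaging (FedAvg-style) round is given below, with parameters simplified to flat lists of floats — an assumption for the example, not the paper's framework.

```python
def fedavg(client_weights, client_sizes):
    """One aggregation round of federated averaging.

    client_weights: per-client model parameters (flat lists of floats),
    e.g. from locally trained GRU/Bi-LSTM workload predictors.
    client_sizes: number of local training samples per client, used to
    weight each client's contribution to the global model.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    # Weighted average of each parameter across clients.
    return [
        sum(w[p] * (s / total) for w, s in zip(client_weights, client_sizes))
        for p in range(n_params)
    ]
```

In a full system the coordinator would broadcast the averaged parameters back to the edge sites for the next local training round; the privacy benefit is that only these parameters, not the edge traces themselves, cross site boundaries.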