Conference Papers

Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506

Search Results

Now showing 1 - 10 of 21
  • Item
    MatchCloud: Service Matching for Multi Cloud Marketplace
    (Institute of Electrical and Electronics Engineers Inc., 2021) Chakma, A.; Kumar, S.; Mahato, P.K.; Satpathy, A.; Addya, S.K.
    Modern applications execute in the cloud via independent executable entities called virtual machines (VMs). In a typical multi-service-provider (SP) market with variable pricing and heterogeneous VM resource demands, resource allocation/placement is particularly challenging. To maximize the social welfare of multi-SP markets, in this paper we propose a resource allocation technique called MatchCloud, formulated as a one-to-many matching game. Owing to the inapplicability of the classical deferred acceptance algorithm (DAA) due to size heterogeneity, we adopt a modified version of the algorithm. Moreover, preference generation is crucial for matching markets; hence, we also present a simple yet efficient technique to assign preferences to the two stakeholders, i.e., VMs and SPs. Simulation results show that the VM-proposing RDA performs better than its SP-proposing counterpart. © 2021 IEEE.
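The one-to-many matching described above can be illustrated with the classical VM-proposing deferred acceptance algorithm. This is a minimal sketch assuming unit-sized VMs and fixed SP slot capacities (the paper's modified algorithm additionally handles size heterogeneity); all names here are hypothetical.

```python
from collections import deque

def deferred_acceptance(vm_prefs, sp_prefs, capacity):
    """VM-proposing deferred acceptance for a one-to-many VM-to-SP matching.

    vm_prefs: dict vm -> list of SPs, most preferred first
    sp_prefs: dict sp -> list of VMs, most preferred first
    capacity: dict sp -> number of VM slots
    """
    # Precompute each SP's ranking of VMs for O(1) comparisons.
    rank = {sp: {vm: i for i, vm in enumerate(prefs)}
            for sp, prefs in sp_prefs.items()}
    next_choice = {vm: 0 for vm in vm_prefs}   # next SP index each VM proposes to
    held = {sp: [] for sp in sp_prefs}         # tentatively accepted VMs per SP
    free = deque(vm_prefs)                     # unmatched VMs still proposing

    while free:
        vm = free.popleft()
        if next_choice[vm] >= len(vm_prefs[vm]):
            continue                           # VM exhausted its list, stays unmatched
        sp = vm_prefs[vm][next_choice[vm]]
        next_choice[vm] += 1
        held[sp].append(vm)
        held[sp].sort(key=lambda v: rank[sp][v])   # keep the SP's favourites first
        if len(held[sp]) > capacity[sp]:
            free.append(held[sp].pop())        # evict the SP's least preferred VM
    return held
```

The resulting matching is stable: no VM-SP pair would both prefer each other over their assigned partners.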
  • Item
    Container-based Service State Management in Cloud Computing
    (Institute of Electrical and Electronics Engineers Inc., 2021) Nath, S.B.; Addya, S.K.; Chakraborty, S.; Ghosh, S.K.
    In a cloud data center, client requests are catered to by placing services on its servers. Such services are deployed through a sandboxing platform to ensure proper isolation among services from different users. Due to their lightweight nature, containers have become increasingly popular for supporting such sandboxing. However, to support effective and efficient data center resource usage with minimum resource footprints, improving the containers' consolidation ratio is significant for cloud service providers. Towards this end, in this paper, we propose a promising direction to significantly boost the consolidation ratio of a data center environment by effectively managing the containers' states. We observe that many cloud-based application services are event-triggered, so they remain inactive unless an external service request arrives. We exploit the fact that containers remain idle when the underlying service is not active, and thus such idle containers can be checkpointed until an external service request arrives. However, the challenge here is to design an efficient mechanism such that an idle container can be resumed quickly to prevent loss of the application's quality of service (QoS). We have implemented the system and evaluated it on Amazon Elastic Compute Cloud. The experimental results show that the proposed algorithm can manage the containers' states while increasing the consolidation ratio. © 2021 IFIP.
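A toy sketch of the idea above: containers idle past a threshold are checkpointed and restored on the next request. The class, threshold, and injected clock are illustrative assumptions, not the paper's implementation; a real system would perform CRIU-based checkpoint/restore (e.g. via Docker's experimental `checkpoint` command) rather than flip an in-memory flag.

```python
import time

CHECKPOINT_AFTER = 30.0  # assumed idle seconds before a container is checkpointed

class ContainerStateManager:
    """Toy state manager: containers idle longer than a threshold are
    checkpointed to free resources and restored on the next request."""

    def __init__(self, clock=time.time):
        self.clock = clock
        self.last_seen = {}   # container id -> timestamp of last request
        self.state = {}       # container id -> "running" | "checkpointed"

    def on_request(self, cid):
        if self.state.get(cid) == "checkpointed":
            self.state[cid] = "running"   # real system: restore from checkpoint
        else:
            self.state.setdefault(cid, "running")
        self.last_seen[cid] = self.clock()

    def sweep(self):
        """Periodic pass: checkpoint every container idle past the threshold."""
        now = self.clock()
        for cid, seen in self.last_seen.items():
            if self.state[cid] == "running" and now - seen > CHECKPOINT_AFTER:
                self.state[cid] = "checkpointed"  # real system: CRIU dump
```

The QoS challenge the abstract raises lives in `on_request`: the restore path must be fast enough that the first request after a checkpoint does not time out.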
  • Item
    Automating the Selection of Container Orchestrators for Service Deployment
    (Institute of Electrical and Electronics Engineers Inc., 2022) Chaurasia, P.; Nath, S.B.; Addya, S.K.; Ghosh, S.K.
    With the ubiquitous usage of cloud computing, services are deployed as virtual machines (VMs) on cloud servers. However, VM-based deployment often consumes more resources. To minimize the resource consumption of service deployment, container-based lightweight virtualization is used. Managing the containers for deployment is a challenging problem, as the container managers need to consume few resources while also catering to the needs of the clients. To choose the right container manager, we propose an architecture based on application and user needs, with a machine learning based decision engine at its core. We consider Docker containers for experimentation. The experimental results show that the proposed system can select the proper container manager between a Docker Compose-based manager and Kubernetes. © 2022 IEEE.
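The paper's decision engine is learned from data; as a hedged stand-in, a rule-of-thumb selector over the same kind of application/user features might look like the sketch below. The feature names and thresholds are invented for illustration and are not the paper's trained model.

```python
def choose_orchestrator(num_services, needs_autoscaling, multi_host):
    """Illustrative rule-based selector between a Docker Compose-based
    manager (low overhead, single host) and Kubernetes (autoscaling,
    multi-host scheduling). Thresholds are assumptions, not measured."""
    if multi_host or needs_autoscaling or num_services > 10:
        return "kubernetes"
    return "docker-compose"
```

In the proposed architecture, a classifier trained on resource-usage and workload features would replace these hand-written rules.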
  • Item
    Democratizing University Seat Allocation using Blockchain
    (Institute of Electrical and Electronics Engineers Inc., 2022) Jahnavi, Y.; Prathyusha, M.; Shahanaz, S.; Thummar, D.; Ghosh, B.C.; Addya, S.K.
    Online seat allocation processes such as the Joint Seat Allocation Authority in India have streamlined university seat allocation and reduced the risk of seats remaining vacant. Similar centralized online counseling processes are used for many universities in different countries. In spite of being a collaborative process involving different stakeholders, such systems are centralized, with inherent limitations including lack of transparency, risk of censorship, manipulation, and a single point of failure. In this demonstration, we showcase a decentralized ledger technology based system and application for democratizing the university seat allocation process. We demonstrate that the user experience of the proposed system is almost identical to the traditional centralized one, while offering the additional benefits of transparency, auditability, and non-repudiability of the decentralized architecture. © 2022 IEEE.
  • Item
    A Time Series Forecasting Approach to Minimize Cold Start Time in Cloud-Serverless Platform
    (Institute of Electrical and Electronics Engineers Inc., 2022) Jegannathan, A.P.; Saha, R.; Addya, S.K.
    Serverless computing is a buzzword used commonly in the world of technology and among developers and businesses. Using the Function-as-a-Service (FaaS) model of serverless, one can easily deploy applications to the cloud and go live in a matter of days; it lets developers focus on their core business logic, while backend processes such as infrastructure management, application scaling, and updating software and other dependencies are handled by the cloud service provider. One feature of serverless computing is the ability to scale containers to zero, which results in a problem called cold start. The challenging part is to reduce the cold start latency without consuming extra resources. In this paper, we use SARIMA (Seasonal Auto-Regressive Integrated Moving Average), one of the classical time series forecasting models, to predict the time at which incoming requests arrive, and accordingly increase or decrease the number of required containers to minimize resource wastage, thus reducing the function launch time. Finally, we implement PBA (Prediction Based Autoscaler) and compare it with the default Horizontal Pod Autoscaler (HPA) that comes built into Kubernetes. The results show that PBA performs fairly better than the default HPA while reducing resource wastage. © 2022 IEEE.
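The prediction-based autoscaling idea boils down to forecast-then-scale. The sketch below substitutes a seasonal-naive forecaster for SARIMA (a real implementation would fit a SARIMA model, e.g. with statsmodels), and the per-container throughput figure is an assumed parameter, not from the paper.

```python
import math

def seasonal_forecast(history, period):
    """Seasonal-naive stand-in for SARIMA: predict the next value as the
    mean of past observations at the same phase of the season."""
    n = len(history)
    same_phase = [history[i] for i in range(n % period, n, period)]
    return sum(same_phase) / len(same_phase)

def replicas_needed(predicted_rps, per_container_rps, min_replicas=0):
    """Pre-scale to the predicted load so requests avoid cold containers,
    while scaling down in quiet phases to cut resource wastage."""
    return max(min_replicas, math.ceil(predicted_rps / per_container_rps))
```

Scaling *before* the forecast spike arrives is what removes the cold-start latency; the default HPA reacts only after observed load crosses a threshold.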
  • Item
    A Preliminary Study of Serverless Platforms for Latency Sensitive Applications
    (Institute of Electrical and Electronics Engineers Inc., 2022) Sarathi, T.V.; Sai Nischal Reddy, J.; Shiva, P.; Saha, R.; Satpathy, A.; Addya, S.K.
    Serverless computing is the new-age cloud delivery model wherein resources are provisioned only during event-triggered functions. It dramatically improves the flexibility and scalability of applications compared to virtual machine (VM)/container-based service delivery models. As serverless computing gains significant impetus, major cloud providers such as Amazon, Microsoft Azure, and Google have launched their respective serverless computing platforms. However, selecting an appropriate service provider (SP) that meets the desired quality of service (QoS) is challenging for a user. Moreover, there is not enough public information available to assist users in making such decisions accurately. Hence, in this work we provide a preliminary analysis via real-time experimentation, acting as a stepping stone for users in selecting an appropriate SP. We consider execution time and execution cost as evaluation metrics to assess different real-world SPs' performance under different workloads. Experimental results show that Azure Functions achieved lower execution times than AWS Lambda and Google Cloud Functions, but in terms of execution cost, AWS Lambda costs much less than the other two platforms. © 2022 IEEE.
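Execution cost on such platforms is typically billed as memory-time (GB-seconds) plus a per-request charge. A sketch of that cost model follows; the rate arguments are placeholders to be filled from a provider's current price sheet, not the providers' actual prices.

```python
def faas_cost(invocations, avg_duration_s, mem_gb,
              price_per_gb_s, price_per_million_req):
    """Generic FaaS billing sketch: compute GB-seconds plus request fees.

    All prices are caller-supplied placeholders; real providers also apply
    free tiers, rounding rules, and minimum billed durations ignored here.
    """
    gb_seconds = invocations * avg_duration_s * mem_gb
    return gb_seconds * price_per_gb_s + invocations / 1e6 * price_per_million_req
```

Plugging measured durations per platform into such a model is how execution-time and execution-cost rankings can diverge, as the results above show.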
  • Item
    DeSAT: Towards Transparent and Decentralized University Counselling Process
    (Institute of Electrical and Electronics Engineers Inc., 2022) Thummar, D.; Jahnavi, Y.; Prathyusha, M.; Shahanaz, S.; Ghosh, B.C.; Addya, S.K.
    The admission process in academic institutions (universities, colleges, etc.) is more digitized than ever. From standardized tests to application processing, shortlisting on the basis of merit, and even document verification, everything is now carried out online. However, in spite of huge benefits in terms of convenience, existing admission processes severely lack transparency. The entire process depends on certain central authoritative entities, such as the testing authorities followed by the institutes themselves. Moreover, critical tasks such as verifying students' educational and identity-related documents are tedious, and the effort is duplicated across all institutions. In this work, we attempt to overcome these limitations of the existing admission workflow of academic institutes by designing a distributed ledger based framework that involves the academic institutes, testing authorities, document and credential validators, as well as the students. Our framework, DeSAT, uses verifiable credentials together with a permissioned ledger to remove the duplicated effort in verifying test scores and validating students' documents. In addition, it makes the entire process transparent and auditable while enforcing fair merit-based seat allotment through smart contracts. Through a prototype implementation using Hyperledger Fabric, Indy, and Aries, we demonstrate the practicality of DeSAT and show that our system offers acceptable performance while scaling with the number of participating institutions. © 2022 IEEE.
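Fair merit-based allotment reduces to a deterministic rule any participant can re-run and verify, which is what makes it enforceable by a smart contract. A minimal sketch of such a rule (the function and its inputs are hypothetical, not DeSAT's actual contract interface): students are processed in merit-rank order, and each receives their highest-preference programme with a seat remaining.

```python
def allot_seats(merit_order, preferences, seats):
    """Merit-order serial allotment.

    merit_order: student ids, best rank first
    preferences: dict student -> list of programmes, most preferred first
    seats:       dict programme -> seats available
    """
    remaining = dict(seats)
    result = {}
    for student in merit_order:
        for prog in preferences.get(student, []):
            if remaining.get(prog, 0) > 0:
                remaining[prog] -= 1
                result[student] = prog   # first available choice wins
                break                    # unmatched students get no entry
    return result
```

Because the output depends only on the (on-ledger) merit list, preferences, and seat matrix, any stakeholder can audit an allotment by recomputing it.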
  • Item
    Virtual Machine Placement in Non-Cooperative Cloud Federation-Alliance
    (Institute of Electrical and Electronics Engineers Inc., 2023) Addya, S.K.; Satpathy, A.; Turuk, A.K.; Shaoo, B.
    Many inter-cloud organizations have been proposed to overcome the current limitations of cloud computing, such as service interruption, lack of interoperability, and degradation of services. One such multi-cloud architecture is the cloud federation, where multiple geographically distributed autonomous service providers voluntarily agree to share resources governed by a well-defined set of rules. Although federation offers numerous benefits for service providers, resource sharing is supervised using a strict set of protocols offering limited flexibility. Hence, this paper proposes a relaxed resource-sharing model for service providers called the cloud federation-alliance. The formation of the alliance is modeled as a non-cooperative game among the service providers. The game's stability and the alliance's performance loss are also studied using the price of stability (PoS) and price of anarchy (PoA), respectively. A modified best-fit placement strategy focuses on reducing the power consumed. To assess the alliance placement, we compare its performance with random and worst-fit allocation techniques. This work aims to build a stable, sustainable multi-cloud federation alliance and address this structure's critical issues. Extensive simulation results show stability between 2% and 30% with varying workloads. © 2023 IEEE.
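The power-reduction intuition behind best-fit placement is that packing VMs tightly keeps fewer servers powered on. A simplified single-resource sketch of that policy follows; the paper's modified strategy and power model may differ, and the capacity units are illustrative.

```python
def best_fit_power(vms, servers):
    """Power-aware best fit: place each VM on the already-active server
    with the least spare capacity that still fits it; activate a new
    server only when no active one fits (fewer active servers -> less power).

    vms:     dict vm -> resource demand
    servers: list of server capacities (index = server id)
    """
    used = [0] * len(servers)
    active = [False] * len(servers)
    placement = {}
    for vm, demand in vms.items():
        best = None
        for i, cap in enumerate(servers):
            if active[i] and cap - used[i] >= demand:
                if best is None or cap - used[i] < servers[best] - used[best]:
                    best = i             # tightest fit among active servers
        if best is None:
            fits = [i for i, cap in enumerate(servers)
                    if not active[i] and cap >= demand]
            if not fits:
                continue                 # request rejected
            best = min(fits, key=lambda i: servers[i])  # wake the smallest fit
            active[best] = True
        used[best] += demand
        placement[vm] = best
    return placement
```

Random and worst-fit baselines spread load across more servers, which is why they draw more power in the comparison above.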
  • Item
    MatchVNE: A Stable Virtual Network Embedding Strategy Based on Matching Theory
    (Institute of Electrical and Electronics Engineers Inc., 2023) Keerthan Kumar, T.G.K.; Srivastava, A.; Satpathy, A.; Addya, S.K.; Koolagudi, S.G.
    Network virtualization (NV) can provide greater flexibility, better control, and improved quality of service (QoS) for the existing Internet architecture by enabling heterogeneous virtual network requests (VNRs) to share substrate network (SN) resources. The efficient assignment of SN resources catering to the demands of the virtual machines (VMs) and virtual links (VLs) of the VNRs is known as virtual network embedding (VNE) and is proven to be NP-hard. Deviating from the literature, this paper proposes a framework, MatchVNE, focused on maximizing the revenue-to-cost ratio of VNRs by considering a blend of system and topological attributes that better capture the inherent dependencies among the VMs. MatchVNE performs a stable VM embedding using the deferred acceptance algorithm (DAA). The preferences of the VMs and servers are generated using a hybrid of entropy and the technique for order of preference by similarity to ideal solution (TOPSIS): the attribute weights are determined using entropy, whereas the server and VM rankings are obtained via TOPSIS. Shortest-path-based VL embedding follows VM embedding. The simulation results show that MatchVNE outperforms the baselines, achieving a 23% boost in the average revenue-to-cost ratio and a 44% improvement in the average acceptance ratio. © 2023 IEEE.
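The entropy-plus-TOPSIS preference generation can be sketched in a few lines: entropy derives attribute weights from how much each attribute varies across alternatives, and TOPSIS ranks alternatives by closeness to the ideal solution. This is a minimal sketch that treats every attribute as a benefit; the paper's attribute blend and handling are richer.

```python
import math

def entropy_weights(matrix):
    """Entropy weighting: attributes that vary more across alternatives
    carry more information and receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        raw.append(1 - e)                       # divergence degree
    s = sum(raw)
    return [w / s for w in raw]

def topsis_rank(matrix, weights):
    """TOPSIS: rank alternatives (rows) by relative closeness to the ideal
    solution; all attributes are treated as benefit criteria here."""
    m, n = len(matrix), len(matrix[0])
    norm = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norm[j] for j in range(n)]
         for i in range(m)]
    best = [max(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return sorted(range(m), key=lambda i: -scores[i])
```

Feeding the TOPSIS ranking of servers (per VM) and of VMs (per server) into the DAA yields the stable embedding the abstract describes.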
  • Item
    VMAP: Matching-based Efficient Offloading in IoT-Fog Environments with Variable Resources
    (IEEE Computer Society, 2023) Morey, J.V.; Satpathy, A.; Addya, S.K.
    Fog computing is a promising technology for critical, resource-intensive, and time-sensitive applications. In this regard, a significant challenge is generating an offloading solution that minimizes latency, energy, and the number of outages in a dense IoT-Fog environment. However, existing solutions either focus on a single objective or mainly dedicate fixed-sized resources as virtual resource units (VRUs). Moreover, these solutions are restrictive and not comprehensive, resulting in poor performance. To overcome these challenges, this paper proposes VMAP, a model addressing the above lacunae. The offloading problem is abstracted as a one-to-many matching game between two sets of entities, tasks and fog nodes (FNs), considering the preferences of both. The preferences and parameter weights are generated using the Analytic Hierarchy Process (AHP). Exhaustive simulations indicate that the proposed strategy outperforms the baseline algorithms in average task latency and energy consumption by 35% and 22.2%, respectively. Additionally, resource utilization improves by 28.57%, and 97.98% of tasks complete their execution within their deadlines. © 2023 IEEE.
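AHP derives the parameter weights from a reciprocal pairwise-comparison matrix; a common approximation of the priority vector is the row geometric mean, sketched below. The comparison values in the example are invented for illustration, and the paper may use the exact eigenvector method instead.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector via the row geometric-mean method.

    pairwise[i][j] states how strongly criterion i is preferred over j;
    the matrix is reciprocal: pairwise[j][i] == 1 / pairwise[i][j].
    """
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]
```

For instance, if latency is judged twice as important as energy, `ahp_weights([[1, 2], [0.5, 1]])` yields weights of 2/3 and 1/3, which would then steer the tasks' and FNs' preference lists in the matching game.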