Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results
5 results
Item A study of performance scalability by parallelizing loop iterations on multi-core SMPs (2010) Raghavendra, P.S.; Behki, A.K.; Hariprasad, K.; Mohan, M.; Jain, P.; Bhat, S.S.; Thejus, V.M.; Prabhu, V.
Today, the challenge is for software to exploit the parallelism offered by multi-core architectures. This can be done by rewriting the application, by exploiting hardware capabilities directly, or by expecting compiler and runtime tools to do the job for us. With the advent of multi-core architectures ([1], [2]), this problem is becoming increasingly relevant. Even today, there are few run-time tools that can analyze the behavioral patterns of performance-critical applications and re-compile them accordingly. Techniques such as OpenMP for shared-memory programs therefore remain useful for exploiting the parallelism available in the machine. This work studies whether loop parallelization (both with and without applying transformations) is a good approach to running scientific programs efficiently on such multi-core architectures. We found the results encouraging, and we believe the approach could yield good results if implemented fully in a production compiler for multi-core architectures. © Springer-Verlag Berlin Heidelberg 2010.

Item Achieving operational efficiency with cloud based services (2011) Bellur, K.V.; Krupal, M.; Jain, P.; Raghavendra, P.S.
Cloud computing is the evolution of a variety of technologies that have come together to alter an organization's approach to building IT infrastructure. It borrows from several computing techniques: grid computing, cluster computing, software-as-a-service, utility computing, autonomic computing, and many more. It provides a whole new deployment model for enterprise web applications. The cloud promises significant cost savings when compared to using an internal IT infrastructure.
The "pay for what you use" model of cloud computing is significantly cheaper for a company than the "pay for everything up front" model of internal IT. Hardware virtualization is the enabling technology behind many cloud infrastructure vendor offerings. Through virtualization, a physical server can be partitioned into any number of virtual servers, each running its own operating system within its allocated memory, CPU, and disk footprint. From the perspective of a user or application on a virtual server, nothing suggests that the server is not a real, physical machine. In this paper, we attempt to enhance dynamic cloud-based services using efficient load-balancing techniques. We describe the steps involved in developing and utilizing cloud-based infrastructure so that cloud-based services can be offered to users efficiently. The details described in this paper offer useful insight into the design of load-balancing algorithms for an application offering cloud-based services, while the actual implementation may be based on the exact requirements at hand. © 2011 IEEE.

Item DROCC: Deep Robust One-Class Classification (ML Research Press, 2020) Goyal, S.; Raghunathan, A.; Jain, M.; Simhadri, H.; Jain, P.
Classical approaches to one-class problems, such as one-class SVM and isolation forest, require careful feature engineering when applied to structured domains like images. State-of-the-art methods aim to leverage deep learning to learn appropriate features via two main approaches. The first approach, based on predicting transformations (Golan & El-Yaniv, 2018; Hendrycks et al., 2019a), while successful in some domains, crucially depends on an appropriate domain-specific set of transformations that is hard to obtain in general.
The second approach, minimizing a classical one-class loss on the learned final-layer representations (e.g., DeepSVDD; Ruff et al., 2018), suffers from the fundamental drawback of representation collapse. In this work, we propose Deep Robust One-Class Classification (DROCC), which is applicable to most standard domains without requiring any side-information and is robust to representation collapse. DROCC is based on the assumption that points from the class of interest lie on a well-sampled, locally linear, low-dimensional manifold. Empirical evaluation demonstrates that DROCC is highly effective in two different one-class problem settings and on a range of real-world datasets across different domains: tabular data, images (CIFAR and ImageNet), audio, and time series, offering up to a 20% increase in accuracy over the state of the art in anomaly detection. Code is available at https://github.com/microsoft/EdgeML. © 2020 by the author(s).

Item DROCC: Deep robust one-class classification (International Machine Learning Society (IMLS), 2020) Goyal, S.; Raghunathan, A.; Jain, M.; Simhadri, H.; Jain, P.
Classical approaches to one-class problems, such as one-class SVM and isolation forest, require careful feature engineering when applied to structured domains like images. State-of-the-art methods aim to leverage deep learning to learn appropriate features via two main approaches. The first approach, based on predicting transformations (Golan & El-Yaniv, 2018; Hendrycks et al., 2019a), while successful in some domains, crucially depends on an appropriate domain-specific set of transformations that is hard to obtain in general. The second approach, minimizing a classical one-class loss on the learned final-layer representations (e.g., DeepSVDD; Ruff et al., 2018), suffers from the fundamental drawback of representation collapse.
In this work, we propose Deep Robust One-Class Classification (DROCC), which is applicable to most standard domains without requiring any side-information and is robust to representation collapse. DROCC is based on the assumption that points from the class of interest lie on a well-sampled, locally linear, low-dimensional manifold. Empirical evaluation demonstrates that DROCC is highly effective in two different one-class problem settings and on a range of real-world datasets across different domains: tabular data, images (CIFAR and ImageNet), audio, and time series, offering up to a 20% increase in accuracy over the state of the art in anomaly detection. Code is available at https://github.com/microsoft/EdgeML. © 2020 by the author(s).

Item Identification of Reliability for an Automobile Sub-system Maruti Suzuki Alto (Springer Science and Business Media Deutschland GmbH, 2023) Varghese, L.; Jain, P.
The present work develops a statistical model to assess the reliability of different cars of the same model. The input data were collected from fault identification records at an automobile service station. The data were categorized by the distance covered by the cars, and this distance was converted into a time function by assuming a speed of 60 kilometres per hour (km/h). The data were grouped into three categories: (i) up to 25,000 km, (ii) 25,001–50,000 km, and (iii) 50,001–75,000 km, to identify the various parameters of the study. To calculate reliability, the two-parameter Weibull distribution, with slope (shape) and scale parameters, was selected and applied. Reliability results for the clutch, brake, and suspension were calculated, and remedies were suggested based on the analysis. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
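The reliability calculation described in the last abstract can be sketched as follows: distance covered is converted to operating time at 60 km/h, and the two-parameter Weibull reliability function R(t) = exp(-(t/η)^β) is evaluated at that time. This is a minimal illustrative sketch; the shape (β) and scale (η) values below are hypothetical placeholders, not the fitted parameters reported in the paper.

```python
import math

def km_to_hours(distance_km, speed_kmh=60.0):
    """Convert distance covered into an operating-time figure,
    following the paper's assumption of a 60 km/h average speed."""
    return distance_km / speed_kmh

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta).
    beta is the shape (slope) parameter, eta the scale parameter."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical shape/scale values for one sub-system (illustrative only).
beta, eta = 1.8, 900.0  # shape, scale in operating hours

# Evaluate reliability at the boundaries of the paper's three mileage bands.
for km in (25_000, 50_000, 75_000):
    t = km_to_hours(km)
    print(f"{km:>6} km -> {t:7.1f} hr, R(t) = {weibull_reliability(t, beta, eta):.3f}")
```

With β > 1 the model captures wear-out behaviour: reliability decreases as accumulated operating hours grow, which matches the paper's banding of faults by distance covered.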
