2. Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/1/7
9 results
Item: Petri net based verification of a cooperative work flow model (2009)
Annappa, B.; Jiju, P.; Chandrasekaran, K.; Shet, K.C.
This paper exploits the theory of Petri nets to verify the reachability and soundness of a cooperative workflow model. First, we outline a cooperative workflow model, a modified version of the Bonita workflow model. Bonita is open-source cooperative workflow management software, an ongoing project of the ObjectWeb consortium. Then we describe the cooperative workflow model using a special kind of Petri net called a WF-net. Next, we employ the WF-net to verify the reachability and soundness properties of the model. The Petri-net-based verification shows that the model is reachable and sound. © 2009 IEEE.

Item: Parallelized K-Means clustering algorithm for self aware mobile Ad-hoc networks (2011)
Thomas, L.; Manjappa, K.; Annappa, B.; Ram Mohana Reddy, Guddeti
Providing Quality of Service (QoS) in a Mobile Ad-hoc Network (MANET) in terms of bandwidth, delay, jitter, throughput, etc., is a critical and challenging issue because of node mobility and the shared medium. The work in this paper predicts the best effective cluster while taking QoS parameters into account. The proposed work uses the K-Means clustering algorithm to automatically discover clusters from large data repositories. Further, the iterative K-Means clustering algorithm is parallelized using the Map-Reduce technique to improve computational efficiency and thereby predict the best effective cluster. Hence, the parallel K-Means algorithm is explored to find the best effective cluster, containing the hops that lie in the cluster with the best throughput, in a self-aware MANET. Copyright © 2011 ACM.

Item: Optimization of prefetching in peer-to-peer video on demand systems (2011)
Bafna, P.; Annappa, B.
In Peer-to-Peer Video on Demand systems, Video Cassette Recording (VCR)-style operations (i.e., forward, backward, resume) are used very frequently.
The uncertainty of frequent VCR operations makes it difficult to provide services such as play-as-download. To address this problem, algorithms such as random prefetching and distributed prefetching exist, but each has its own advantages and disadvantages. To overcome this problem, we propose optimized prefetching for Peer-to-Peer (P2P) Video on Demand systems. Simulation results show that the proposed prefetching algorithm significantly reduces seeking delay compared with the random prefetching scheme. © 2011 Springer-Verlag.

Item: Meta-level constructs in content personalization of a web application (2010)
Annappa, B.; Chandrasekaran, K.; Shet, K.C.
In today's business environment, web applications are becoming more and more complex, yet they still need to be flexible to change, easy to maintain, and quick to develop. A reflective technique seems to be the best way to achieve flexibility in web applications when adding personalization features such as recommendations, special offers, etc. Many algorithms help to achieve personalization, but little attention has been paid to the design and modeling process of Internet applications. Personalization helps to cope with the increasing complexity of enterprise-level business applications. High-level, cleanly layered solutions open up promising possibilities for overcoming these difficulties. This paper gives an insight into the content personalization of a web application using meta-level constructs. © 2010 IEEE.

Item: CAMP: Congestion adaptive multipath routing protocol for VANETs (2012)
Raviteja, B.L.; Annappa, B.; Tahiliani, M.P.
Long congestion periods, frequent link failures, and hand-offs in VANETs lead to a larger number of dropped packets and high end-to-end delay, thereby degrading the overall performance of the network.
Congestion control mechanisms, though mainly incorporated in transport protocols, can significantly improve the overall performance of the network if coupled with routing protocols. In this paper we propose the Congestion Adaptive Multipath Routing Protocol (CAMP), which aims to avoid congestion by proactively sending congestion notifications to the sender. The proposed CAMP routing protocol is implemented in Network Simulator-2 (NS-2) and its performance is compared with Ad-hoc On-demand Multipath Distance Vector (AOMDV) in terms of packet drops due to congestion, packet delivery fraction, throughput, and average end-to-end delay. Simulation results show that the CAMP routing protocol achieves a significant performance gain compared to AOMDV. © 2012 Springer-Verlag.

Item: Application of parallel K-means clustering algorithm for prediction of optimal path in self aware mobile ad-hoc networks with link stability (2011)
Thomas, L.; Annappa, B.
Providing Quality of Service (QoS) in terms of bandwidth, delay, jitter, throughput, etc., for a Mobile Ad-hoc Network (MANET), an autonomous collection of nodes, is a challenging issue because of node mobility and the shared medium. This work predicts the optimal link based on link stability, measured as the number of contacts between a pair of nodes, which can then be applied to predict the optimal effective path to the destination while taking QoS parameters into account. It applies the K-Means clustering algorithm to automatically discover clusters from large data repositories, parallelized using the Map-Reduce technique to improve computational efficiency and thereby predict the optimal effective path from source to sink. The work improves on the previous result by pre-assigning the task of finding the most stable link in the MANET and then exploring only that stable link; by doing so, the optimal path is predicted in a more time-efficient way.
© 2011 Springer-Verlag.

Item: Analyzing design patterns for extensibility (2011)
Annappa, B.; Rajendran, R.; Chandrasekaran, K.; Shet, K.C.
A system is said to be extensible if changes can be made to any of its existing functionalities, and/or new functionalities can be added, with minimum impact. To achieve extensibility, it has to be planned properly from the initial stage of application development. Keeping in mind all possible future changes, the designer should select the proper design patterns and complete the design of the application. Once the application design is finished, it should be analyzed to make sure that the application is extensible. © Springer-Verlag Berlin Heidelberg 2011.

Item: A scalable cloud platform using matlab distributed computing server integrated with HDFS (2012)
Dutta, R.; Annappa, B.
The Hadoop Distributed File System (HDFS) is a large-scale data storage system that exhibits several features of a good distributed file system. In this paper we integrate the Matlab Distributed Computing Server (MDCS) with HDFS to build a scalable, efficient platform for scientific computations. We use an FTP server on top of HDFS for data transfer from the Matlab system to HDFS. The motivation for using HDFS for storage with MDCS is to provide an efficient, fault-tolerant file system and to utilize resources efficiently by making each system serve as both a data node for HDFS and a worker for MDCS. We test the storage efficiency of HDFS and compare it with a normal file system for data transfer operations through MDCS. © 2012 IEEE.

Item: Utilization of map-reduce for parallelization of resource scheduling using MPI: PRS (2011)
Thomas, L.; Annappa, B.
Scheduling for speculative parallelization is a problem that has remained unsolved despite its importance [2].
In previous work, scheduling was done using the Fixed-Size Chunking (FSC) technique, which needed several 'dry runs' before an acceptable final chunk size to be scheduled to each processor was found. Many other scheduling methods were originally designed for loops with no dependences, but they focused primarily on the problem of load balancing. In this work we address the problem of scheduling tasks with and without dependences for speculative execution. We have found that a trade-off between minimizing the number of re-executions and reducing overheads can be achieved if the size of the scheduled block of iterations is calculated at runtime. We introduce a scheduling method called Parallelization of Resource Scheduling (PRS), in which we first analyze the processing speed of each worker and, based on that, divide the actual task. The results show a 5% to 10% speedup improvement in real applications with dependences with respect to a carefully tuned PRS strategy. Copyright © 2011 ACM.
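The first entry above verifies reachability of a WF-net. As a rough illustration of the underlying idea only (a toy three-transition net invented here, not the Bonita workflow model from the paper), reachability of a target marking can be checked by breadth-first search over the markings produced by firing enabled transitions:

```python
from collections import deque

# Hypothetical toy net: each transition is (consume, produce) over place names.
transitions = {
    "start":  ({"i": 1}, {"p": 1}),
    "work":   ({"p": 1}, {"q": 1}),
    "finish": ({"q": 1}, {"o": 1}),
}

def enabled(marking, consume):
    # A transition is enabled if every input place holds enough tokens.
    return all(marking.get(place, 0) >= n for place, n in consume.items())

def fire(marking, consume, produce):
    # Firing removes input tokens and adds output tokens.
    m = dict(marking)
    for place, n in consume.items():
        m[place] -= n
    for place, n in produce.items():
        m[place] = m.get(place, 0) + n
    return m

def canon(marking):
    # Canonical form of a marking: drop empty places, sort for hashing.
    return tuple(sorted((p, n) for p, n in marking.items() if n))

def reachable(initial, target):
    """Breadth-first search over markings: can `target` be reached?"""
    goal, seen, queue = canon(target), set(), deque([initial])
    while queue:
        m = queue.popleft()
        key = canon(m)
        if key == goal:
            return True
        if key in seen:
            continue
        seen.add(key)
        for consume, produce in transitions.values():
            if enabled(m, consume):
                queue.append(fire(m, consume, produce))
    return False
```

WF-net soundness adds further conditions (proper completion, no dead transitions), but each reduces to reachability queries of this kind; this toy net is bounded, so the search terminates.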
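Two of the entries above parallelize K-Means clustering with Map-Reduce. A minimal single-round sketch on made-up 2-D points (illustrative only; the papers cluster MANET QoS data, not these toy coordinates): the map step assigns each point to its nearest centroid and can run in parallel per point, and the reduce step averages each group to produce new centroids.

```python
from collections import defaultdict

def kmeans_map(point, centroids):
    """Map step: emit (nearest-centroid-index, point) for one data point."""
    dists = [sum((p - c) ** 2 for p, c in zip(point, centroid))
             for centroid in centroids]
    return dists.index(min(dists)), point

def kmeans_reduce(assignments):
    """Reduce step: average the points assigned to each centroid index."""
    groups = defaultdict(list)
    for idx, point in assignments:
        groups[idx].append(point)
    return {idx: tuple(sum(coords) / len(pts) for coords in zip(*pts))
            for idx, pts in groups.items()}

points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5)]
centroids = [(1.0, 1.0), (9.0, 9.0)]
assignments = [kmeans_map(p, centroids) for p in points]  # parallelizable map
new_centroids = kmeans_reduce(assignments)
```

In a real Map-Reduce deployment the list comprehension would be distributed across workers and the loop repeated until the centroids converge; one round is enough to show the data flow.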
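The PRS entry contrasts fixed-size chunking, which needs dry runs to pick one chunk size, with computing chunk sizes at runtime. A hedged sketch of that general idea with an invented rate-proportional policy (not the paper's actual scheduler): each round, chunks are sized in proportion to each worker's measured speed, so faster workers receive larger blocks of iterations.

```python
def adaptive_chunks(total_iters, worker_rates, rounds=4):
    """Divide iterations into chunks sized at runtime from worker speeds,
    instead of a single fixed chunk size tuned via dry runs."""
    remaining = total_iters
    schedule = []  # (worker_id, chunk_size) in dispatch order
    for r in range(rounds):
        # Spend about half the remaining work per round; the last round
        # spends everything still outstanding.
        budget = remaining if r == rounds - 1 else remaining // 2
        total_rate = sum(worker_rates)
        for wid, rate in enumerate(worker_rates):
            if remaining == 0:
                break
            if r == rounds - 1 and wid == len(worker_rates) - 1:
                chunk = remaining  # drain the tail exactly on the last slot
            else:
                chunk = min(remaining, max(1, round(budget * rate / total_rate)))
            schedule.append((wid, chunk))
            remaining -= chunk
    return schedule

# Worker 0 is measured to be 3x faster than worker 1 (hypothetical rates).
schedule = adaptive_chunks(100, [3, 1])
```

Shrinking the chunks round by round mirrors the trade-off the abstract describes: large early chunks keep overheads low, while small late chunks limit the work lost if a speculative block must be re-executed.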