Full metadata record
DC Field | Value | Language
dc.contributor.author | Raghavendra, P. | -
dc.contributor.author | Behki, A.K. | -
dc.contributor.author | Hariprasad, K. | -
dc.contributor.author | Mohan, M. | -
dc.contributor.author | Jain, P. | -
dc.contributor.author | Bhat, S.S. | -
dc.contributor.author | Thejus, V.M. | -
dc.contributor.author | Prabhu, V. | -
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, Vol. 6081 LNCS, Part 1, pp. 476-486 | en_US
dc.description.abstract | Today, the challenge is to exploit, in software, the parallelism offered by multi-core architectures. This can be done by re-writing the application, by exploiting hardware capabilities directly, or by expecting compiler/runtime tools to do the job for us. With the advent of multi-core architectures ([1], [2]), this problem is becoming increasingly relevant. Even today, there are few run-time tools that can analyze the behavioral patterns of such performance-critical applications and re-compile them accordingly. Techniques such as OpenMP for shared-memory programs therefore remain useful for exploiting the parallelism available in the machine. This work studies whether loop parallelization (both with and without applying transformations) is a good approach to running scientific programs efficiently on such multi-core architectures. We found the results encouraging, and we believe the approach could yield good results if implemented fully in a production compiler for multi-core architectures. © Springer-Verlag Berlin Heidelberg 2010. | en_US
dc.title | A study of performance scalability by parallelizing loop iterations on multi-core SMPs | en_US
dc.type | Book chapter | en_US
Appears in Collections: 2. Conference Papers

Files in This Item:
File | Description | Size | Format
7142.pdf | - | 58.41 kB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.