|Title:||A study of performance scalability by parallelizing loop iterations on multi-core SMPs|
|Citation:||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, Vol.6081 LNCS, PART 1, pp.476-486|
|Abstract:||Today, the challenge is for software to exploit the parallelism made available by multi-core architectures. This can be done by re-writing the application, by exploiting hardware capabilities, or by expecting compiler/runtime tools to do the job for us. With the advent of multi-core architectures, this problem is becoming increasingly relevant. Even today, there are few run-time tools that can analyze the behavioral patterns of performance-critical applications and re-compile them accordingly. Techniques such as OpenMP for shared-memory programs therefore remain useful for exploiting the parallelism in the machine. This work studies whether loop parallelization (both with and without applying transformations) is a good approach for running scientific programs efficiently on such multi-core architectures. We found the results encouraging, and we strongly believe this could lead to good results if implemented fully in a production compiler for multi-core architectures. © Springer-Verlag Berlin Heidelberg 2010.|
|Appears in Collections:||2. Conference Papers|