Parallel computing has become an important subject in the field of computer science and has proven to be critical when researching high-performance solutions. The evolution of computer architectures (multi-core and many-core) towards a higher number of cores can only confirm that parallelism is the method of choice for speeding up an algorithm. In the last decade, the graphics processing unit, or GPU, has gained an important place in the field of high-performance computing (HPC) because of its low cost and massive parallel processing power. Supercomputing has become, for the first time, available to anyone at the price of a desktop computer. In this paper, we survey the concept of parallel computing, and especially GPU computing. Achieving efficient parallel algorithms for the GPU is not a trivial task; several technical restrictions must be satisfied in order to achieve the expected performance. Some of these limitations are consequences of the underlying architecture of the GPU and the theoretical models behind it. Our goal is to present a set of theoretical and technical concepts that are often required to understand the GPU and its massive-parallelism model. In particular, we show how this new technology can help the field of computational physics, especially when the problem is data-parallel. We present four examples of computational physics problems: n-body, collision detection, Potts model and cellular automata simulations. These examples are good representatives of the kind of problems that are suitable for GPU computing. By understanding the GPU architecture and its massive-parallelism programming model, one can overcome many of the technical limitations found along the way, design better GPU-based algorithms for computational physics problems, and achieve speedups of up to two orders of magnitude over sequential implementations.
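Cellular automata are perhaps the clearest instance of the data-parallel pattern this survey discusses: every cell applies the same update rule independently of the others, so one GPU thread can be assigned per cell. The sketch below is a minimal CUDA illustration of that mapping, not code from the paper; the grid size n, the Conway-style rule and the periodic boundaries are all assumptions made for the example.

    // One thread per cell of an n x n cellular automaton grid (assumed layout).
    __global__ void ca_step(const int *in, int *out, int n) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= n || y >= n) return;            // guard threads past the grid edge

        int alive = 0;                           // count live neighbours
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                if (dx == 0 && dy == 0) continue;
                int nx = (x + dx + n) % n;       // periodic (toroidal) boundaries
                int ny = (y + dy + n) % n;
                alive += in[ny * n + nx];
            }

        int cell = in[y * n + x];
        out[y * n + x] = (alive == 3 || (cell && alive == 2)) ? 1 : 0;
    }

    // Host-side launch: cover the grid with 16x16 thread blocks and swap the
    // in/out buffers between steps.
    //   dim3 block(16, 16);
    //   dim3 grid((n + 15) / 16, (n + 15) / 16);
    //   ca_step<<<grid, block>>>(d_in, d_out, n);

Because each output cell depends only on the previous grid state, no synchronization between threads is needed within a step, which is exactly the property that makes such problems a good fit for the GPU.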
Scientific Computing Using Consumer Video-Gaming Hardware Devices - http://hgpu.org/?p=16277
"Commodity video-gaming hardware (consoles, graphics cards, tablets, etc.) performance has been advancing at a rapid pace owing to strong consumer demand and stiff market competition. Gaming hardware devices are currently amongst the most powerful and cost-effective computational technologies available in quantity. In this article, we evaluate a sample of current generation video-gaming hardware devices for scientific computing and compare their performance with specialized supercomputing general purpose graphics processing units (GPGPUs). We use the OpenCL SHOC benchmark suite, which is a measure of the performance of compute hardware on various different scientific application kernels, and also a popular public distributed computing application, Einstein@Home in the field of gravitational physics for the purposes of this evaluation."
GPU-accelerated algorithms for many-particle continuous-time quantum walks - http://www.sciencedirect.com/science/article/pii/S0010465517300668
"On the other hand, the evolution of computer architectures towards multicore processors even in stand-alone workstations enabled important cuts of the execution time by introducing the possibility of running multiple threads in parallel and spreading the workload among cores. This possibility was boosted up by the general purpose parallel computing architectures of modern graphic cards (GPGPUs). In the latter, hundreds or thousands of computational cores in the same single chip are able to process simultaneously a very large number of data. It should also be noted that an impressive computational power is present not only in dedicated GPUs for high-performance computing, but also in commodity graphic cards, which make modern workstations suitable for numerical analyses. In order to exploit such a huge computational power, algorithms must be first redesigned and adapted to the SIMT (Single Instruction Multiple Thread) and SIMD (Single Instruction Multiple Data) paradigms and translated then into programming languages with hardware-specific subsets of instructions. Among them, one of the most diffuse is CUDA-C, a C extension for the Compute Unified Device Architecture (CUDA) that represents the core component of NVIDIA GPUs. As a matter of fact, the use of GPUs for scientific analysis, which dates back to mid and late 2000s [31]; [32]; [33]; [34] ; [35], dramatically boosted with a two-digit yearly increasing rate since 2010. Just looking at the computational physics realm, several GPU-specific algorithms have been proposed in the last three years, e.g., for stochastic differential equations [36], molecular dynamics simulations [37] ; [38], fluid dynamics [39] ; [40], Metropolis Monte Carlo [41] simulations, quantum Monte Carlo simulations [42], and free-energy calculations [43]."