"This paper presents the design and implementation of low-level library to compute general sums and products over multi-dimensional arrays (tensors). Using only 3 low-level functions, the API at once generalizes core BLAS1-3 as well as eliminates the need for most tensor transpositions. De- spite their relatively low operation count, we show that these transposition steps can become performance limiting in typical use cases for BLAS on tensors. The execution of the present API achieves peak performance on the same order of magnitude as for vendor-optimized GEMM by utilizing a code generator to output CUDA source code for all computational kernels. The outline for these kernels is a multi-dimensional generalization of the MAGMA BLAS matrix multiplication on GPUs. Separate transpositions steps can be skipped because every kernel allows arbitrary multi- dimensional transpositions of the arguments. The library,including its methodology and programming techniques, are made available in SLACK."
https://github.com/frobnitzem/slack
Efficient Primitives for Standard Tensor Linear Algebra - https://dl.acm.org/citation.cfm?id=2949580
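The central idea, that one fused sum-over-products kernel subsumes both GEMM and the separate transposition passes that normally precede it, can be sketched with numpy.einsum. This is an illustration only, not SLACK's actual interface; the shapes and index labels below are arbitrary assumptions.

import numpy as np

A = np.random.rand(4, 5, 6)   # tensor A[i,j,k]
B = np.random.rand(6, 5, 7)   # tensor B[k,j,l]

# Two-step BLAS route: permute and flatten each operand so the
# contraction over j and k becomes an ordinary matrix multiply.
# The transpose materializes an extra copy of A in memory.
A2 = A.transpose(0, 2, 1).reshape(4, 6 * 5)   # A[i,(k,j)]
B2 = B.reshape(6 * 5, 7)                      # B[(k,j),l]
C_twostep = A2 @ B2                           # C[i,l]

# Fused route: the index permutation is folded into the kernel's
# index arithmetic, so no transposed intermediate is created.
C_fused = np.einsum('ijk,kjl->il', A, B)

assert np.allclose(C_twostep, C_fused)

The explicit transposition in the two-step route is memory-bound and carries almost no flops, which is exactly why such steps can dominate runtime in practice; folding the permutation into the contraction kernel removes those passes entirely.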