Wednesday, August 17, 2016

Theano

"Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It can use GPUs and perform efficient symbolic differentiation."
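To give a feel for the define/evaluate/differentiate workflow that Theano automates, here is a toy sketch in plain Python. This is not Theano's API, just a minimal illustration of building a symbolic expression graph, evaluating it, and taking a symbolic derivative:

```python
# Toy sketch of the symbolic workflow Theano automates: build an
# expression graph, evaluate it, and differentiate it symbolically.
# NOT Theano's API -- a hypothetical, minimal illustration only.

def evaluate(expr, env):
    """Evaluate a tree of ('const', c), ('var', name), ('add', a, b), ('mul', a, b)."""
    op = expr[0]
    if op == 'const':
        return expr[1]
    if op == 'var':
        return env[expr[1]]
    a, b = evaluate(expr[1], env), evaluate(expr[2], env)
    return a + b if op == 'add' else a * b

def grad(expr, wrt):
    """Symbolic derivative of expr with respect to the variable named wrt."""
    op = expr[0]
    if op == 'const':
        return ('const', 0)
    if op == 'var':
        return ('const', 1 if expr[1] == wrt else 0)
    da, db = grad(expr[1], wrt), grad(expr[2], wrt)
    if op == 'add':
        return ('add', da, db)
    # product rule: (a*b)' = a'*b + a*b'
    return ('add', ('mul', da, expr[2]), ('mul', expr[1], db))

# y = x*x + 3
x = ('var', 'x')
y = ('add', ('mul', x, x), ('const', 3))
print(evaluate(y, {'x': 2}))             # 7
print(evaluate(grad(y, 'x'), {'x': 2}))  # dy/dx = 2x -> 4
```

In real Theano the same idea is expressed with `theano.tensor` variables, `theano.function` for compilation (including GPU code generation), and `theano.grad` for differentiation.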

http://www.deeplearning.net/software/theano/ 

https://github.com/Theano/Theano

http://on-demand.gputechconf.com/gtc/2016/presentation/s6845-frederic-bastien-theano-python-library.pdf 

http://hgpu.org/?p=15859

https://github.com/rsennrich/nematus 

https://github.com/nouiz/Theano-Docker 

Synkhronos: a Multi-GPU Theano Extension for Data Parallelism - https://hgpu.org/?p=17617

We present Synkhronos, an extension to Theano for multi-GPU computations leveraging data parallelism. Our framework provides automated execution and synchronization across devices, allowing users to continue to write serial programs without risk of race conditions. The NVIDIA Collective Communication Library is used for high-bandwidth inter-GPU communication. Further enhancements to the Theano function interface include input slicing (with aggregation) and input indexing, which perform common data-parallel computation patterns efficiently. One example use case is synchronous SGD, which has recently been shown to scale well for a growing set of deep learning problems. When training ResNet-50, we achieve a near-linear speedup of 7.5x on an NVIDIA DGX-1 using 8 GPUs, relative to Theano-only code running on a single GPU in isolation. Yet Synkhronos remains general to any data-parallel computation programmable in Theano. By implementing parallelism at the level of individual Theano functions, our framework uniquely addresses a niche between manual multi-device programming and prescribed multi-GPU training routines.
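The data-parallel pattern the abstract describes (slice a batch across devices, compute per-slice gradients, aggregate, then apply one synchronized update) can be sketched in plain Python. This is a hedged illustration, not Synkhronos code: the "workers" here are simulated sequentially, and the averaging step stands in for the NCCL all-reduce:

```python
# Sketch of synchronous data-parallel SGD for a 1-D linear model y ~ w*x.
# Each "worker" (standing in for a GPU) gets one slice of the batch,
# computes a local gradient, and the results are averaged -- the role
# NCCL's all-reduce plays in Synkhronos. All names here are illustrative.

def gradient(w, xs, ys):
    """Mean-squared-error gradient on one data slice."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def slice_batch(xs, ys, n_workers):
    """Split a batch into contiguous per-worker slices (input slicing)."""
    k = (len(xs) + n_workers - 1) // n_workers
    return [(xs[i:i + k], ys[i:i + k]) for i in range(0, len(xs), k)]

def synchronous_sgd_step(w, xs, ys, n_workers=4, lr=0.05):
    # each worker computes the gradient on its own slice...
    grads = [gradient(w, sx, sy) for sx, sy in slice_batch(xs, ys, n_workers)]
    # ...then the gradients are averaged (the aggregation step)
    g = sum(grads) / len(grads)
    # and every worker applies the same synchronized update
    return w - lr * g

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # data generated with true w = 2
w = 0.0
for _ in range(50):
    w = synchronous_sgd_step(w, xs, ys)
print(round(w, 3))  # converges to 2.0
```

Because every worker sees the averaged gradient before updating, the result matches serial full-batch SGD, which is why users can "continue to write serial programs without risk of race conditions."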
