Tuesday, September 13, 2016

FPGA Bibliography

Survey of Domain-Specific Languages for FPGA Computing - http://hgpu.org/?p=16212

"High-performance FPGA programming has typically been the exclusive domain of a small band of specialized hardware developers. They are capable of reasoning about implementation concerns at the register-transfer level (RTL) which is analogous to assembly-level programming in software. Sometimes these developers are required to push further down to manage even lower levels of abstraction closer to physical aspects of the design such as detailed layout to meet critical design constraints. In contrast, software programmers have long since moved away from textual assembly-level programming towards relying on graphical integrated development environments (IDEs), highlevel compilers, smart static analysis tools and runtime systems that optimize, manage and assist the program development tasks. Domain-specific languages (DSLs) can bridge this productivity gap by providing higher levels of abstraction in environments close to the domain of application expert. DSLs carefully limit the set of programming constructs to minimize programmer mistakes while also enabling a rich set of domain-specific optimizations and program transformations. With a large number of DSLs to choose from, an inexperienced FPGA user may be confused about how to select an appropriate one for the intended domain. In this paper, we review a combination of legacy and state-ofthe-art DSLs available for FPGA development and provide a taxonomy and classification to guide selection and correct use of the framework."

A Survey of FPGA Based Neural Network Accelerator - https://hgpu.org/?p=17900

"Recent research on neural networks has shown great advantages over traditional algorithms based on hand-crafted features and models in computer vision. Neural networks are now widely adopted in areas such as image, speech and video recognition. But the high computation and storage complexity of neural network based algorithms poses great difficulty for their application. CPU platforms can hardly offer enough computation capacity. GPU platforms are the first choice for neural network processing because of their high computation capacity and easy-to-use development frameworks. On the other hand, FPGA-based neural network accelerators are becoming a research topic, because specifically designed hardware is the next possible solution to surpass GPUs in speed and energy efficiency. Various FPGA-based accelerator designs have been proposed with software and hardware optimization techniques to achieve high speed and energy efficiency. In this paper, we give an overview of previous work on FPGA-based neural network accelerators and summarize the main techniques used. An investigation from software to hardware, from circuit level to system level, is carried out to give a complete analysis of FPGA-based neural network accelerator design and to serve as a guide for future work."
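
One representative software-level optimization that FPGA accelerator surveys of this kind discuss is quantization: replacing floating-point weights with narrow fixed-point values so they fit the FPGA's DSP and on-chip memory resources. The sketch below shows a simple symmetric 8-bit quantization in Scala; the object name, function name and scaling scheme are assumptions for illustration, not code from the paper.

    // Minimal sketch of symmetric linear quantization to signed 8-bit values,
    // one of the software-side optimizations commonly applied before mapping
    // a network onto FPGA fixed-point arithmetic. Names are illustrative.
    object QuantizeSketch {
      def quantize(weights: Array[Float], bits: Int = 8): (Array[Byte], Float) = {
        val maxAbs = weights.map(w => math.abs(w)).max
        val scale  = maxAbs / ((1 << (bits - 1)) - 1) // e.g. maxAbs / 127 for 8 bits
        val q = weights.map(w => math.round(w / scale).toByte)
        (q, scale) // quantized weights plus the scale needed to dequantize
      }

      def main(args: Array[String]): Unit = {
        val (q, scale) = quantize(Array(0.5f, -1.2f, 0.03f))
        println(q.mkString(", ") + s"  scale=$scale")
      }
    }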
