25–27 Feb 2019
European Space Research and Technology Centre (ESTEC)
Europe/Amsterdam timezone

On-board FPGA-based Deep Neural Networks processing unit

26 Feb 2019, 11:10
20m
Erasmus (European Space Research and Technology Centre (ESTEC))

Oral presentation: Deep Learning in On-Board Systems

Speaker

Mr Krzysztof Czyz (FP Space)

Description

Deep learning is an enabling technology for many applications such as image processing, pattern recognition, object classification and even autonomous spacecraft operations. There is a price to be paid, however: these methods are computationally intensive and require supercomputing resources, which can be challenging to provide on board a spacecraft. FPGA-accelerated computing is becoming a key technology for building low-cost, power-efficient supercomputing systems that accelerate deep learning, analytics and engineering applications.

The objective of this presentation is to present a dedicated FPGA-based Deep Neural Networks (DNNs) processing unit. The unit is built on top of the NVIDIA Deep Learning Accelerator (NVDLA), a standardized, open-source deep learning acceleration architecture. The NVDLA architecture provides interoperability with the majority of modern deep learning networks and frameworks, including TensorFlow. The unit gains its performance advantage from the parallel execution of a large number of operations, such as convolutions, activations and normalizations, which are typical of DNN structures. The NVDLA was implemented in a Xilinx Zynq UltraScale+ MPSoC FPGA, providing a significant improvement in performance and power consumption compared to non-accelerated processing of DNNs.

The main limiting factor for using an unmodified NVDLA in space applications is its lack of fault tolerance; therefore, architecture modifications providing fault detection and triple modular redundancy are proposed. The implementation details and system-on-chip features will be summarized, and the DNN accelerator's efficiency in terms of performance and power consumption will be discussed during the presentation.
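
To make the accelerated workload concrete, the sketch below is a minimal, illustrative TensorFlow/Keras block (not the authors' code) built only from the operation types the abstract lists as NVDLA-accelerated: convolutions, normalizations and activations. The input size, layer widths and ten-class output are arbitrary placeholder assumptions, and deploying such a network to an NVDLA-based unit would additionally require the accelerator's own compilation toolchain, which is not shown here.

# Illustrative sketch only: a small network made of the operation types
# (convolution, normalization, activation) that an NVDLA-style engine
# executes in parallel. All shapes and sizes are placeholder assumptions.
import tensorflow as tf

def conv_block(x, filters):
    # Convolution, batch normalization and ReLU activation: the per-layer
    # operations typical of DNN structures mentioned in the abstract.
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

inputs = tf.keras.Input(shape=(224, 224, 3))   # e.g. a camera frame (assumed size)
x = conv_block(inputs, 32)
x = tf.keras.layers.MaxPooling2D()(x)
x = conv_block(x, 64)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # hypothetical 10 object classes

model = tf.keras.Model(inputs, outputs)
model.summary()

A model expressed this way stays within standard TensorFlow operators, which is the kind of framework interoperability the NVDLA architecture is stated to provide.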

Paper submission: No

Primary authors

Mateusz Maciag (FP Space), Jacek Lach (FP Space), Marcin Kurczalski (FP Space), Jakub Nalepa (FP Space), Mr Krzysztof Czyz (FP Space)

Presentation materials