Speaker
Description
Radiation-hardened-by-design (RHBD) reconfigurable devices have attracted considerable attention thanks to their favorable trade-off between cost and performance. While limited performance restricted their use only a few years ago, these devices can now implement a wide range of applications requiring high computational capability. However, to further enhance computing capability and enable the effective implementation of Vision-Based Navigation (VBN) algorithms, a dedicated hardware (HW) accelerator able to process multi-dimensional arrays (tensors) is needed. The Tensor Processing Unit (TPU) is an architecture customized for image processing algorithms and machine learning. It can perform massive numbers of multiplications and additions at high speed with limited design area and power consumption. Several design strategies have investigated the efficient implementation of TPUs on FPGA architectures, either by improving the pipelining and resource sharing of the TPU processing elements (PEs) or by unifying the tensor computation kernel. In this work, we present the first results achieved with an implementation of a TPU architecture on the NG-Medium Radiation-Hardened FPGAs manufactured by NanoXplore.
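For context, the multiply-accumulate (MAC) dataflow that a TPU-style array of PEs implements can be sketched as follows. This is a minimal, illustrative Python reference only; the function name and weight-stationary organization are assumptions for exposition, not the design presented in this talk.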
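```python
# Illustrative sketch (not the presented design): a weight-stationary grid of
# multiply-accumulate (MAC) processing elements, the kind of structure a
# TPU-style accelerator maps onto FPGA fabric.

def systolic_matmul(A, B):
    """Compute C = A x B by streaming rows of A past PEs that each hold
    one stationary weight of B. Pure-Python reference, small sizes assumed."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    assert len(B) == k
    C = [[0] * m for _ in range(n)]
    for row in range(n):            # activation rows stream into the array
        for i in range(k):          # operand A[row][i] enters PE row i
            a = A[row][i]
            for j in range(m):      # PE (i, j) holds weight B[i][j]
                C[row][j] += a * B[i][j]   # the MAC performed by PE (i, j)
    return C

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(systolic_matmul(A, B))    # [[19, 22], [43, 50]]
```
In hardware, the three nested loops unroll into a grid of PEs working in parallel, each performing one MAC per cycle; the software loop here only serializes that same arithmetic for illustration.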