25–27 Mar 2025
European Space Research and Technology Centre (ESTEC)
Europe/Amsterdam timezone

Extending the FPG-AI Framework for Automatic DNN Acceleration on NanoXplore FPGAs

25 Mar 2025, 15:15
35m
Einstein (European Space Research & Technology Centre)

Postbus 299, 2200 AG Noordwijk (The Netherlands)
Poster session

Speaker

Pietro Nannipieri (University of Pisa)

Description

In recent years, the space community has shown a growing interest in the use of AI onboard satellites. FPGAs have emerged as competitive accelerators for these algorithms in the harsh space environment, and methods for automating their design have gained significant attention. Among the available frameworks, FPG-AI stands out as a promising solution for facilitating the deployment of AI in satellite systems.
The framework combines model compression strategies with a fully handcrafted, HDL-based accelerator that places no limits on device portability, enabling the implementation of CNNs and RNNs on components from different vendors and with diverse resource budgets. On top of that, an automation process merges the two design spaces into an end-to-end, ready-to-use tool while retaining a high degree of customization with respect to the user's constraints on resource consumption or latency. The HDL is fully handcrafted and human-readable, avoiding third-party IPs and thereby enhancing the explainability and reliability of the architecture.
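To make the constraint-driven automation step more concrete, the short Python sketch below illustrates how a flow of this kind might select an accelerator configuration from a set of candidates under a user-supplied resource and latency budget. All class names, fields, and numbers are illustrative assumptions for this sketch; they do not correspond to the actual FPG-AI API or to the results reported below.

# Hypothetical sketch of a constraint-driven configuration step, loosely
# modelled on the flow described above. Names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class DeviceBudget:
    name: str                 # target FPGA, e.g. "NG-ULTRA"
    functional_elements: int  # total logic resources available
    max_utilization: float    # user constraint, fraction of FEs to use

@dataclass
class AcceleratorCandidate:
    parallelism: int          # number of MAC units operating in parallel
    functional_elements: int  # estimated logic cost of this configuration
    latency_ms: float         # estimated inference time

def pick_configuration(candidates, budget, latency_budget_ms):
    """Return the fastest candidate that respects both the resource and the
    latency constraint, or None if no candidate is feasible."""
    feasible = [
        c for c in candidates
        if c.functional_elements <= budget.functional_elements * budget.max_utilization
        and c.latency_ms <= latency_budget_ms
    ]
    return min(feasible, key=lambda c: c.latency_ms) if feasible else None

if __name__ == "__main__":
    # Purely illustrative numbers; they do not reproduce the reported results.
    device = DeviceBudget("NG-ULTRA", functional_elements=500_000, max_utilization=0.07)
    candidates = [
        AcceleratorCandidate(parallelism=1, functional_elements=12_000, latency_ms=0.9),
        AcceleratorCandidate(parallelism=2, functional_elements=22_000, latency_ms=0.6),
        AcceleratorCandidate(parallelism=4, functional_elements=40_000, latency_ms=0.4),
    ]
    print(pick_configuration(candidates, device, latency_budget_ms=0.75))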
This session describes the extension of the FPG-AI framework to NanoXplore radiation-hardened FPGAs, a strategic European solution for space systems. We reengineered the FPG-AI flow to ensure compatibility with NanoXplore technology and successfully deployed the LeNet-5 neural network on the NG-ULTRA FPGA, demonstrating the device's capability to support AI hardware accelerators. The implemented prototype achieved an inference time ranging from 0.393 ms to 0.720 ms, utilizing at most 7% of the available Functional Elements.
Exploiting FPG-AI's versatility, we also implemented the LeNet-5 model on devices from other vendors (AMD Xilinx, Microchip) to compare the performance of the different technologies on AI workloads.
Finally, we provide a comparison of FPG-AI with state-of-the-art High-Level Synthesis (HLS) frameworks for AI acceleration on the NG-ULTRA device to evaluate the trade-offs of the different design flows.

Affiliation of author(s)

Tommaso Bocchi, Tommaso Pacini, Luca Zulberti, Pietro Nannipieri, Luca Fanucci: Department of Information Engineering, University of Pisa, Pisa, Italy

Silvia Moranti: European Space Research and Technology Centre, European Space Agency (ESA), Noordwijk, The Netherlands

Track

Artificial Intelligence/Machine Learning

Primary authors

Prof. Luca Fanucci (University of Pisa), Luca Zulberti (University of Pisa), Pietro Nannipieri (University of Pisa), Tommaso Bocchi (University of Pisa), Tommaso Pacini (University of Pisa), Silvia Moranti (ESA)
