Development methods and deployment of machine learning model inference for two Space Weather on-board analysis applications on several embedded systems

17 Nov 2021, 11:40
20m
Let's Get Digital (Virtual)

Let's Get Digital

Virtual

Speakers

Mr Hugo Marques (ESA/ESTEC), Ms Kyra Foerster (ESA/ESTEC), Mr Malte Bargholz (ESA/ESTEC), Dr Maris Tali (ESA/ESTEC)

Description

As spacecraft missions continue to increase in complexity, both system operations and the volume of gathered data demand more capable systems than ever before. Currently, mission capabilities are constrained by on-board processing capacity, and spacecraft operations depend on a high number of commands and complex ground station systems. Thus, increased computing capacity and autonomous capabilities are of the utmost importance. Artificial intelligence, especially in the form of machine learning, with its vast range of application scenarios, allows these and other challenges in spacecraft design to be tackled. Unfortunately, current machine learning algorithms consume a large amount of power and memory resources, and qualification of correct deployment remains challenging, limiting their possible applications in space systems.
An increase in efficiency is therefore a major enabling factor for these technologies. Software-level optimization of machine learning algorithms and maturity of the required tool chain will be key to deploying such algorithms on current space hardware platforms. At the same time, hardware acceleration will allow for broader applications of these technologies with a minimal increase in power consumption. Additionally, COTS embedded systems are becoming a valid alternative to space flight hardware, especially in NewSpace applications, making them a viable option for deploying such algorithms.
In this work, two different approaches to deploying machine learning algorithms on a Zynq UltraScale+ XCZU9EG-2FFVB1156 are presented. In the first approach, a CNN model is deployed with Xilinx's Vitis AI tool; the result was evaluated based on relevant performance and efficiency parameters. In the second approach, the Accelerated Linear Algebra (XLA) tool from TensorFlow was used to deploy an MNIST model. The implementation of a tool chain to make XLA compatible with the target FPGA is described and the result is presented. Finally, benefits, drawbacks, and future steps to automate and improve the entire workflow are presented.
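As a minimal illustration of the XLA route described above (not the talk's actual FPGA tool chain, which is not detailed here), the sketch below shows how an MNIST-sized classifier can be compiled with XLA in TensorFlow 2.x via `tf.function(jit_compile=True)`; the layer sizes are illustrative placeholders.

```python
# Hedged sketch: XLA compilation of a small MNIST-sized classifier in
# TensorFlow 2.x. The architecture here is a placeholder, not the CNN
# model from the talk, and JIT compilation stands in for the AOT
# FPGA-targeting flow the authors describe.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),  # 28x28 MNIST input
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),                      # 10 digit classes
])

# jit_compile=True asks TensorFlow to compile the traced graph with XLA,
# fusing operations into larger kernels instead of dispatching many
# small ops individually.
@tf.function(jit_compile=True)
def infer(x):
    return model(x)

logits = infer(np.zeros((1, 28, 28), dtype=np.float32))
print(logits.shape)
```

For ahead-of-time deployment to non-CPU/GPU targets, XLA also exposes an AOT path (`tfcompile`) that emits a standalone executable graph; adapting such output to an FPGA back end is the kind of tool-chain work the abstract refers to.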

Presentation materials