25–27 Feb 2019
European Space Research and Technology Centre (ESTEC)
Europe/Amsterdam timezone

HW-RECONFIGURABLE PROCESSING AVIONICS FOR SPACE VISION-BASED NAVIGATION

25 Feb 2019, 16:50
20m
Erasmus (European Space Research and Technology Centre (ESTEC))

ESTEC (European Space Research & Technology Centre) Keplerlaan 1 2201 AZ Noordwijk The Netherlands Tel: +31 (0)71 565 6565
Oral presentation
On-Board Processing Algorithms
On-Board Data Processing Systems and Architectures

Speaker

David Gonzalez-Arjona (GMV Aerospace and Defence)

Description

Autonomous vision-based navigation in a landing spacecraft involves very demanding processing tasks: the algorithms must execute at a higher frequency than a space-qualified processor can provide, forcing their implementation in ad-hoc HW-accelerated solutions. HW/SW co-design is employed to use an FPGA as a HW accelerator for the most computationally demanding tasks, which are mostly the parallelizable computer-vision parts. The navigation systems of on-going sample-return missions to asteroids or planets, as well as the navigation, localization and mapping algorithms of future space robotic vehicles (rovers), can take advantage of these HW/SW implementations in System-on-Chip devices that embed FPGA fabric and processor in a single chip. However, there is a lack of flexibility in the re-use of the HW and SW across different mission phases that require different computer-vision and navigation solutions. FPGAs are used in space as a substitute for unaffordable ASIC development, but this forgoes one of their main advantages: HW reconfiguration, i.e. the ability to instantiate and interchange different bitstream configurations of the FPGA logic at different moments. Within the ENABLE-S3 European Commission project, we propose and evaluate a cost-efficient reconfigurable instrument that provides multiple vision-based co-processing solutions within a single mission, depending on the distance to the target and the phase of the mission. We evaluated Xilinx Zynq SoC and Zynq UltraScale+ MPSoC devices to host the implementation and reconfiguration of three different computer-vision algorithms that cannot fit together in a single device. In the current exploration and landing mission architecture designs, at least three FPGAs would be needed to host these three implementations.
GMV collaborates with UPM to show the results on an avionics architecture that allows reconfiguring a single FPGA device, simplifying the architecture, reducing mass and power budgets, and providing one product that can be used over the different phases of the mission. The scenario is a spacecraft lander carrying a rover to be deployed on the surface of a planet. During the navigation phase towards the celestial body, in close-range operations, the lander uses the camera installed in the rover for descent and landing, with an absolute-navigation image-processing implementation interchanged by fast HW reconfiguration with a relative-navigation image-processing implementation in the FPGA. Once the probe is on the ground, the FPGA is reconfigured once again for surface operations, hosting a stereo-vision disparity solution based on semi-global matching (SGM) in the same FPGA. Safety-critical operations require high-speed FPGA reconfiguration; the re-programming time during which the FPGA is not operative is therefore a critical factor and one of the performance parameters presented in the project. The ARTICO3 architecture provides the reconfigurable framework on the Zynq devices, allowing smooth interchange of the vision-based navigation modules at high frequency. The complete reconfigurable system is managed at SW level by a single task running on the embedded ARM processors of the Zynq boards, integrated into the RTEMS real-time operating system. For the second-year review of the ENABLE-S3 project we created a demonstrator of this solution, interfaced in closed loop with the Matlab/Simulink-based GMV-DL-Simulator, in order to evaluate the HW-reconfigurable vision-based navigation performance in an emulated Phobos descent and landing scenario, including kinematics and models of the Phobos environment and the GMV implementation of the Phooprint GNC autocoded onto the embedded ARM processors.
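The phase-dependent reconfiguration described above can be pictured as a simple lookup from mission phase to the partial bitstream that must be loaded next. The sketch below, in C, illustrates only this selection step; the names (phases, bitstream files, `bitstream_for_phase`) are hypothetical and do not correspond to the actual ARTICo3 or project APIs.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of mission-phase-driven selection of FPGA partial
 * bitstreams. All identifiers and file names are illustrative only. */

typedef enum {
    PHASE_ABSOLUTE_NAV,  /* far-range descent: absolute navigation IP core  */
    PHASE_RELATIVE_NAV,  /* close-range landing: relative navigation IP core */
    PHASE_SURFACE_OPS    /* on ground: SGM stereo-disparity IP core          */
} mission_phase_t;

/* Map each mission phase to the partial bitstream it requires. */
static const char *bitstream_for_phase(mission_phase_t phase)
{
    switch (phase) {
    case PHASE_ABSOLUTE_NAV: return "absnav_partial.bit";
    case PHASE_RELATIVE_NAV: return "relnav_partial.bit";
    case PHASE_SURFACE_OPS:  return "sgm_stereo_partial.bit";
    }
    return NULL; /* unreachable for valid phases */
}
```

In the real system, the single supervisory task running under RTEMS on the ARM processors would perform such a selection and then trigger the FPGA reconfiguration through the ARTICo3 runtime; the time spent in that reload, during which the FPGA is not operative, is the critical performance figure the abstract highlights.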

Summary

The navigation systems of on-going sample-return missions to asteroids or planets, as well as the navigation, localization and mapping algorithms of future space robotic vehicles (rovers), can take advantage of these HW/SW implementations in System-on-Chip devices that embed FPGA fabric and processor in a single chip. However, there is a lack of flexibility in the re-use of the HW and SW across different mission phases that require different computer-vision and navigation solutions. FPGAs are used in space as a substitute for unaffordable ASIC development, but this forgoes one of their main advantages: HW reconfiguration, i.e. the ability to instantiate and interchange different bitstream configurations of the FPGA logic at different moments. Within the ENABLE-S3 European Commission project, we propose and evaluate a cost-efficient reconfigurable instrument that provides multiple vision-based co-processing solutions within a single mission, depending on the distance to the target and the phase of the mission. We evaluated Xilinx Zynq SoC and Zynq UltraScale+ MPSoC devices to host the implementation and reconfiguration of three different computer-vision algorithms that cannot fit together in a single device. In the current exploration and landing mission architecture designs, at least three FPGAs would be needed to host these three implementations.

Paper submission Yes

Primary authors

David Gonzalez-Arjona (GMV Aerospace and Defence)
Mr Alvaro Jimenez-Peralo (GMV Aerospace and Defence)
Mr Paul Bajanaru (GMV Innovating Solutions)

Co-authors

Mr Arturo Perez (Universidad Politecnica de Madrid)
Mr Alfonso Rodriguez (Universidad Politecnica de Madrid)
Mr Ruben Domingo (GMV Aerospace and Defence)
Mr Antonio Pastor (GMV Aerospace and Defence)
Mr Miguel Angel Verdugo (GMV Aerospace and Defence)
Mr Andres Otero (Universidad Politecnica de Madrid)
Mr Eduardo de la Torre (Universidad Politecnica de Madrid)

Presentation materials