The 15th ESA Workshop on Avionics, Data, Control and Software Systems (ADCSS) covers topics related to avionics for space applications, in the form of a set of round tables. The workshop acts as a forum for the presentation of position papers, followed by discussion and interaction between ESA and industry and between participants. Each theme of the ADCSS workshop will first be introduced and then expanded by presentations on related developments from technical and programmatic points of view. A round table discussion may follow, concluded by a synthesis outlining further actions and roadmaps for potential inclusion into ESA’s technology R&D plans.
All material presented at the workshop must, before submission, be cleared of any restrictions preventing it from being published on the ADCSS website.
Use of vision-based AI in critical embedded systems
Artificial Intelligence methods such as Deep Neural Networks are able to provide performance that surpasses that of more classical techniques. These improvements do not come for free: the difficulty with these techniques lies in the explainability of their results. For this reason, introducing them into critical embedded systems poses some challenges.
In this work, the result of an ESA project, we analyze two use cases where Deep Neural Networks are introduced into vision-based navigation systems and implement them on representative space avionics. We developed one model for each use case: the first aims at detecting and localizing craters during a descent and landing phase over the Moon, and the second at identifying previously selected patches of an unknown satellite in a live visual camera feed.
We present the work developed for each use case, the implementation on space-representative avionics, and the approach followed for verification and validation.
With the advantages and appealing performance of Artificial Intelligence (AI) in different applications, space scientists and engineers have shown great interest in AI-based solutions for space scenarios. However, unlike terrestrial applications, the decisions offered to space vehicles are critical and must be trustworthy in an uncontrolled and risky environment; the most significant challenge in the use of AI-based techniques for space missions is therefore acting sensibly in unanticipated and complex situations. We therefore study the Explainable Artificial Intelligence (XAI) techniques that are potentially applicable to software-based GNC scenarios, including relative navigation for spacecraft rendezvous, crater detection, and landing on asteroids or the Moon. Explainable tools related to the XAI algorithms are developed to make onboard intelligent techniques transparent, ensuring trustworthy decisions while meeting the level of performance required by space applications within an acceptable uncertainty bound. A comprehensive framework is proposed to address XAI-based space software for spacecraft GNC systems, including synthetic dataset generation, relative navigation scenario building, XAI model development, and software verification and testing.
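One model-agnostic family of XAI techniques applicable to vision-based GNC is occlusion sensitivity: systematically masking parts of an input image and measuring how the model score changes. The following is an illustrative sketch only, not the framework proposed above; the `toy_score` function is a hypothetical stand-in for a trained crater or target detector.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """Model-agnostic occlusion sensitivity: slide an occluding patch over
    the image and record how much the model's score drops. Large drops mark
    regions the model relies on for its decision."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            # Replace the patch with the image mean (a neutral value)
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy stand-in for a detector: score = brightness of a fixed region.
def toy_score(img):
    return img[16:24, 16:24].mean()

img = np.zeros((32, 32))
img[16:24, 16:24] = 1.0          # bright "feature" the toy model looks at
heat = occlusion_map(img, toy_score, patch=8)
```

As expected, the heat map peaks exactly over the region the toy model depends on; applied to a real network, the same procedure exposes which image regions drive a navigation decision.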
Today AI cannot be verified and validated for critical applications, as the ECSS requirements on software quality are written for deterministic control algorithms rather than for learning systems. Likewise, there is a multitude of potential use-cases for AI within the space domain, e.g. health monitoring, AOCS, VBN, image processing and more, but the qualification of such developments has to consider application-specific constraints.
An ECSS working group was initiated this year. It aims at creating guidelines on how machine learning (ML) models, e.g. neural networks, shall be qualified for on-board and on-ground use in space missions, for Category B/C/D software. Category A software is for the time being excluded.
The working group, its objectives and activities will be presented.
As spacecraft missions continue to increase in complexity, both system operations and the amount of gathered data demand more complex systems than ever before. Currently, mission capabilities are constrained by on-board processing capacity, depending on a high number of commands and complex ground station systems to allow spacecraft operations. Thus, computing capacity and increased autonomous capabilities are of the utmost importance. Artificial intelligence, especially in the form of machine learning, with its vast range of application scenarios, allows tackling these and further challenges in spacecraft design. Unfortunately, the execution of current machine learning algorithms consumes a high amount of power and memory resources, and qualification of correct deployment remains challenging, limiting their possible applications in space systems.
An increase in efficiency is therefore a major enabling factor for these technologies. Software-level optimization of machine learning algorithms and maturity of the required tool chain will be key to deploying such algorithms on current space hardware platforms. At the same time, hardware acceleration will allow broader applications of these technologies with a minimal increase in power consumption. Additionally, COTS embedded systems are becoming a valid alternative to space flight hardware, especially in NewSpace applications, representing a valid option for deploying such algorithms.
In this work, two different approaches to deploying machine learning algorithms on a Zynq UltraScale+ XCZU9EG-2FFVB1156 are presented. In the first approach, a CNN model is deployed with Xilinx’s Vitis AI tool; the result was evaluated based on relevant performance and efficiency parameters. In the second approach, the Accelerated Linear Algebra (XLA) tool from TensorFlow was used to deploy an MNIST model. The implementation of a tool chain to make XLA compatible with the target FPGA is described and the result is presented. Finally, benefits, drawbacks and future steps to automate and improve the entire workflow are presented.
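A step common to embedded deployment flows of this kind is post-training quantization of weights and activations to int8, which is what makes FPGA acceleration efficient. The sketch below shows the standard affine (scale/zero-point) quantization scheme in plain NumPy; it is illustrative only and not the actual Vitis AI or TFLite implementation.

```python
import numpy as np

def quantize_int8(x):
    """Affine (asymmetric) post-training quantization of a float tensor to
    int8. Returns the quantized tensor plus the (scale, zero_point) needed
    to dequantize; the representable range is forced to cover zero, as is
    conventional for int8 inference schemes."""
    qmin, qmax = -128, 127
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Quantize a random weight matrix and measure the round-trip error,
# which is bounded by roughly half the quantization step.
weights = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, s, z = quantize_int8(weights)
err = np.abs(dequantize(q, s, z) - weights).max()
```

The 4x memory reduction and integer arithmetic are what allow such models to fit the DSP and BRAM resources of a device like the XCZU9EG.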
In the context of Space Situational Awareness (SSA), on-board detection of space weather events – such as solar flares or Coronal Mass Ejections (CME) – can decrease the system latency for critical information reaching the end-users. Unfortunately, such detection requires complex on-board image analysis not commonly deployed on institutional missions, due to the lack of high-performance space-qualified processors and of available qualified software tools. However, in recent years there has been increasing use of on-board machine learning inference in NewSpace missions – such as the inclusion of an AI-accelerator processor on-board Phisat-1, allowing an increase in on-board autonomy.
In this work, we have developed and optimized two machine learning applications based on ANNs (Artificial Neural Networks) targeting space weather applications: on-board CME detection, and detection of radiation upsets in optical imagers for on-board radiation scrubbing. The CME detection algorithm was initially trained and provided by the Università di Torino, then retrained at ESTEC and adapted for candidate hardware targets.
In addition, we have evaluated several commercial flows for training, optimising and deploying ML models in embedded systems. Deep learning applications for space can be deployed on commercial off-the-shelf (COTS), radiation-hardened-by-design (RHBD) and radiation-tolerant (RT) devices, of which especially the latter two are important to consider for future institutional missions.
The presentation shows the deployment of the two applications on several targets, including several FPGAs from Xilinx using the Vitis AI development flow: Zynq 7000, Zynq UltraScale+ (ARM A53 processors), Kintex UltraScale and Versal AI. Furthermore, the Unibap ix5 computer was targeted: both its AMD CPU and GPU as well as its Myriad X were tested. The deployment to these processors has been done using TensorFlow Lite (TFLite). As part of an In-Orbit Demonstration (IOD), the model versions adapted for the ix5 computer are currently flying on-board the Wild Ride mission.
In addition, we have evaluated TensorFlow Lite Micro (TFLite Micro) for use on LEON4/SPARC processors. Several patches to the TFLite Micro code base were necessary. The code has been tested on the GR740 rad-hard processor – which is qualified for use on ESA institutional missions.
The presentation demonstrates the enhancements towards an optimized neural network and the deployment with different tool chains. It ends with an overview of the lessons learned during deployment and highlights important future steps – to allow deployment of deep learning on institutional missions.
The Wild Ride satellite mission currently in orbit has demonstrated the ability to orchestrate a set of 23 “applications” from a variety of partners, including “WorldFloods”, an ML payload developed by the Frontier Development Lab (FDL), a partnership led by UK-based Trillium Technologies with the University of Oxford and ESA’s Phi-lab.
This presentation will cover 3rd-party application management, development and deployment using SpaceCloud framework orchestration on heterogeneous hardware. The presentation will exemplify how applications are containerized using Docker and scheduled for execution.
Applications targeted for flight are validated prior to launch in a process where they are uploaded to a hosted Docker registry, pulled onto flight hardware and executed. The goals of allowing rapid software development and maximizing software reuse in the fields of Earth Observation and robotics contributed to the selection of the core CPU architecture and a Linux-based operating system.
On the Wild Ride mission, a full version of the geospatial software ENVI® and the IDL® language was included to give application developers a feature-rich existing library to leverage. In-flight application test results and hardware health data will be presented, together with an outlook on future missions.
This presentation provides an overview of the GNC subsystem of DLR’s Reusability Flight Experiment. It describes the system architecture, system design, and avionics. It focuses on the two main elements: the Guidance and Control (G&C) subsystem, which includes the functions for guidance, control, control allocation and management of the actuation systems; and the Hybrid Navigation System (HNS), which is responsible for estimating position, velocity, attitude, angular rates, and other parameters of the flight state. The requirements and boundary conditions set by the mission, the current design baseline for both subsystems, and some details about the guidance, navigation and control functions are described and discussed.
Developing flight dynamics techniques and algorithms to ensure a high level of autonomy for the next generation of space missions is one of the main objectives of the flight dynamics department at CNES. One example of these autonomous techniques is Autonomous Orbital Control (AOC), which consists of delegating to the onboard satellite system the identification, planning and realization of the orbital corrections needed to stay in the mission reference orbit. For several years, ASTERIA, a concept of on-board autonomy combining station keeping and collision risk management for low Earth orbit (LEO) satellites, has been developed by CNES. ASTERIA, an acronym for Autonomous Station-keeping Technology with Embedded collision RIsk Avoidance system, enables both in-track and cross-track control for different LEO missions. To ensure the accuracy and reactivity of the autonomous control, the solution developed by CNES integrates the complete management of the collision risk. Station-keeping and avoidance maneuvers are therefore closely linked by on-board management.
Managing the risk of collision requires the best possible knowledge of the future trajectory of the primary satellite and up-to-date information on secondary objects. It therefore requires a calculation process handling a large amount of data and the propagation of orbital states and covariances. That is why, so far, collision risk management has always been a ground segment activity. But this severely limits on-board autonomy by imposing a schedule and prior knowledge of the station-keeping maneuvers, which is not compatible with the reactivity needed by an Autonomous Orbit Control (AOC). On the contrary, management by the satellite itself offers interesting prospects for reactivity and increased autonomy, but requires carrying out a risk calculation on board and identifying the relevant risk assessments to implement effective avoidance solutions onboard.
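The core of such an on-board risk calculation is a short-term encounter collision probability: the combined position uncertainty, projected into the encounter plane, is integrated over a disc the size of the combined hard-body radius. The sketch below is a simplified numerical illustration of that computation, not the ASTERIA flight algorithm; the 100 m covariance and 50 m hard-body radius are illustrative assumptions.

```python
import numpy as np

def collision_probability(miss, cov, radius, n=400):
    """Short-term encounter collision probability: numerically integrate
    the combined 2D position uncertainty (a Gaussian with mean `miss` and
    covariance `cov`, expressed in the encounter plane) over a disc of
    radius `radius` centred on the secondary object."""
    xs = np.linspace(-radius, radius, n)
    dx = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius**2          # mask of the hard-body disc
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    dX, dY = X - miss[0], Y - miss[1]
    pdf = norm * np.exp(-0.5 * (inv[0, 0] * dX**2
                                + 2.0 * inv[0, 1] * dX * dY
                                + inv[1, 1] * dY**2))
    return float((pdf * inside).sum() * dx * dx)

cov = np.diag([100.0**2, 100.0**2])   # illustrative 100 m 1-sigma uncertainty
pc = collision_probability(np.array([0.0, 0.0]), cov, 50.0)
```

This grid integration is deliberately simple; operational implementations use analytical or semi-analytical formulations to keep the on-board computational load within budget, which is exactly the trade studied in the presentation.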
The presentation aims to show the comprehensiveness of the ASTERIA concept.
First of all, the principles of on-board collision risk management are described. Then, studies of the computational load, the accuracy of the risk estimate and the performance of risk avoidance strategies are presented. The operability of such a system is then demonstrated by introducing the associated operational management process. CNES took advantage of the new on-board architecture of the ESOC OPS-SAT 3-unit CubeSat demonstrator to test the feasibility and reliability of its AOC. This presentation will illustrate some results of this innovative experiment.
Finally, issues related to an autonomous system will be discussed, in particular those concerning space traffic management.
PILOT (Precise and Intelligent Landing using On-board Technologies) is a European system designed for precise and safe landings on the lunar surface, currently in its C/D1 development phase. The system is scheduled to be integrated on board the Russian Luna-Resource-1 lander (Luna-27 mission, scheduled for launch in 2025) as part of its functional avionics, with the main goals of improving the soft-landing performance and safety, and ultimately of validating the system in flight for future missions.
PILOT is composed of three hardware elements (Camera Optical Unit - COU, LIDAR, Landing Processing Unit - LPU) and two functions directly related to the landing phase (Vision-Based Navigation - VBN and Hazard Detection and Avoidance - HDA). From the perspective of avionics, PILOT has recently passed a series of maturity gates. The qualification programme of the COU was brought forward to secure a flight demonstration (PILOT-D) on board Luna-25 (due for May 2022), significantly derisking the PILOT development through a collection of lessons learned. With a view to performing the HDA function within the imposed programmatic constraints, a new LIDAR was selected for phase C, having a higher TRL and a more extensive flight heritage with rendez-vous missions, while also being sufficiently robust to be adapted to a Moon landing mission with minimum modifications and less programmatic risk. For the VBN function, a transition was made to a full-software solution, following thorough investigations of other potential solutions such as hardware/software partitioning using an FPGA. The full-software solution implements state-of-the-art image processing algorithms, tested in the scope of the GENEVIS activity. The LPU design has also evolved along the project, currently materialising as a multi-core architecture in the shape of the GR740 Quad-Core LEON4 processor from Gaisler. Having a multicore architecture has objective advantages, among which the possibility to run the VBN and HDA functions with the required real-time performance while maintaining adequate processing load margins. The VBN function will also be tested before flight using real images from the PILOT-D COU, a major step in technological maturity. Post-flight analyses will not only address the replay of the actual Luna-27 descent and landing GNC scenario; the Moon images acquired during flight will also serve the validation of visual navigation algorithm variants for future lunar exploration missions.
In terms of challenges, a number of areas have proved demanding. For the validation approach, two different strategies are combined: open-loop validation in Europe and closed-loop validation in Russia, adding to the overall complexity and imposing some constraints on the ESA-ROSCOSMOS cooperation. For the HDA function, some optimisations are still to be implemented, requiring the support of the PI and scientists, so as to confirm the specified performance over the Moon terrain at the South Pole. Given the original purpose of the project, i.e. reusability, a challenging task is to keep the development of the system affordable and robust, so that it can be used on other landing platforms/missions (e.g. EL3) with moderate adaptation effort. Lastly, the NewSpace paradigm has shifted the emphasis onto the commercial aspect, and space exploration is part of it. In this context, PILOT aims to play its part in setting the standards of robotic Moon exploration by putting ESA turn-key automatic landing solutions on the commercial market, seizing opportunities broadly in the international community.
Space Rider is a reusable spacecraft platform for multiple commercial and institutional applications (microgravity, IOV/IOD, science, robotics), able to perform in-orbit payload operations, de-orbit, re-enter, land on ground and be relaunched after limited refurbishment.
The main challenges of the programme are reusability and re-flyability, as well as the commercialisation of a novel service.
The GNC function manages in a fully autonomous sequence the coasting, entry and TAEM (Terminal Area Energy Management) phases with elevon and thruster control, a descent phase with parachute, as well as the final approach & soft-touchdown landing phases with parafoil control.
The presentation is about a re-usable high-reliability computer for autonomous missions (formerly known as PNEXT and nowadays as ADPMS3).
ADPMS stands for Advanced Data and Power Management System, and is actually a family of products. Besides the high-reliability computer (that I will present today), it also features a Mass Memory Module, a Power Conditioning Unit, a Remote Terminal Unit, and whatever the future will bring us. Thanks to its variety of interfaces, ADPMS is also compatible with a lot of third-party products, making it a great solution for many missions. During the presentation, we will focus only on the ADPMS OBC.
It all started in 2001 as a CompactPCI system and has had a very successful portfolio ever since. It is the main on-board computer for the Proba-2 and Proba-V missions and will serve as the main OBC in Proba-3 as well. Even though some of these missions have long exceeded their design lifetime, the ADPMS2 OBC has now accumulated over 15 years of flawless operation and is nowadays easily recognized as being TRL9. Unfortunately, even though the design aged quite well, some parts became obsolete, so it was time to update it to ADPMS3. The drastic change in mechanics and electronics resulted in an ADPMS3 OBC that is more powerful and more affordable to manufacture, while maintaining the heritage from the previous design.
The hot-redundant ADPMS3 OBC (used in our IBDM) is currently in its qualification campaign under the direct supervision of ESA in the frame of the IBDM project, and is currently at TRL7. The cold-redundant version of the OBC, along with the other ADPMS3 PCDU and RTU products, is due to complete its qualification campaign under the supervision of ESA later this year.
The avionics system is a key element in launcher development, ensuring the heterogeneous services of GNC, telemetry, power and safeguard. As such, dedicated effort is engaged in FLPP to mature avionics technology with the objective of fulfilling the numerous challenges dictated by the evolving market and institutional needs for future launchers. These challenges include drastic cost reduction (including AIT and operations), versatility (to comply with new services or growth potential) and launcher reusability.
ESOC executed its first collision avoidance manoeuvre in 1997, when ERS-1 had a close approach with a defunct Kosmos spacecraft.
Since then, the process for collision risk management and, eventually, avoidance manoeuvre execution has evolved to the point where it has become routine for spacecraft in Low Earth Orbit. The number of escalated events has increased significantly in recent years, following several in-orbit breakups and the upcoming mega-constellations, and handling them requires considerable effort since the process is still largely manual and involves different teams with different expertise.
The presentation describes the process as it is today in ESOC, including also some statistics and a few remarkable events.
Artificial Intelligence methods show fantastic results in specific applications. However, Deep Learning approaches still suffer from the black-box phenomenon: it remains impossible to understand the way a model for a given task works. This is unproblematic in a large range of applications, but in safety-critical environments this changes drastically: if human lives are involved, properties like predictiveness, provability and robustness can become highly important and then represent a competing objective during the engineering phase.
In this talk I will present some examples from our work at Fraunhofer FKIE, department of sensor data and information fusion, where we have been working on perception, data analytics, and sensor-based situational awareness in both civil and military applications. I will discuss examples from the fields of autonomous driving and of surveillance and reconnaissance. Moreover, approaches from the AI certification community are presented to check for robustness and to make the engineering process testable.
GENEVIS presents full-software, Vision-Based Navigation (VBN) solutions for space rendez-vous and landing. Emphasis is put on the genericity of the algorithms, where the image processing techniques used to retrieve vehicle localization information are applicable to a wide range of environments. The proposed hybrid navigation filter, which fuses inertial and (terrain or target-spacecraft) relative measurements, reconfigures automatically and ensures that the best navigation estimate is always provided for any sensor association selected as input.
Results on two representative scenarios are obtained: the close-range approach of a cooperative rendez-vous and a lunar landing. PIL and HIL tests are performed to further validate the solution for the landing scenario. Thanks to this successful demonstration, the solution has been pushed to PILOT, a European navigation solution on-board the Russian Luna-27 mission.
Extensive analysis and testing was performed to check that the selected processor (LEON4) was adequate in terms of memory and data processing for the Space RTTB (Real-Time Test Bed), using simulated images. Additionally, a Flight RTTB was developed and embarked on a helicopter as a flying platform for processing real images in a landing scenario. Enough margin was found, and additional algorithms could be incorporated with further potential parallelization and synchronization of the different tasks. An extension of the original activity explored different HW architectures to push further the genericity of the disruptive VBN landing strategy.
The increased complexity of computer-vision algorithms in on-board space applications, together with the data fusion of measurements acquired from various on-board instruments and sensors (e.g. LIDAR and hyperspectral imagers), mandates the development and use of high-performance avionics providing one to two orders of magnitude faster execution than today's conventional space-grade processors, and even than those marketed as “NewSpace” COTS or rad-tolerant ones. With this in mind, GMV provides Vision-Based Navigation solutions from concept to implementation, including SW and HW developments. The HW development includes co-processor avionics based on FPGAs for HW acceleration of the most demanding algorithmic parts, normally devoted to the computer-vision modules. GMV is developing different HW co-processor versions in the form of an Image Processing Unit board (IPU / IPB), a combination of an IPU and Interface Controller Units, or even a combination of a navigation camera and IPU in one unique embedded enclosure. A key goal of these solutions is the performance-per-watt ratio: reducing the processing units’ power consumption while maximizing the processing performance capabilities.
One example is the HERA IPU. GMV is in charge of developing the GNC subsystem of the HERA mission, currently going through Phase C. The HERA Image Processing Unit provides isolation between the image processing function and the interfaces function, as it relies on a two-FPGA architecture. The design and development of the computer-vision algorithms for the HERA IPU are facilitated by the architectural design of the processing FPGA code, which provides an internal interfacing wrapper to integrate the required image processing module satisfying a simple client-consumer interface. The HERA IPU also includes pre-processing functions for the images received from the navigation camera and allows in-flight FPGA reprogramming to accommodate the different image processing algorithms that may be required for different mission phases. The two FPGAs included in the HERA IPU allow flexibility, scalability and many options for the design and implementation of complex functionalities, such as high-data-rate interface management and hardware accelerators. Different computer-vision accelerators which are not used at the same time can be used during the mission by replacing bitstreams in the processing FPGA in-flight, saving a potentially needed second FPGA unit.
In addition to the HERA IPU, a parallel IPU development named the GMVision board offers a highly versatile, space-oriented, fully redundant Image Processing Unit (IPU), with SpaceWire interfaces to a redundant Narrow-Angle Camera and a redundant Wide-Angle Camera. The main electronics are fully European rad-hard components, such as the BRAVE NG-MEDIUM and NG-LARGE FPGAs or ArcPower DC-DC converters. This technology development programme board is being validated for descent & landing scenarios, for rendezvous operations and for long-range detection. The project includes the FPGA code development of 4 different image processing techniques used in the presented navigation scenarios. A similar concept is presented for Multispectral Rendezvous Navigation, including a HW/SW solution combining a LEON4 processor and NG-LARGE FPGA co-processors.
GMV’s IPU, IPCU and ICU avionics equipment, with direct applicability to the HERA mission, can easily be re-used or adapted for other space missions such as ADR, space exploration, satellite servicing and debris tracking/monitoring, or even for scientific purposes.
The aim of this presentation is to describe the foreseen avionic solution for the Vision Based Navigation (VBN) functions of the European Large Logistic Lander (EL3).
The VBN functions of EL3 require computationally intensive algorithms, which results in the need for appropriate supporting avionic hardware. In particular, the need for a quad-core processor and for an additional FPGA for the image processing tasks of absolute Crater Navigation (CNav), Terrain Relative Navigation (TRN) and Hazard Detection and Avoidance (HDA) has been identified.
The platforms that have been considered for the EL3 landing GNC offer multi-core processing based on ARM processors. A trade-off has been performed considering the performance of the available OBC alternatives, the FPGA used, and the availability of a development or breadboard environment suitable for Processor-In-the-Loop (PIL) testing. The RUAG Lynx single-board computer has been selected. It offers a modern, state-of-the-art, fully space-qualified processing platform based on the ARM Cortex-A72. The Cortex-A72 is a quad-core 64-bit processor based on the ARMv8-A architecture that offers sufficient performance for the most demanding phase of the landing.
The VBN functions of EL3 are accompanied by the additional GNC functions which perform further tasks such as Mission Vehicle Management (MVM), guidance, navigation filter, and control. These functions have in general a lower processing performance demand than the VBN functions. Therefore, a segregation of the functions between the main OBC and the RUAG Lynx is adopted in EL3.
The latest advances of Thales Alenia Space on the integration and validation of vision-based navigation will be presented, based on its lead of the recent H2020 projects I3DS and EROSS. The avionics architecture of the proposed solution will be presented, along with the chosen split between the On-Board Computer (OBC) functions related to platform control and the Instrument/Robotic Control Unit (I/RCU) functions related to the specific sensors used for rendezvous and robotic operations. The current progress on the RCU maturation, from these R&D projects to a space-grade board, will be introduced on both the hardware and software sides. Eventually, the EROSS project outcomes will be presented, with the porting of two different image processing solutions onto this RCU, and with the validation level reached through the closed-loop tests performed with the RCU and camera hardware in the loop.
The ClearSpace-1 mission will be the first attempt in history to remove a piece of debris from space. Funded in the framework of ESA’s Active Debris Removal/In-Orbit Servicing (ADRIOS) project, the mission aims at capturing and deorbiting Vespa, a rocket upper stage left in an approximately 800 km by 660 km altitude orbit after the second flight of ESA’s Vega launcher back in 2013.
Spaceborne rendezvous is known to be a risky and challenging task, and dealing with a truly non-cooperative target makes this mission even more so. Unlike classical in-orbit rendezvous and docking activities, which can rely on some degree of cooperation from the target, such as a stabilized attitude, exchange of information, or special markers to ease navigation, the ClearSpace-1 mission intends to capture an object that will not ease the task in any way.
Therefore, it is necessary to design robust Guidance, Navigation and Control (GNC) and capture strategies able to cope with an unknown target state, which can only be analyzed more precisely during the rendezvous in orbit. In addition, limited ground visibility in low-Earth orbit and the desire to limit operational costs require a high level of onboard autonomy, creating additional challenges in terms of autonomous system integrity monitoring.
After a quick overview of the mission, the presentation describes in more detail the specific GNC challenges associated with this mission and the resulting high-level requirements imposed on the system architecture. Overall, a proper balance needs to be struck between paving the way for a commercial mission, and thus seeking cost-effective solutions with low recurring costs, and ensuring safety during critical phases, which is usually achieved through costly redundant and segregated architectures. Furthermore, measuring the state of a non-cooperative object makes it necessary to embark innovative sensors and advanced technology with limited technological maturity, creating additional constraints in terms of onboard resources and large data-handling capabilities for ground-based commissioning and analyses.
In February 2020 the first GEO satellite-servicing mission began with the successful docking of Northrop Grumman's (NG) Mission Extension Vehicle MEV-1 with Intelsat IS-901. The automated docking was guided by a set of vision systems for which Jena-Optronik GmbH provided the 3D LIDAR system, a visual camera system with six optical heads, and two star sensors. Jena-Optronik's 3D LIDAR, the RVS®3000-3D, benefits from the legacy of 48 delivered RVS® sensors and their successor, the RVS®3000. All RVS® units flew flawlessly to the International Space Station (ISS) on board ATV, Cygnus and HTV spacecraft, and the RVS®3000 reached TRL9 through several flights to the ISS on NG Cygnus vehicles from 2019 onward. While RVS® and RVS®3000 are dedicated to tracking and identifying retro-reflector-equipped objects on flights to the ISS, the RVS®3000-3D enables docking without retro-reflectors, as demonstrated in the frame of the MEV mission. To this end, powerful algorithms and hardware for real-time 6 Degree-of-Freedom (6DOF) pose estimation of an uncooperative satellite in space have been implemented. Here we present and discuss the performance of the RVS®3000-3D using in-orbit data from MEV, focusing on lessons learned regarding the integration of LIDARs into GNC/avionics processing chains.
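The 6DOF pose estimation mentioned above is, at its core, a point-cloud registration problem. As a purely illustrative sketch (not the RVS®3000-3D's actual algorithm), the rigid transform between a target model and matched LIDAR points can be recovered in closed form with the Kabsch/SVD method, which also underlies each iteration of ICP-style trackers; all names and data below are hypothetical:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t, for matched 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # guard against reflections (det = +1)
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate/translate a small "model" cloud and recover the pose.
rng = np.random.default_rng(1)
model = rng.normal(size=(20, 3))               # hypothetical target model points
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                         # keep a proper rotation
t_true = np.array([0.5, -1.0, 2.0])
scan = model @ R_true.T + t_true               # what a range sensor would observe
R_est, t_est = kabsch(model, scan)
```

A real LIDAR pipeline additionally has to establish the point correspondences and reject outliers; the closed-form step above is only the alignment core.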
The Lunar Gateway is under development with major contributions by ESA and European companies, Thales Alenia Space in particular. It is the first space installation that will deploy a new federated and modular approach to avionics and data-handling based on the International Avionics System Interoperability Standards (IASIS).
Thales Alenia Space and partners will build and integrate the avionics of the International Habitat which will be added to the Gateway from 2024 onward. Building blocks for this safety-critical system are currently being developed and qualified in an ESA GSTP activity by TTTech and RUAG Space (switch and network interface card).
In this presentation we explain the overall architecture and explore how the elements being developed in Europe will be combined with elements coming from the US and Japan to build, for the first time, real distributed integrated modular avionics (DIMA) in space; the current qualification status will also be briefly described.
The Advanced Payload Processors (APPs) payload was developed targeting in-orbit demonstration (IOD) on the GOMX-5 mission. GOMX-5 is a flight demonstration for next-generation cubesat missions, which will demand advanced attitude control, large processing capabilities, and high-throughput data exchange between the space and ground segments. APPs was jointly conceived by Cobham Gaisler, GMV, CBK, and UFSC to demonstrate multiple processing technologies developed within ESA activities. APPs is a 1U-sized payload with five stacked boards: two GNSS boards from GMV/CBK, which process GNSS signals to provide an on-board PVT solution and transfer raw samples to the BRAVE and GR7XX boards; a BRAVE board from UFSC, using NanoXplore's NG-Large FPGA for space-radiation experiments and in-orbit FPGA reconfiguration; and two high-performance, fault-tolerant microprocessor boards from Cobham Gaisler, using the GR740 and GR716 LEON SPARC V8 processors. The combination of processors and reconfigurable logic in APPs allows for multiple IOD experiments.
The use of Artificial Intelligence (AI) is of rising interest for both ground- and space-segment applications, despite the associated challenges, such as the lack of dedicated environments for testing and experimentation and the limited number of technology demonstrators on board flying spacecraft.
The advantages of AI-capable spacecraft have been convincingly demonstrated in past missions, where autonomous on-board decision-making has supported ground operators in ways that significantly reduced operational costs and effort while increasing operations and science uptime. Targeted on-board AI applications have succeeded in automating scheduling and planning, classifying acquired payload data, and detecting events of scientific interest. At the same time, ground-based AI applications will allow an unprecedented level of monitoring, simulation and optimised automated planning in view of the next-generation constellation missions of hundreds of spacecraft. This presentation will cover a wide range of AI applications and use cases for both the space and ground segments, focusing on the redistribution of processes and functions between ground and space and a system-level overview of future AI-enabled missions.
Following the successful development of microprocessors using the SPARC open Instruction Set Architecture (ISA) conducted during the past 25 years, this presentation will introduce the plans and efforts undertaken by ESA and related programmes to introduce the emerging RISC-V open ISA to space.
The Failure Detection, Isolation and Recovery (FDIR) subsystem is a critical function on board all spacecraft, since it is vital for ensuring the safety, autonomy and availability of the system during the mission lifetime. Together with hard-coded software and hardware protection mechanisms, the majority of modern satellites also implement a PUS-based FDIR design. The latter mechanism uses parameter and functional monitoring based on dedicated unit-level TM, and is used both to determine the correct functioning of individual units and, through subsystem-level monitoring, to ensure the correct functioning of mode-specific tasks (e.g. for the AOCS subsystem). The limitations of PUS-based FDIR are linked to the limited number and types of checks that can be performed on the parameters and to the implementation itself: to work, the anomalies and their signatures need to be known at service definition, so that the necessary parameters can be made observable and the monitoring checks linked to them properly configured.
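To make this limitation concrete, the following minimal sketch (hypothetical class and parameter names, not an actual PUS implementation) shows the kind of fixed limit check with a confirmation count that PUS-style parameter monitoring relies on; every limit and confirmation count must be defined when the monitoring is configured, which is why unanticipated anomaly signatures escape it:

```python
class LimitCheck:
    """Fixed low/high limit on one TM parameter, with a confirmation count:
    an out-of-limit event is raised only after N consecutive violations."""

    def __init__(self, low, high, confirmations):
        self.low, self.high = low, high
        self.confirmations = confirmations
        self._count = 0

    def sample(self, value):
        """Feed one TM sample; return True when a confirmed event is raised."""
        if self.low <= value <= self.high:
            self._count = 0                    # back in limits: reset filter
            return False
        self._count += 1
        return self._count >= self.confirmations

# Hypothetical temperature channel drifting out of its configured limits.
check = LimitCheck(low=-10.0, high=50.0, confirmations=3)
readings = [20.0, 55.0, 61.0, 63.0]
events = [check.sample(v) for v in readings]   # event raised on the 3rd violation
```

Anything that stays inside the configured limits, however anomalous its pattern, is invisible to such a check; this is the gap the ML-based approaches below aim to close.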
The use of machine-learning and/or deep-learning algorithms can significantly enhance the performance of on-board FDIR, especially in identifying and isolating failures at the lowest possible level (equipment level), thus fostering mission availability and autonomy. Indeed, artificial-intelligence algorithms can identify non-nominal on-board behaviour without classical limits, based instead on past telemetry signatures and trends (e.g. orbital conditions, telemetry patterns) or on interrelationships between telemetry parameters. An implementation of AI algorithms purely in software, integrated into the on-board software of a classical LEON-based On-Board Computer (OBC), is feasible, but the complexity of the model is limited by the processor (i.e. AI-based FDIR would increase CPU load). For this reason, such a solution would not scale well for an AI algorithm examining hundreds, potentially thousands, of TM/TC parameters.
This work deals with the development of a fully-fledged Anomaly Detection and Anomaly Prognosis (ADAP) system implemented as a hardware unit, either in the programmable logic of a SoC-based OBC or in the FPGA co-processor of a classical OBC. The Anomaly Detection Module takes advantage of anomaly-detection algorithm(s) in a purely unsupervised manner. Hence, without the need to know a priori all potential failure modes, ML-based FDIR can capture anomalous behaviours or failures even when only small symptoms are present, identifying and isolating failures at the lowest possible level (equipment level) and thus fostering mission availability and autonomy. The Anomaly Prognosis Module is instead trained on historical telemetry data to cover the other use case for on-board FDIR applications: identifying specific anomaly signature(s) that relate to a (specific) observed failure (for example, due to design errors) and applying a targeted recovery action that would otherwise require a complex software patch.
The ADAP system workflow presented in this study combines the most common machine-learning workflow found in the literature with the classical workflow for developing a satellite failure-tolerance technique, driven by the specified targets for mission reliability, availability, maintainability and operational autonomy. The objective is to present a solution that does not jeopardize the mission whatever the failure, but is also sufficiently general to be deployed to satellite constellations without mission-specific tailoring.
The introduction of ML-based technology on board the spacecraft is expected to significantly increase early anomaly detection and prediction capabilities, benefiting FDIR functions and on-board autonomy as well as enabling a new concept of space-segment operations, with great benefits for overall mission efficiency, especially in scenarios with very large fleets of satellites. In this process, the analysis of user needs and the definition of the prerequisites and design drivers related to the FDIR strategy and the operations that leverage the on-board ML-based models are key, taking into account the design and constraints of space-qualified hardware components. Indeed, the implementation constraints imposed by low-resource space-qualified devices, such as an OBC based on the LEON4 processor, will require adaptations to reduce memory usage and processing needs.
Advanced machine-learning/deep-learning models are a promising technology in the space domain and can be suitable for on-board satellite failure prognosis and detection through telemetry-data processing. This category of solutions is particularly suited to handling the complex correlations hidden inside the data. Preliminary architectures and learning strategies based on autoencoder neural networks are proposed for multivariate data-processing approaches, analyzed and assessed against more traditional approaches.
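As a hedged illustration of the detection principle (a minimal sketch, not the ADAP implementation), the reconstruction-error logic behind autoencoder-based anomaly detection can be shown with the optimal linear autoencoder, obtained in closed form from an SVD of nominal telemetry; a sample is flagged when its reconstruction error exceeds a threshold learned from nominal data alone. All variable names and data below are hypothetical:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Rank-k linear autoencoder fitted in closed form: the encoder projects onto
    the top-k principal directions of the nominal data, the decoder is its
    transpose (the optimal linear autoencoder under squared error)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                          # W: k x d encoding matrix

def reconstruction_error(x, mu, W):
    z = W @ (x - mu)                           # encode
    x_hat = mu + W.T @ z                       # decode
    return float(np.linalg.norm(x - x_hat))

# Nominal telemetry: two latent drivers observed through six correlated channels.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(500, 6))

mu, W = fit_linear_autoencoder(X, k=2)
errs = [reconstruction_error(x, mu, W) for x in X]
threshold = np.mean(errs) + 3.0 * np.std(errs)     # learned from nominal data only

# A step fault on one channel breaks the learned cross-channel correlations.
faulty = X[0] + np.array([5.0, 0, 0, 0, 0, 0])
err_anom = reconstruction_error(faulty, mu, W)
```

A deep autoencoder replaces the linear encode/decode pair with nonlinear networks, but the workflow is the same: train on nominal telemetry, calibrate a threshold on nominal reconstruction errors, and flag samples whose error exceeds it, with no anomaly labels required.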