The 14th ESA Workshop on Avionics, Data, Control and Software Systems (ADCSS) covers topics related to avionics for space applications, in the form of a set of round tables. The workshop acts as a forum for the presentation of position papers followed by discussion and interaction between ESA and Industry and between participants. Each theme of the ADCSS workshop is first introduced and then expanded through presentations on related developments from technical and programmatic points of view. A round-table discussion may follow, concluded by a synthesis outlining further actions and roadmaps for potential inclusion in ESA’s technology R&D plans.
All material presented at the workshop must, before submission, be cleared of any restrictions preventing it from being published on the ADCSS website.
Your moment to shape future ISVV.
Your time to influence ECSS standards.
Recent advances in machine learning and the availability of massive computational power have attracted many researchers to apply learning techniques to control problems in space systems.
Although the results reported in the literature so far appear promising, we ask ourselves how to develop a machine learning framework that handles the specificities of space systems. To be relevant in general, and for space systems in particular, the new framework needs to be equipped with tools relying on a formal mathematical foundation for the realisation of guaranteed and predictable outcomes. Designed in this way, its mechanisation will lead to a series of tool sets that provide understandable, predictable, scalable and reliable products.
To cope with space systems in a changing and uncertain environment, the controls machine learning framework tool set shall contain at least formal tools for dynamical systems modelling (first principles, off-line and online system identification), for system perception (sensing, estimation, sensor fusion, prediction) and for system action (real-time planning, prediction, real-time decision making, real-time control actuation).
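As a toy illustration of the off-line system identification mentioned above (the plant, numbers and function names here are invented for illustration, not part of any proposed tool set), a first-order model x[t+1] = a·x[t] + b·u[t] can be fitted to logged input/state data by solving the 2×2 least-squares normal equations:

```python
import random

def simulate(a, b, steps, seed=0):
    """Generate input/state logs from a known first-order plant."""
    rng = random.Random(seed)
    x, xs, us = 1.0, [], []
    for _ in range(steps):
        u = rng.uniform(-1, 1)
        xs.append(x)
        us.append(u)
        x = a * x + b * u
    xs.append(x)
    return xs, us

def identify(xs, us):
    """Least-squares fit of x[t+1] = a*x[t] + b*u[t] via normal equations."""
    sxx = sum(x * x for x in xs[:-1])
    suu = sum(u * u for u in us)
    sxu = sum(x * u for x, u in zip(xs[:-1], us))
    sxy = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
    suy = sum(u * y for u, y in zip(us, xs[1:]))
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

xs, us = simulate(a=0.9, b=0.5, steps=200)
a_hat, b_hat = identify(xs, us)
```

On noise-free logs the true parameters are recovered exactly (up to floating point); with sensor noise the same estimator returns the least-squares optimum, which is where the formal guarantees discussed above become relevant.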
We will look at parallels between machine learning and control from a numerical optimisation perspective. A simple control problem will illustrate the performance and robustness of a machine learning design and its impacts.
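The optimisation parallel can be made concrete with a deliberately simple sketch (the plant, cost and all numbers are invented for illustration): a proportional feedback gain is tuned by finite-difference gradient descent on a quadratic cost, exactly as a machine learning model is fitted to a loss:

```python
def cost(k, a=0.5, b=1.0, dt=0.1, r=0.1, x0=1.0, steps=50):
    """Quadratic cost of a P-controller u = -k*x on the plant x' = a*x + b*u."""
    x, j = x0, 0.0
    for _ in range(steps):
        u = -k * x
        j += x * x + r * u * u
        x += dt * (a * x + b * u)
    return j

# "Training loop": finite-difference gradient descent on the gain k,
# treating the closed-loop cost exactly like a machine learning loss.
k, lr, eps = 0.1, 1e-3, 1e-4
j0 = cost(k)
for _ in range(300):
    grad = (cost(k + eps) - cost(k - eps)) / (2 * eps)
    k -= lr * grad
j_final = cost(k)
```

The loop needs no analytic model of the cost surface, only evaluations of it, which is the same model-free flavour that many learning methods exploit; the flip side, robustness of the resulting gain, is what the talk examines.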
We conclude with some promising research directions that need special attention to meet future needs. What are the requirements for building machine-learning-assisted space systems that provide a better understanding of system dynamics operating in a complex and constrained physical environment?
What can be achieved in modelling beyond what is known today, to overcome the limitations of current approaches? We ask ourselves the same questions for the perception and action problems. Furthermore, what are the downstream impacts on the controls development process?
Typically, what validation and verification challenges, and what software and implementation impacts, does the technology bring with it?
The scalability of the technology towards avionics architectures and their computational infrastructure will also need to be addressed in its own right.
Recently, reinforcement learning (RL) methods have been used to solve a wide range of complex control problems. One reason for the increased interest in applying RL methods to control problems is their ability to generate control laws solely through interaction with the plant. The control law can be trained directly through interaction with the real-world system or by using a simulation model. Training the controller in simulation is a promising approach because it is fast, scalable and safe, and it has shown positive results. RL methods are therefore increasingly applied to motion control of aerial, marine and ground vehicles.
In this talk we want to give insights into the application of model-free RL to control problems. We will showcase the RL design process through its application to the path-following problem of an over-actuated robotic vehicle. Moreover, we will discuss potential benefits as well as unsolved problems when applying RL to control problems.
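As an illustration of learning a control law purely from interaction (a toy example, unrelated to the actual vehicle in the talk; states, rewards and hyperparameters are invented), tabular Q-learning can drive a one-dimensional "vehicle" to a goal state without any model of the plant:

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4          # states 0..4, goal at the right end
ACTIONS = (-1, +1)             # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic plant: move, clip at the walls, reward reaching the goal."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (10.0 if s2 == GOAL else -1.0), s2 == GOAL

alpha, gamma, eps_greedy = 0.5, 0.9, 0.2
for _ in range(500):                       # training episodes
    s, done = 0, False
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < eps_greedy
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

# Greedy rollout with the learned Q-table: the "control law".
s, path = 0, [0]
while s != GOAL and len(path) < 10:
    s, _, _ = step(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
    path.append(s)
```

The learned greedy policy goes straight to the goal. The unsolved problems mentioned in the abstract appear as soon as the plant is continuous, partially observable or safety-critical, where a table no longer suffices.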
In this talk, we will show how machine learning and automatic control enrich each other.
To do so, we will review some of the classical questions in automatic control and show how the usual control techniques can be improved using scientific computation and AI.
Big data alone is not sufficient, but it can help. Examples from fluid dynamics and from navigation will be given.
Vice versa, we will show how automatic control paradigms can be fruitful in solving some open issues in AI.
Through a long history of rigorous techniques and processes, the field of spacecraft Guidance, Navigation and Control (GNC) has evolved over the years to safely and robustly meet the challenging GNC needs of rather complex missions (e.g. high pointing accuracy, agile manoeuvring). In particular, the field of Robust Control has achieved this in complex, uncertain and partially observable dynamic environments, allowing the GNC community to tackle stability and performance robustness formally, from specification to V&V.
Despite the great advances of the last decades, there are still important limitations for the system control discipline to solve. As a primary example, Robust Control often requires strong assumptions, such as linearity and time-invariance, which rarely apply in the real world. Furthermore, the characterisation of system uncertainty, for both the “known unknowns” and the “unknown unknowns”, in practice leads to rather conservative designs at the expense of performance. Finally, the most widespread control paradigms do not offer on-line adaptation, missing the chance to exploit real-world information to improve the GNC solution once it is deployed.
Besides, more modern and complex control techniques offer partial solutions to the conservatism and/or adaptivity needs. In particular, techniques such as µ-Synthesis, LPV, MPC, Fuzzy Logic or Extended Kalman Filters were born to address those issues. However, the reality is that these techniques are still to be fully embraced by industry, as they are subject to other constraints such as model availability, formulation complexity, convergence guarantees, on-board computational cost and/or scalability.
Finally, in a broader sense, GNC techniques and solutions rely heavily on models. However, GNC solutions are sometimes required to work under physical phenomena of a highly (or even mostly) uncertain nature, such as micro-gravity slosh, poorly characterised environmental effects or unforeseen hardware behaviour, for which today’s GNC approach is significantly limited.
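To make the model dependence concrete, here is a deliberately minimal receding-horizon (MPC-style) sketch, one of the techniques named above: the controller re-plans at every step by exhaustively searching input sequences against an internal model, so its performance is only as good as that model (the plant, horizon and numbers are invented for illustration):

```python
import itertools

def model(x, u, dt=1.0):
    """Internal prediction model: a simple integrator."""
    return x + dt * u

def mpc_step(x, horizon=5, candidates=(-1.0, 0.0, 1.0)):
    """Apply the first input of the best sequence over the horizon."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        xp, cost = x, 0.0
        for u in seq:                 # roll the internal model forward
            xp = model(xp, u)
            cost += xp * xp + 0.01 * u * u
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: drive the state from 5 toward the origin.
x = 5.0
for _ in range(20):
    x = model(x, mpc_step(x))   # here plant == model; any mismatch degrades this
```

When the real plant differs from `model` (slosh, uncharacterised environment effects), the predicted cost is wrong and the closed loop degrades, which is exactly the limitation the abstract points at.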
The application of AI techniques to both GNC solutions and processes promises to enhance the way real/realistic data is exploited in GNC system decision making, offering potentially cheaper and smarter solutions. In particular, promising aspects of AI for GNC are:
· The relaxation of system hypotheses/assumptions and the ability to take non-linearity and time variance into account at design and implementation phases through a rather generic framework (or at least one less specialised than Robust Control frameworks)
· The on-line adaptation capability of certain branches of AI and/or their potential against uncertain physical phenomena
· The data fusion capacity/semantic abstraction power to efficiently handle extremely large inputs (e.g. visual information) for complex perception, navigation, guidance or high level decision making tasks
· Their efficiency as universal approximators in applications requiring real-time optimisation
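The last point, the use of neural networks as universal approximators, can be illustrated with a minimal sketch (a tiny hand-rolled network, not a flight-representative design; the target function and hyperparameters are invented): a one-hidden-layer tanh network trained by gradient descent to approximate a non-linear map, as might stand in for an expensive on-line optimisation:

```python
import math
import random

random.seed(1)
H = 8                                            # hidden units
w = [random.uniform(-1, 1) for _ in range(H)]    # input weights
b = [random.uniform(-1, 1) for _ in range(H)]    # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]    # output weights
c = 0.0                                          # output bias

xs = [i / 5 - 3.0 for i in range(31)]            # samples on [-3, 3]
ys = [math.sin(x) for x in xs]                   # target non-linear map

def predict(x):
    return c + sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(H))

def mse():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss0 = mse()
lr = 0.05
for _ in range(2000):                # full-batch gradient descent (backprop)
    gw, gb, gv, gc = [0.0] * H, [0.0] * H, [0.0] * H, 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w[j] * x + b[j]) for j in range(H)]
        e = 2 * (c + sum(v[j] * h[j] for j in range(H)) - y) / len(xs)
        gc += e
        for j in range(H):
            gv[j] += e * h[j]
            gw[j] += e * v[j] * (1 - h[j] ** 2) * x
            gb[j] += e * v[j] * (1 - h[j] ** 2)
    c -= lr * gc
    for j in range(H):
        v[j] -= lr * gv[j]; w[j] -= lr * gw[j]; b[j] -= lr * gb[j]
loss_final = mse()
```

Once trained, `predict` costs a handful of multiply-accumulates per call, regardless of how expensive the original map was to evaluate, which is what makes the approach attractive for real-time use; the verifiability of such approximations is the open question raised next.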
However, for AI techniques to succeed in these GNC applications, they will need to provide sufficiently generic, safe, robust, efficient and verifiable solutions. Today, only a limited subset of AI techniques is of this nature, and thus both fields will have to evolve.
This presentation discusses the place of AI within GNC solutions and processes from an industrial perspective, covering use cases across the full range of complexity, from quick wins on hybrid AI systems all the way to on-line Reinforcement Learning systems. These can make GNC solutions cheaper, smarter and more scalable. We discuss the results and potential shown on some initial use cases and set out some preliminary conclusions. Finally, the presentation addresses some of the most important gaps and challenges that industry will face to embrace and reap the full potential of AI, not only within the GNC system but more broadly across the Functional Avionics chain.
Based on research and the first results obtained with machine / deep learning in several domains such as automotive, UAVs or the internet, the idea of implementing AI algorithms in GNC products has recently emerged.
The presentation will give THALES ALENIA SPACE perspectives on:
• The major current issues GNC engineers are facing, and which of them AI algorithms may or may not tackle
• The possible support from machine / deep learning algorithms
• The challenges of integrating such algorithms
All GNC domains will be addressed: Navigation / Guidance / Control / GNC FDIR / GNC Verification and Validation.
The next generation launcher shall follow and anticipate market evolution with a wider spectrum of missions, which means more versatility for the GNC. The GNC shall also enable mastering vertical or horizontal landing for stage reusability. Furthermore, the use of SMART upper stages and reusable liquid engines opens new perspectives for launcher mission extension; this implies modular use, adaptation, failure detection, reconfiguration and on-board planning. It also opens the door to new on-board strategies with less ground intervention.
All this requires autonomous GNC, covering the ability to succeed despite complexity, and the combination of knowledge-based and learning capacities.
The purpose of this presentation is to depict the context, the challenges and the opportunities of using artificial intelligence (AI) and machine learning theory for launcher GNC. It will give our vision of what has already been achieved and what remains to be done.
Autonomy is no longer a dream. Each step will be a gain, keeping in mind that we still have to answer a lot of questions: “What can we gain?”, “How can we trust AI?”, or “What remains to be developed by all of us?”
Airbus, with the DDMS transformation programme, is reshaping its product development from engineering to manufacturing. Product Line Strategy and Implementation is a key pillar of this transformation. This keynote presents the vision and strategy of product line development through Product Line Engineering (PLE) and some recent examples of its real-life application, focused on the use of PLE in combination with MBSE.
With the objective of rationalising its platform product lines between France and Italy, Thales Alenia Space has initiated the development of M.I.L.A, its European platform product line. This approach is fully in line with the Standard Platform initiative from ESA, also called “Common PF”. The new product line encompasses Copernicus, but also other institutional and commercial markets.
To master the double-source concept and the large variability between application missions, a specific approach has been put in place at MILA product-line level to manage the variability and scalability of the sub-systems.
Variability and scalability can be seen as:
• A function or equipment may be implemented or not,
• A function or equipment may be implemented with different sources,
• A function or equipment may be implemented with different performances,
• A function or equipment may be implemented with different sizing.
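A minimal sketch of how such variability points might be represented and checked in a tooled approach (the feature names, suppliers and rules below are invented for illustration; the actual MILA tooling is not shown here):

```python
# Each variability point records: which sources may implement the function,
# which performance/sizing classes exist, and whether it is optional.
CATALOGUE = {
    "star_tracker": {"sources": {"supplier_A", "supplier_B"},
                     "performance": {"standard", "high"},
                     "optional": False},
    "propulsion":   {"sources": {"supplier_C"},
                     "performance": {"standard"},
                     "optional": True},
}

def instantiate(selection):
    """Validate a mission-specific variant against the product-line catalogue."""
    variant = {}
    for feature, spec in CATALOGUE.items():
        choice = selection.get(feature)
        if choice is None:
            if not spec["optional"]:
                raise ValueError(f"mandatory feature missing: {feature}")
            continue                      # optional feature left out
        src, perf = choice
        if src not in spec["sources"] or perf not in spec["performance"]:
            raise ValueError(f"invalid choice for {feature}: {choice}")
        variant[feature] = choice
    return variant

ok = instantiate({"star_tracker": ("supplier_A", "high")})
```

Keeping the variability model (the catalogue) separate from any one instantiated specification is the essence of the orthogonal-variability style named in the approach below; generic and instantiated specifications then correspond to the catalogue and to each validated variant.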
The presentation will focus on the selected approach:
• Tooled Product Line Engineering approach (Orthogonal Variability),
• Generic and instantiated specifications,
• Capitalization of the verification.
STR SW centralization, originally used for a LEO constellation, is being generalized to most spacecraft applications.
The SW integration benefits from the increased CPU power of the platform computer.
The presentation will provide the design concept, industrial impacts, major lessons learned and key features for an STR SW standardization.
The ADHA concept arose from a study performed by two consortia led by TAS and RUAG/ADS as primes. The concept and the resulting backplane selection, i.e. cPCI Serial Space, adopted by 32 stakeholders, map well to several trends shaping the development of future highly integrated spacecraft CDHS, allowing the execution of multiple software applications sharing a common set of hardware modules.
It is proposed to have a session at ADCSS 2020 dedicated to ADHA. The two consortia will present the results of the study, the possible future family of ADHA products and, finally, the space applications targeted with these products. This session will also be a global forum giving participants the possibility to exchange views about ADHA and the perspectives it opens.
The convergence of artificial intelligence (AI) and embedded systems (ES) in terrestrial applications is creating a revolution in the domains of computer science, communications and information technology, with diverse engineering applications.
For space applications, a large number of new applications enabled by artificial intelligence and machine learning are today being investigated for future spacecraft.
It is proposed to have at ADCSS 2020 a specific session on on-board AI/ML applications for launchers and satellites (platforms and instruments), addressing their implementation in spacecraft computer and data handling systems (CDHS). Some examples of practical implementations and techniques using AI/ML on spacecraft CDHS will be presented. This session will also be a global forum giving participants the possibility to exchange and share knowledge about current and future research axes in this domain.
This historic event has been achieved onboard Φ-sat-1, the European Space Agency’s (ESA) Artificial Intelligence (AI) demonstration cubesat that was launched on a Vega rocket on September 3rd. Initial data downlinked from the satellite today has shown that the AI-powered automatic cloud detection algorithm has correctly sorted hyperspectral Earth Observation (EO) imagery from the satellite’s sensor into cloudy and non-cloudy data. Φ-sat-1 is part of an ambitious and ground-breaking programme, funded by ESA, for the demonstration and validation of state-of-the-art Deep Learning technology applied in-orbit for autonomously processing Earth Observation data. Today’s successful application of the Ubotica CVAI™ Artificial Intelligence technology, developed with ESA GSTP, which is powered by the Intel Movidius Myriad 2 Vision Processing Unit, has demonstrated real on-board data processing autonomy, laying a foundation stone for the path to advanced Deep Learning applied to satellite data at source. Decision making on-board Φ-sat-1, rather than on the ground, has been shown to enable pre-filtering of EO data so that only relevant images with usable information are downlinked to the ground, thereby improving bandwidth utilisation and significantly reducing aggregated downlink costs.
The miniaturization of electronic components was one of the major improvements of the last decades. Space agencies followed this trend and satellites became more and more complex. However, with smaller components, satellites became more susceptible to space radiation. Indeed, satellites are not protected by the atmosphere, and defects caused by energetic particles can occur. These events are called "Single Event Effects" (SEE) and, as they can be destructive, it is important to protect satellites against them. However, Single Event Effects are random and can take many forms, so detecting and preventing them is complicated.
With the emergence of AI algorithms, new diagnostic methods have appeared. The goal of this thesis project is to improve threshold-circuit protection using machine learning algorithms. The algorithms should be able to "learn" the nominal behavior of the satellite in order to detect a defect caused by radiation.
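The idea of "learning the nominal behavior" can be sketched with a deliberately simple statistical baseline (the thesis targets ML algorithms; the telemetry signal, thresholds and upset below are all invented for illustration): learn the nominal envelope from telemetry, then flag samples that deviate far beyond it, instead of using a fixed hand-set threshold:

```python
import math
import random

random.seed(42)

# Nominal telemetry: a periodic signal with small sensor noise.
PERIOD = 64
nominal = [math.sin(2 * math.pi * t / PERIOD) + random.gauss(0.0, 0.05)
           for t in range(1000)]

# "Learning" phase: fit mean and spread per phase bin of the period.
bins = [[] for _ in range(PERIOD)]
for t, x in enumerate(nominal):
    bins[t % PERIOD].append(x)
mean = [sum(b) / len(b) for b in bins]
std = [math.sqrt(sum((x - m) ** 2 for x in b) / len(b))
       for b, m in zip(bins, mean)]

def is_anomaly(t, x, k=6.0):
    """Flag samples outside the learned per-phase envelope."""
    i = t % PERIOD
    return abs(x - mean[i]) > k * std[i]

# A radiation-induced upset: a sudden jump superimposed on the reading.
t_sev, upset = 500, math.sin(2 * math.pi * 500 / PERIOD) + 2.0
```

The learned envelope follows the signal's own shape, so a deviation that would be invisible to one global threshold is still caught; a learned model (rather than per-phase statistics) generalises this idea to multi-channel, non-periodic behavior.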
The aim of this presentation is to assess the feasibility and on-board hardware performance requirements for on-board telemetry forecasting by implementing a Recurrent Neural Network (RNN) on a low-cost multicore RISC-V microprocessor. Gravity field and steady-state Ocean Circulation Explorer (GOCE) public telemetry data was used for training RNNs with different hyperparameters and architectures. The prediction accuracy of these models was evaluated using mean error and the R-squared score on the same test dataset. The implementation of the RNN on a RISC-V embedded device, representative of future space-grade hardware, required some adaptations and modifications due to the computational requirements and the large memory footprint. The algorithm was implemented to run in parallel on the 8 cores of the microprocessor, and tiling was employed for the weight matrices. Further considerations have also been made for the approximation of the sigmoid and hyperbolic tangent activation functions.
Index Terms: Deep Neural Networks, RISC-V, Space Systems, Artificial Intelligence
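Two of the adaptations mentioned, tiling of the weight matrices and approximating the activation functions, can be sketched as follows (illustrative Python, not the RISC-V implementation itself; the rational tanh approximation shown is one common choice, not necessarily the one used in the work):

```python
import math

def matvec_tiled(W, x, tile=2):
    """Row-tiled matrix-vector product: process `tile` rows at a time,
    as one might to fit weight tiles into a small local memory."""
    y = []
    for r0 in range(0, len(W), tile):
        for row in W[r0:r0 + tile]:               # one tile of rows
            y.append(sum(w * v for w, v in zip(row, x)))
    return y

def tanh_approx(x):
    """Cheap rational approximation of tanh, avoiding exp() on-device."""
    if x < -3.0:
        return -1.0
    if x > 3.0:
        return 1.0
    return x * (27.0 + x * x) / (27.0 + 9.0 * x * x)

W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
x = [1.0, -1.0]
y = matvec_tiled(W, x)                            # identical to the naive result
err = max(abs(tanh_approx(t / 100) - math.tanh(t / 100))
          for t in range(-400, 401))              # worst-case error on [-4, 4]
```

Tiling changes only the traversal order, not the result, so accuracy is untouched; the activation approximation does trade a small, bounded error for the removal of transcendental function calls.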
The future Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will provide continuous acquisitions during the daylight part of the orbit, with numerous bands in the VNIR and SWIR domains. Considering the significant presence of clouds hiding the ground, work has been performed in the frame of CHIME A/B1 to explore the possibilities of increasing the on-board data reduction with a selective compression applied to the clouds. The compression chain includes a Machine Learning-based cloud detection built on a Support Vector Machine (SVM) approach, selected for its performance and high adaptability to future evolutions. The SVM is defined with appropriate spectral bands and indexes, and the training is performed on-ground, making the cloud detection implementable on-board. The output cloud map is then used by a selective compressor based on the CCSDS 123.0-B-2 standard to apply a higher loss to pixels detected as cloud. The design has been coded in VHDL and C (transformed into VHDL using High-Level Synthesis techniques) and validated on a Xilinx evaluation board mounting a KU040 FPGA, a device representative of flight hardware. The results provide an estimation of the FPGA resource needs and will be used to select the CHIME flight FPGA.
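The selective-compression idea, applying a larger loss where the cloud mask is set, can be sketched with simple uniform quantisation (an illustration of the principle only; the actual chain uses a CCSDS 123.0-B-2 near-lossless compressor, which is not shown here, and the pixel values and step sizes are invented):

```python
def quantise(value, step):
    """Uniform quantisation: a larger step means fewer bits and more loss."""
    return step * round(value / step)

def selective_compress(pixels, cloud_mask, step_clear=2, step_cloud=32):
    """Quantise cloudy pixels more aggressively than clear ones."""
    return [quantise(p, step_cloud if cloudy else step_clear)
            for p, cloudy in zip(pixels, cloud_mask)]

pixels = [101, 207, 155, 903]          # toy radiance samples
mask = [False, True, False, True]      # True = detected as cloud by the SVM
recon = selective_compress(pixels, mask)
errors = [abs(p - r) for p, r in zip(pixels, recon)]
```

The reconstruction error stays bounded by half the step size in each class, so ground pixels keep near-lossless quality while cloud pixels absorb most of the rate saving, which is exactly the trade the on-board chain is after.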
The historic success of the recent ESA Φ-sat-1 mission has demonstrated for the first time that COTS hardware acceleration of AI inference on a satellite payload in-orbit is now possible. The Deep Learning cloud detection solution deployed on Φ-sat-1 utilises an Intel Movidius Myriad 2 vision processor for inference compute. The Myriad has performance-per-watt and radiation characteristics that make it ideally suited as a payload data processor for satellite deployments, providing state-of-the-art Neural Network (NN) compute within an industry-low power envelope. Building on the hardware and software deployed on Φ-sat-1, the UB0100 CubeSat board is the next generation AI inference and Computer Vision (CV) engine that addresses the form factor and interface needs of CubeSats while exposing the compute of Myriad to the payload developer. This presentation discusses the requirements of an AI CubeSat payload data processing board (hardware, firmware, software), and demonstrates how the UB0100 solution addresses these requirements through its custom CubeSat build. An overview of the CVAI software that runs on the UB0100 will show how, in addition to AI inference and integration with popular AI frameworks, the user now has direct access to the hardware-accelerated vision functionality of the Myriad VPU. This unlocks combined image pre-processing and AI compute on a single device, enabling direct processing of data products at different levels on-satellite. The flexibility provided to the user by the UB0100 solution will be demonstrated through a selection of use cases.
This presentation will focus on the High Performance Compute Board (HPCB) development. Including both a high-performance Xilinx CU060 FPGA and high-performance software-programmable VPU/VLIW multi-core processors (Myriad 2), the board will provide very high-performance computational resources on board spacecraft to handle higher payload data rates and allow processing before downlink, reducing bandwidth requirements and improving the reaction times of space systems.