The European Workshop on On-Board Data Processing (OBDP2019) covered topics related to on-board data processing for space applications. The event aimed to provide an interface between on-board processing equipment manufacturers and data processing users.
OBDP is co-organised by ESA, CNES and DLR and is hosted at ESTEC in the Netherlands. The 2019 edition of the event took place in the High Bay of the Erasmus Building.
The workshop brings together representatives from the European space agencies and industry to create a forum for presentations on past, current and future data processing topics and to organise round tables on the future development of on-board data processing systems and devices.
At OBDP2019, the round-table sessions aimed to extract the needs and possibilities for on-board data processing in the future, as an input for future development activities.
All material presented at the workshop must, before submission, be cleared of any restrictions preventing it from being published on the OBDP website.
OBDP2019 presentations and papers can be found in the "Timetable" tab in the menu, under each presentation title.
The host of the event will give a brief introduction to the workshop and the technical topics of the first day.
NASA’s Office of the Chief Technologist has recently developed a set of Strategic Technology Roadmaps that call out the need for more capable on-board flight computing. Future missions will necessitate robust and reliable on-board computing elements to execute the functions of pinpoint landing and hazard avoidance, and rendezvous, proximity operations and capture. The success of these types of missions is directly tied to the availability of high-performance space-qualified computing elements. Similar to ESA, NASA understands that advances in high-performance, low-power on-board computers are central to more capable space operations in support of robotic science, robotic spacecraft servicing, and crewed exploration missions. Many of the challenges posed by the complex objectives of future missions can be mitigated by making decisions in situ, in the flight environment, on board the spacecraft. This goal is coupled with the widespread need for increased autonomy on all classes of missions, especially for the functions of autonomous Guidance, Navigation, and Control.
In particular, advanced space-qualified on-board processor performance requirements will be driven by the challenges of the optical (vision-based) terrain-relative navigation functions that are intrinsic to performing autonomous planetary and small-body landing and hazard avoidance, as well as the similar optical target-relative navigation functions that permit autonomous space rendezvous, proximity operations and docking. Advanced on-board computing is of high importance to NASA since optical Terrain Relative Navigation (TRN) systems are baselined on upcoming robotic landing missions to the Moon and Mars. Likewise, there is an Intelligent Lander System (ILS) in development at NASA JPL for a Europa lander concept. JPL is also developing a Lander Vision System (LVS) for the Mars 2020 mission. In addition, the need to perform autonomous in-orbit space platform assembly and servicing, as well as in-flight planetary/small-body sample retrieval, also imposes a significant demand for on-board optical navigation processing capability. Beyond just achieving higher processing capabilities, the Size, Weight and Power (SWaP) requirements for on-board computing elements will also need to be directly addressed, especially for the significantly mass-constrained planetary/small-body exploration and landing missions.
This proposed OBDP2019 presentation will cover three distinct, but closely related, areas. In the first part of this presentation a brief summary of the NASA technology roadmaps for advanced space computing will be provided and the current set of driving requirements will be defined.
In the second part of this presentation, the on-going work that NASA’s Space Technology Mission Directorate (STMD) is sponsoring to develop a High Performance Spaceflight Computing (HPSC) “chiplet” will be discussed. The HPSC chiplet is a 64-bit RISC multicore radiation-hard flight processing chip for use within a general-purpose processor. Briefly stated, the HPSC chiplet technology is conceived, in a reference spacecraft avionics architecture, as a dual quad-core building block, with provisions for extensibility and interoperability with other computing devices, with native architectural support for power scaling and energy management, and with hosting of software-based fault tolerance methods. A multi-year technology development plan has been formulated by NASA and our industrial partner (the Boeing Company) that is expected to deliver a next-generation rad-hard space processor based on the ARM processor architecture, providing optimal power-to-performance together with upgradeability, software availability, ease of use, and affordability. The HPSC project will use Radiation Hard By Design (RHBD) standard cell libraries, as well as the ARM A53 processor with its internal NEON Single Instruction Multiple Data (SIMD) design. The ultimate goal of NASA’s HPSC activities is to develop a next-generation flight computing system addressing the computational performance, energy management, and fault tolerance needs of NASA missions through 2030. A description will be provided of an envisioned Descent & Landing Computer (DLC), which will be architected to include the HPSC multicore ARM A53 plus Field-Programmable Gate Arrays (FPGAs) as well as multiple interfaces for navigation sensors such as lidars, laser altimeters, visible and infrared cameras, and IMUs. This DLC technology is currently at TRL 3, with plans to be matured to TRL 5 by FY2020.
The third portion of this presentation will consist of a look into the future, well beyond the current HPSC technology. A new, higher requirements bar will be described to meet the on-board processing challenges of supporting optical navigation and other GN&C-critical functions (e.g. autonomous low-thrust guidance, autonomous maneuver planning, autonomous fault protection) in the time frame beyond 2030. The need for reconfigurable computing capabilities to support various GN&C functions will also be discussed in this third and last section of the presentation.
This presentation gives an overview of the evolution of digital needs for satcom. Drawing on past and current CNES activities in the field, it presents the impact of the new possibilities offered by digital products on payload architectures and telecom missions.
The presentation aims to highlight the trends and future technology needs for on-board Earth Observation data processing.
Starting from trends in current and planned ESA Earth observation missions, the presentation will analyse the future needs and technologies required to support the Earth Observation Programme of the European Space Agency.
Space robotics covers a wide field of applications, ranging from exploration tasks (robotic vehicles with measuring equipment [1]) to servicing or maintenance tasks (on-orbit servicing or deorbiting of satellites [2], [3]). These tasks involve different grades of autonomy, from telepresence scenarios [4] to semi-autonomous supporting tasks [3] to completely autonomous applications. The system complexity of the robot varies accordingly, from small modules developed for a dedicated task [5] to highly complex, versatile robotic systems (e.g. a rover for exploration [1] or a humanoid robot performing tasks on a planetary surface in a supervised-autonomy scenario [6]). With regard to data processing, however, all robotic systems follow the same approach: a control model processes data sets acquired from sensors into data sets for actuators to obtain the desired reaction of the robot. Depending on the task and the robotic system, this data processing may have to be performed by anything from a single processor up to a distributed system. Highly complex robotic systems in particular, with cascaded control loops spatially distributed over arbitrary processing units, impose strict real-time requirements regarding synchronization and main control loop frequency [7]. These requirements, together with the introduction of big data and machine learning approaches into the robotics domain, pose a major challenge to on-board data processing.
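The common sensor-to-actuator pattern described above can be sketched as a fixed-rate control loop with deadline monitoring, the kind of real-time constraint referred to in [7]. This is a minimal illustration only; all function names and the 1 kHz rate are assumptions, not taken from the presentation:

```python
# Minimal sketch of the common robotic data-processing pattern: a fixed-rate
# control loop mapping a sensor data set to an actuator data set while
# monitoring its own real-time deadline. All names are hypothetical.
import time

PERIOD_S = 0.001  # assumed 1 kHz main control-loop frequency

def read_sensors():
    return {"joint_pos": 0.0}          # placeholder sensor data set

def control_model(sensor_data):
    # map the acquired sensor data set to an actuator data set
    return {"joint_torque": -0.5 * sensor_data["joint_pos"]}

def write_actuators(cmd):
    pass                               # placeholder actuator output

next_deadline = time.monotonic()
for _ in range(10):                    # a few cycles for demonstration
    cmd = control_model(read_sensors())
    write_actuators(cmd)
    next_deadline += PERIOD_S
    slack = next_deadline - time.monotonic()
    if slack < 0:
        print("deadline miss:", -slack)  # real systems must bound this
    else:
        time.sleep(slack)
```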
New space applications and technologies induce a significant increase in on-board data processing performance requirements, while low power consumption remains a constraint and reconfigurability becomes a must. These targets for future space data processors cannot be met by relying only on classical rad-hard devices, as has mostly been done so far. To cover the needs of the different future mission categories and markets, the roadmap has to be built on two axes addressing:
- Highly robust and high-performance space data processor products based on a family of "rad-hard" or "rad-tolerant" technologies, with devices such as RTG4, GR740, HPDP, BRAVE, DAHLIA...;
- Data processing products built on the basis of "Commercial Off The Shelf" (COTS) components with adapted radiation-effects mitigation techniques. Such components are selected from the growing non-space embedded-systems markets using mainstream advanced micro-electronics technology. FPGAs, MPSoCs, GPUs and other new parallel and/or heterogeneous processing devices can provide extreme performance, thus enabling a new kind of Low-Earth-Orbit application for the "new-space" market;
This presentation will address driving requirements and technology trends for our on-board high performance data processing roadmap with an overview of the advanced studies activities performed by the space division of Airbus Defence and Space Engineering.
The JUpiter ICy moon Explorer (JUICE) spacecraft will provide a thorough investigation of the Jupiter system in all its complexity with emphasis on the three potentially ocean-bearing Galilean satellites, Ganymede, Europa and Callisto, and their potential habitability.
The JUICE spacecraft will carry a powerful remote sensing, geophysical, and in situ payload complement consisting of 10 instruments. Each instrument includes a Digital Processing Unit (DPU) and a dual redundant SpaceWire interface towards the Command and Data Management Unit (CDMU) and its embedded Solid-State Mass Memory (SSMM).
This presentation describes the activity led by the ESA project to procure a common DPU core and Basic Software (BSW) package, aiming at the harmonization of the interface towards the CDMU to reduce development and integration risks.
The resulting hardware and software platform is based on the GR712RC LEON3-FT and was developed by Cobham Gaisler in accordance with the SAVOIR BOOT software generic specification and the system requirements common to the 10 instruments.
The radiation hardened DPU platform features EDAC protected boot, application memory and working memory of configurable sizes, and SpaceWire, FPGA I/O-32/16/8, GPIO, UART and SPI I/O interfaces.
The hardware design has undergone PSA, WCA, radiation and other analyses to justify component and design choices, resulting in a robust design that can be used in spacecraft requiring a total dose of up to 100 krad(Si).
The validated BOOT software includes low-level DPU initialization, and the Standby Mode (part of the BOOT) handles SpaceWire/PUS communication with the CDMU as well as the selection, loading and execution of the instrument-specific Application Software.
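As an illustration of the boot flow described above, the following toy sketch shows low-level initialisation followed by a Standby Mode that serves telecommands until an Application Software image is selected, loaded and started. This is our assumed reading of the control flow, not the actual JUICE Basic Software, and all names are hypothetical:

```python
# Toy sketch of a boot flow: low-level init, then Standby Mode serving
# PUS-style telecommands from the CDMU until an Application Software (ASW)
# image is loaded and started. Illustrative only.
def low_level_init():
    print("init: CPU, EDAC, memories, SpaceWire link")

def standby_mode():
    # simulated telecommand stream from the CDMU
    telecommands = [("HK_REQUEST", None), ("LOAD_ASW", "image_A"),
                    ("START_ASW", "image_A")]
    selected = None
    for tc, arg in telecommands:
        if tc == "HK_REQUEST":
            print("standby: send housekeeping TM")
        elif tc == "LOAD_ASW":
            selected = arg
            print(f"standby: load {arg} into application memory (EDAC-checked)")
        elif tc == "START_ASW" and arg == selected:
            print(f"standby: jump to {arg} entry point")
            return arg

low_level_init()
standby_mode()
```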
Among 10 instruments, 7 use the common BOOT software, 6 of which also adopted the common DPU design.
Both DPU and Basic Software products are suitable for reuse in other scientific missions.
Selecting the right technologies for a Next Generation Data Handling System is of significant importance, especially for missions such as the Deep Space Gateway Habitat and Esprit modules.
Furthermore, the design and verification processes shall be simplified and standardized to allow more efficient development and maintenance.
Intelligent devices, distributed modular avionics, run-time reconfigurable flight software frameworks and platform abstractions are evolving significantly and some are already used in different space missions.
Standardized open architectures and interfaces are the basis for such an approach.
Airbus DS will use the existing OBC-SA/SPINAS infrastructure to implement an Integrated Modular Avionics demonstrator for the Deep Space Gateway Habitat and Esprit avionics and software architecture.
A wide range of other applications is also possible, since the architecture is flexible and modular.
In the field of Vision-Based Navigation (VBN), the architecture is used to build a demonstrator in the ESA COMRADE study, and it is a candidate for the Landing Processing Unit of the PILOT programme (lunar lander). These VBN applications benefit from the number-crunching abilities of the architecture.
Furthermore, Robotic Control, including tactile control systems, is another promising application. The mix of required high performance and required high reliability places high demands on the avionics subsystems, but these are fulfilled by the SPINAS architecture. Promising first applications are SpaceTug and Robotic Arm Control on the ISS Bartolomeo platform.
Finally, future applications such as autonomous systems are important points to be considered by today's avionics developments. Airbus DS is preparing different approaches to provide the processing capability for Artificial Intelligence (AI) in orbit to support real-time monitoring. These processing capabilities can be added to SPINAS-based processing hardware.
The SPINAS Next Generation Data Handling System comprises three main ingredients:
Airbus Defence and Space Bremen has developed a multi-purpose space infrastructure computer, which is configurable and scalable with respect to function and redundancy. The radiation-hard, high-reliability 4-core CPU is based on the GR740 LEON4 implementation and is accompanied by an interface module (an FPGA-based peripheral extension card). The EQM is available, and the associated environmental tests for this computer will be completed in Q4 2018.
To ensure the needed flexibility, the system builds on the "CompactPCI Serial Space" standard. This guarantees the extensibility of the system, as HW components can easily be extended or altered thanks to the fully standardized backplane. Several additional boards from different vendors already exist (ranging from additional interfaces to high-performance CPU boards).
The operating system is a real-time operating system which provides capabilities for time and space partitioning. This means that each application can be developed by a different entity and safely integrated, as one common platform is ensured. It enables distributed SW development and helps to reduce costs, as a "make-or-buy" strategy is considered from the start.
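A minimal sketch of the time-partitioning idea follows; partition names and window lengths are invented for illustration, while the real system uses a qualified TSP RTOS schedule:

```python
# Minimal sketch of static time partitioning as provided by a TSP real-time
# OS: each partition owns fixed time windows in a repeating major frame, so
# applications from different suppliers cannot starve each other.
MAJOR_FRAME_MS = 100
schedule = [("platform_sw", 40), ("payload_app", 40), ("spare", 20)]

assert sum(d for _, d in schedule) == MAJOR_FRAME_MS  # frame fully allocated

t = 0
for partition, duration in schedule:
    print(f"{t:3d}-{t + duration:3d} ms: run {partition} (own memory space)")
    t += duration
```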
On top of this modular architecture sits the software framework layer. The second key technology implemented is the extension of the cFS (NASA's core Flight System) with mission-specific apps. cFS defines a standardized communication layer between applications, which enables easier development, integration, and testing of such applications.
Furthermore, cFS enables more flexibility, as stopping, restarting and even re-allocating applications from one computer to another is possible, which is of special interest for developers and integrators.
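The decoupling that such a standardized communication layer provides can be sketched as follows; this is a toy publish/subscribe model in the spirit of the cFS software bus, not the actual cFS API:

```python
# Toy model of a software-bus communication layer: applications exchange
# messages by message ID instead of calling each other directly, which is
# what makes them easy to integrate, restart or relocate.
from collections import defaultdict

class SoftwareBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, msg_id, app_callback):
        self.subscribers[msg_id].append(app_callback)

    def publish(self, msg_id, payload):
        for cb in self.subscribers[msg_id]:
            cb(payload)

bus = SoftwareBus()
bus.subscribe("IMG_DATA", lambda p: print("compressor app got", p))
bus.subscribe("IMG_DATA", lambda p: print("downlink app got", p))
bus.publish("IMG_DATA", {"frame": 42})   # both apps receive it, fully decoupled
```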
Finally, the main communication channel is based on TT-Ethernet (Time-Triggered Ethernet), which enables safe, stable, and deterministic communication between systems and applications.
Due to the time-triggered approach, safety-related messages arrive on time and the network load remains well balanced.
To conclude, by using these three technologies and combining them into one computer platform, we are confident that we can build the Next Generation Data Handling System, for the Deep Space Gateway modules and all other missions.
The development of a High Performance Processing capability is necessary to face future space application needs. One of the technology paths explored by Airbus Defence and Space is to build generic and flexible platforms enabling the use of Commercial Off The Shelf (COTS) components through appropriate and scalable radiation-effects mitigation techniques. The ESA study “High Performance COTS Based Computer” has validated an architectural concept with a prototype implementation. This concept, with a mitigation mechanism based on a SmartIO element implementing the monitoring and error mitigation functions, has been applied and tailored for implementation on key programmes: COTS ARM processors and reconfigurable FPGA devices are now used within, e.g., central computer products for LEO orbit or within high-performance instrument processing devices in GEO.
The COTS Based Reliable Architecture (COBRA) is one of the ongoing developments of the Airbus Defence and Space Real-Time Lab in Toulouse. This development targets a new generation of very high performance, generic and scalable fault-tolerant architecture demonstrators for on-board payload processing, containing cost and power consumption by enabling the on-board use of the latest generation of MPSoC devices, which feature extremely high performance. A TRL 4 demonstrator has been developed with Xilinx Zynq UltraScale+ devices and Serial RapidIO data links to achieve multi-Gbps of processed data throughput and is currently being evaluated. The first evaluation results will be presented with the application use case developed for Vision-Based Navigation in support of robotic applications. Further use cases may also apply in the future for various on-board processing domains, such as image, telecom or radar signals, or for supporting innovative algorithms aiming at, e.g., artificial intelligence on-board.
Reconfigurable architectures can enhance on-board processing with unprecedented levels of flexibility, enabling the adaptation of the system to functional and/or fault-tolerance requirements that may change during mission lifetime. When combined with commercial off-the-shelf (COTS) devices, they lead to cost-effective solutions which can benefit from state-of-the-art devices and technologies. However, additional fault-tolerance mechanisms are required to maintain the reliability rad-hard devices provide.
In this work, a Reconfigurable Video Processor (RVP) for space applications has been implemented on a Zynq UltraScale+ device. The diversity of computing fabrics is exploited to harden the system against the occurrence of SEUs and to improve the performance of certain tasks using hardware acceleration. In fact, the ARTICo3 architecture is available in the hardware fabric, enabling run-time tradeoffs between computing performance, energy efficiency and fault tolerance. Hierarchical scrubbers (slow software-based readback scrubbers, fast hardware-based error redundancy coding scrubber) complement the hardware redundancy provided by ARTICo3. The software fabric relies on the RTEMS operating system running on the Cortex-R5 processors operating in lock-step mode to ensure real-time behaviour.
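The software-based readback scrubbing mentioned above can be illustrated with a toy model; plain lists stand in for configuration frames here, whereas the real scrubber works on FPGA configuration memory through dedicated readback interfaces:

```python
# Toy model of a readback scrubber: periodically read back configuration
# frames, compare them with a golden copy, and rewrite any frame an SEU
# has corrupted. Values are arbitrary illustration data.
golden = [0xDEADBEEF, 0x12345678, 0xCAFEBABE]      # golden configuration
config = list(golden)

config[1] ^= 0x00000400                            # inject a single bit-flip

def scrub(config, golden):
    repaired = 0
    for i, (frame, ref) in enumerate(zip(config, golden)):
        if frame != ref:
            config[i] = ref                        # rewrite corrupted frame
            repaired += 1
    return repaired

print("frames repaired:", scrub(config, golden))   # -> 1
```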
As an example, a lossy hyperspectral image compressor based on the CCSDS 123.1 standard has been implemented. This application is executed using a variable number of hardware accelerators, obtaining a speedup of 10x when compared against a software-based implementation. Moreover, all fault mitigation techniques have been tested using fault injection to change the configuration memory of the FPGA. The design of the platform, algorithms and hardware/software components has been driven by industrial requirements in the context of the ENABLE-S3 project (ECSEL).
RUAG Space has designed a very powerful and flexible processing board - Lynx. This product is ready to meet tomorrow's processing needs, driven by increased digitalisation and new services such as Software Defined Radio, real-time Image Processing, Artificial Intelligence and enhanced Compression techniques.
Lynx is designed around a highly performant Processor connected via PCI-express to a modern FPGA, managing the external interfaces. This modular design makes the Processor and FPGA easily exchangeable, allowing different flavours of the product to achieve an optimal balance between price, performance and radiation hardening for a particular application. The Lynx product provides the following main characteristics:
Performance - Multi-core ARM CPU of more than 10,000 Dhrystone MIPS (DMIPS) provides ample general-purpose processing power. Modern FPGA for implementation of critical processing tasks.
Connectivity - Provides custom high-speed interfaces using SERDES as well as PCI-express to interface e.g. High Speed DAC/ADC, GPU, DSP or FPGA. Also provides traditional interfaces such as SpaceWire, M1553, CAN, GPIO and UART to interface other units.
Flexibility - Prepared for several levels of flexibility; Processor and FPGA choice for optimal price/performance/reliability balance, Mezzanine connector allows for daughter board extension, several boards can be connected to increase performance or implement redundancy/voting mechanisms, only the necessary interfaces are implemented.
Reliability - State-of-the-art error detection and correction capabilities to cope with the use of commercial components with retained reliability. Radiation hardened alternatives are available for missions with very high availability/reliability requirements.
The new RTEMS version 5 provides full support for SMP in real-time space applications. GMV, with its TSP/IMA AIR hypervisor, virtualizes RTEMS 5 and fully supports the RTEMS SMP implementation on multicore on-board computers, opening the door to highly performant solutions for applications such as Remote Terminal Units as well as buses & communication protocols.
GMV with AIR is currently developing two major use cases demonstrating these capabilities on both the GR740 and NGMP (N2X) on-board computers, based on the LEON4 multicore processor.
The first use case is being developed under ESA's MORA-TSP (Multicore Implementation of the On-Board Software Reference Architecture with IMA Capability) activity. AIR is used as the hypervisor of an upgrade to ESA's EagleEye virtual space mission; within the activity, the execution platform of the On-Board Software Reference Architecture (OSRA), as defined by the SAVOIR Advisory Group, is upgraded to support TSP and SMP.
The second use case is being developed under ESA's GNSSW-LEON4 (GNSS SW for Space LEON4) activity: an on-board Software Defined Radio GNSS receiver has been implemented for the GR740 LEON4 on-board computer, harnessing the use of multi-core through RTEMS SMP and the AIR hypervisor.
The preliminary results of both use cases will be presented, comparing AIR with RTEMS in SMP and AMP modes; several new partition and data handling scenarios were built as a consequence of supporting SMP inside a partition.
Under the same activities, GMV upgraded the IO Partition; AIR's specialized dedicated communications partition will be showcased, together with its usefulness in providing data to multiple subsequent independent OBDH partitions. The new IO Partition currently supports SpaceWire, Ethernet, MIL-STD-1553 and CAN bus.
Based on the experience acquired in both use cases with the new IO solution, new possible data handling scenarios were identified, given the flexibility of dedicating tasks to a single processor core or to multiple cores under TSP scheduling, allowing more efficient management of the communications of the multiple independent OBDH applications executed on a single multicore computer.
We present the Data Processing Units (DPUs), based on Commercial Off-The-Shelf (COTS) electronic devices, for two different instruments aboard the Sunrise-III stratospheric balloon solar observatory. The DPU performs the entire high-level control of each instrument and carries out the on-board data processing, which consists of a very demanding real-time image acquisition and processing pipeline. We explain the main design milestones of this DPU approach and compare it with our previous DPU developments for both balloon and space instruments. Special attention is devoted to design features for dealing with the on-board data processing: hardware devices, software tools, and the availability and affordability of both.
Sunrise-III is the third edition of a balloon mission that takes a one-meter solar telescope to the stratosphere. It will be launched from Kiruna (Sweden) in the summer of 2021. Sunrise-III will fly to Canada above the Arctic Ocean for approximately six days. Such a trajectory and altitude enable continuous observation of the Sun, 24 hours a day.
In this work we present the Data Processing Unit (DPU) design for two payload instruments aboard Sunrise-III. On the one hand, IMaX+ (Imaging Magnetograph eXperiment Plus) will study solar magnetic fields at high spatial resolution (100 km on the solar surface). It images the solar surface magnetic field by alternately measuring the state of polarization of light within two selected spectral lines. On the other hand, SCIP (Sunrise Chromospheric Infrared spectro-Polarimeter), a spectrograph-based instrument, is designed to observe two spectral regions at once. By combining these spectral regions, it can cover the photosphere and the chromosphere and obtain the 3D magnetic and velocity structure of the solar atmosphere.
Both instruments are very complex, involving several mechanisms, optical elements, and hardware devices for carrying out the scientific observation modes. The DPUs perform the high-level control of the entire instruments and carry out the on-board data processing. Each instrument uses one instance of the same DPU design, with some particular features for addressing specific scientific aims. The DPU performs the real-time acquisition, processing, and management of the images. Telemetry is critical during the flight: we can barely communicate with the DPU from the ground, as only a few telecommands can be sent and some status information received. This implies a complete processing and compression of the collected images and other data prior to their being stored on board. Finally, the gondola, the payload instruments, and the valuable data are retrieved after a controlled landing.
Basically, the ad-hoc DPU design is split into two main processing blocks: an NVIDIA Jetson TX2 module, which acts as the system controller, and a Xilinx FPGA (XCKU040), which behaves as a frame grabber. Both devices are connected by a high-bandwidth PCI-express bus. The NVIDIA device contains an ARM multiprocessor architecture and a complete Linux-based operating system. It is suitable for carrying out tasks such as storage management and communications with the balloon platform and with the ground support software. The frame-grabber FPGA controls the most intensive image processing tasks, namely the acquisition, polarization demodulation, accumulation, and lossless compression of the images. It is connected directly to the cameras: 3 cameras in the SCIP case and 2 cameras in the IMaX+ one. The links are based on CoaXPress, with a throughput of 3.125 Gbps per camera, corresponding to up to 48 frames per second of 2,048 x 2,048 pixels. Our proposed FPGA architecture uses two DDR4 memory banks for dealing with the image streams in real time. Using this DPU approach we can reduce the data down to almost 10% for the SCIP instrument and 4% for IMaX+ by means of the on-board processing, without sacrificing polarimetric information in the data.
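A quick back-of-the-envelope check of the quoted per-camera figures, assuming roughly 16 bits per pixel (the exact pixel depth is not stated above):

```python
# Sanity check of the per-camera link load implied by the figures above.
frames_per_s = 48
pixels = 2048 * 2048
bits_per_pixel = 16                       # assumed pixel depth
rate_gbps = frames_per_s * pixels * bits_per_pixel / 1e9
print(f"{rate_gbps:.2f} Gbps")            # ~3.22 Gbps, the same order as the
                                          # 3.125 Gbps CoaXPress line rate
```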
Since the budget for this type of mission is only a fraction of that of a space one, we use a design based on COTS devices. We do not use a redundant design, but our electronics is protected in a pressurized and temperature-controlled box. The reason is twofold: on the one hand, we can use commercial devices without any concern about vacuum; on the other hand, we can manage the necessary high-voltage power supply, which would not have been possible under stratospheric pressure conditions (of a few millibars). The development time is another important factor to take into account: the use of general-purpose, modern devices together with state-of-the-art software tools sharply reduces the engineering effort.
This is the second DPU generation for the Sunrise missions. This time we have designed our own specific DPU board, whereas in previous flights we used a DPU composed entirely of commercially developed boards. Our current approach reduces the number of final devices from a set of several DSPs, FPGAs and processor boards to a single board containing the NVIDIA and Xilinx devices. In this talk, we will explain how we have taken advantage of the previous expertise and of the new technology and tools during this new development.
Additionally, after the first two flights of Sunrise, we have been developing another instrument: the Polarimetric and Helioseismic Imager (PHI) aboard the Solar Orbiter mission, where the scientific targets are similar but the electronic development is completely space-focused. During the presentation of our DPU design we will constantly compare the differences between the space and the COTS-based designs and will highlight the design trade-offs and the main lessons learned about affordability and availability.
Autonomous Vision-Based Navigation in a landing spacecraft involves very demanding processing tasks, as it must execute at a higher frequency than a space-qualified processor can provide, forcing the implementation of the algorithms in ad-hoc HW-accelerated solutions. HW/SW co-design is employed in order to use the FPGA as a HW accelerator implementing the most computationally demanding tasks, which are mostly the parallelizable computer vision parts. The navigation systems of on-going sample return missions to asteroids or planets, as well as the navigation, localization and mapping algorithms of future space robotic vehicles (rovers), can take advantage of these HW/SW implementations in System-on-Chip devices embedding FPGA and processor in a single chip. However, there is a lack of flexibility in the re-use of the HW and SW across different space mission phases that require different computer vision and navigation solutions. FPGAs are used in space as a substitute for the unaffordable development of ASIC components, but one of the main advantages of these devices is usually lost: HW reconfiguration, i.e. instantiating and interchanging different bitstream configurations of the FPGA logic at different moments.

We are proposing and evaluating, within the ENABLE-S3 European Commission project, a cost-efficient reconfigurable instrument providing multiple vision-based co-processing solutions within a mission, depending on the distance to the target and the phase of the mission. We evaluated Xilinx Zynq SoC and Zynq UltraScale+ MPSoC devices to host the implementation and reconfiguration of three different computer vision algorithms which cannot fit together in a single device. In current exploration and landing mission architecture designs, at least 3 FPGAs would be needed to host these 3 implementations. GMV collaborates with UPM to show the results on an avionics architecture that allows reconfiguring the same FPGA device (only 1 FPGA), simplifying the architecture, reducing mass and power budgets, and providing a single product that may be used over the different phases of the mission.

The scenario is defined by a spacecraft lander carrying a rover to be deployed on the surface of a planet. During the navigation phase towards the celestial body, in close-range operations, the spacecraft lander makes use of the camera installed in the rover for the descent and landing part, using an absolute-navigation image processing implementation in the FPGA, interchanged by fast HW reconfiguration with a relative-navigation one. Once the probe is on the ground, the FPGA is reconfigured once again for surface operations, to host a solution based on stereo-vision SGM disparity in the same FPGA. In order to allow safety-critical operations, high-speed FPGA reconfiguration is required; the re-programming time during which the FPGA is not operative is therefore a critical factor and one of the performance parameters presented in the project. The ARTICo3 architecture provides the reconfigurable framework in the Zynq devices to allow smooth interchange of the vision-based navigation modules at high frequency. The complete reconfigurable system is managed at SW level from one task running in the embedded ARM processors of the Zynq boards, integrated into the real-time operating system RTEMS.
For the 2nd year review of the ENABLE-S3 project we created a demonstrator of this solution, interfaced in closed loop with the Matlab-Simulink-based GMV-DL-Simulator in order to evaluate the HW-reconfigurable vision-based navigation performance in an emulated Phobos descent & landing scenario, including kinematics and models of the Phobos environment and the GMV implementation of the Phootprint GNC autocoded onto the embedded ARM processors.
The possibilities to observe and interact with any given spacecraft are naturally limited compared to ground-based systems due to a number of factors. These include but are not limited to the availability and bandwidth of their connection to ground, the availability of staff, communication latencies and power budgets.
While a minimum level of autonomy is required for every spacecraft, past experiments and missions have shown that introducing more sophisticated autonomy mechanisms can drastically increase the efficiency of many missions in terms of reliability, science output and required operational effort. This automation can also result in a significant drop in cost for missions that would otherwise require extensive human operation.
The emerging use of Commercial-Off-The-Shelf (COTS) components and their increased computational power opens the door for more complex mission scenarios including an increasing number of sensors and actuators to assess and influence the current status. This, however, also enlarges the search space for solutions regarding operation scheduling and planning and to estimate the environmental and health status of the spacecraft up to a degree that cannot be handled manually. Hence, there is an increasing need for mechanisms and algorithms to make spacecraft more self-aware and autonomous. This can also enable mission scenarios that require the spacecraft to come to its own decisions in uncertain environments and to operate without or with only limited human intervention.
The number of techniques and variants of artificial intelligence available in the literature is, however, just as diversified as their potential field of application.
To provide an overview of the current state of the art of artificial intelligence and its application for space systems, this paper provides an extensive survey on existing techniques and algorithms as well as existing and potential applications on board spacecraft and on ground. The survey focuses on autonomous planning and scheduling of operations, self-awareness, anomaly detection and Fault Detection Isolation and Recovery (FDIR), on-board data analysis as well as on-board navigation and processing of earth-observation data.
Specific technological innovations are required to accomplish more ambitious commercial and scientific goals for space missions. One of the key areas of potential innovation is mission autonomy: an increased degree of on-board autonomy helps in implementing more effective mission operations. In particular, functionalities like event detection, autonomous planning and goal management, if implemented on-board, introduce several benefits to the way operations are managed.
The characteristic that enables the presented level of autonomy is the ability to extract useful information from the data generated on the spacecraft, directly on board. At AIKO, we develop Artificial Intelligence on-board algorithms to automate spacecraft operations, and the present work explores the use of Deep Learning to enable state-of-the-art E4 autonomy on small-satellite-class missions. The paper will provide an overview of the algorithms currently under development (Convolutional and Recurrent Neural Networks), presenting the use cases (observation missions, telemetry processing) and an evaluation of the performance of the algorithms, including compatibility with current and future on-board processing units.
Modern imaging sensors create frames of several megapixels at a high frame acquisition rate. These imaging sensors are placed onboard spaceborne platforms and can greatly enhance their capabilities. However, an industry trend towards smaller satellites - with smaller antennas, less power and worse pointing accuracy - also leads to an expectation that the downlink capability will remain well below the data generation capability of such imaging satellites. In order to use more acquisitions and achieve a high 'usability' of the satellite, onboard processing of payload data is a possible solution. With the rise of smartphones and tablets, the world of onboard processing has at its disposal a multitude of powerful and energy-efficient computing platforms available on the market today.
The FONDA (Flexible ONboard Data Analysis) project will determine and test the technology platform that is best suited for onboard intelligent processing of imaging payload data. Within the project, requirements will be collected from leading industry partners pushing boundaries on payload capabilities, such as SSTL and Honeywell. A range of suitable technology platforms for onboard processing will be identified and evaluated across the industry and market requirements, ultimately leading to a set of recommendations for technologies optimally suited for high capacity image processing onboard small spaceborne platforms.
During the FONDA project, the use of deep learning-based image processing is investigated. This is a more recent approach to image processing that has shown good results in many imaging inverse problems (e.g. denoising, super-resolution, deconvolution). The use of deep learning requires large amounts of training data and large computational resources to learn how to produce the desired outcomes from the examples. This learning can be done offline, and the resulting model is relatively lightweight and can be used onboard for the image processing. We use our latest experience from deep convolutional neural networks for interpreting large multispectral datasets within the Copernicus program, to assess their suitability and usefulness for implementation onboard smaller spaceborne platforms.
The current state of the art for onboard processing on imaging satellites is to reduce the resolution and create thumbnails which can be used for determining interesting images to downlink. The FONDA project explores the technological capabilities for creating informative data products that could then be downlinked instead of raw images, thus maximizing useful output per downloaded data unit. This would widen the applicability of the microsatellite technology including areas such as early warning and rapid response systems.
Even if it should be possible to utilize intelligent onboard payload processing for creating the desired outputs, another aspect of the investigation concerns the concrete algorithms and implementation choices and their resulting impact on how performant such intelligent processing can be. This would provide additional insights into the usability of intelligent onboard imaging satellite systems and the desired operational output.
Satellite-based spectroscopic observations produce massive volumes of data at very high velocities. Analysis of such observations via deep learning algorithms, e.g. for the precise estimation of the redshift associated with individual galaxies, requires massive floating-point operations, generally performed on Graphics Processing Units (GPUs). This form of processing is fast but requires substantial energy for the computation, and thus necessitates the transmission of the acquired measurements to ground stations for processing in large data centers. Such data transmissions are increasingly becoming a bottleneck, as transmission speed improvements do not keep up with the pace of on-board data generation. An alternative technology to GPUs is Field Programmable Gate Arrays (FPGAs), which often require substantially less energy per computation than GPUs, but are considered too slow for deep learning-based inference. In this work, through the collaboration of two EU Horizon 2020 projects, namely EuroExa and DEDALE, we demonstrate experimentally that by using (i) appropriate data structures to reduce memory bandwidth, (ii) compressed fixed-point indices into clustered floating-point weights and (iii) massive pipelining, FPGA-based computing can yield classification accuracy extremely close to that of GPUs (of the order of 99% for top-one classification) at an order of magnitude less energy. For this work we compared optimized TensorFlow codes running on various GPUs against our proposed FPGA-based architecture for galaxy redshift estimation from extended-wavelength-range spectroscopic measurements, using simulated measurements in line with the publicly available specification of the upcoming ESA Euclid deep-space mission. We show, on actual runs in hardware, that the EuroExa-developed Quad FPGA Daughter Board (QFDB) offers substantially lower latency than a similar-technology Nvidia P1000 GPU for batches of any size, better throughput for batches of up to 30 images (which can scale out to any batch size), and roughly an order of magnitude better energy consumption for the same computations, thus becoming an interesting technology for on-satellite deep learning classification. An important aspect of this work is that the FPGA model used has an equivalent rad-hard part qualified for space applications.
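Technique (ii), compressed fixed-point indices into clustered floating-point weights, can be sketched as follows. This is a simplified illustration with arbitrary layer size and cluster count, not the actual EuroExa/DEDALE implementation:

```python
# Sketch of weight clustering: cluster the floating-point weights with plain
# k-means and store only small fixed-point indices into the cluster table,
# shrinking the memory bandwidth needed by the FPGA datapath.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4096).astype(np.float32)   # one layer's weights

k = 16                                               # 16 clusters -> 4-bit indices
centroids = np.sort(rng.choice(weights, k))
for _ in range(20):                                  # basic k-means iterations
    idx = np.abs(weights[:, None] - centroids[None, :]).argmin(axis=1)
    for j in range(k):
        if np.any(idx == j):
            centroids[j] = weights[idx == j].mean()

indices = idx.astype(np.uint8)                       # only 4 bits actually needed
reconstructed = centroids[indices]                   # values used at compute time
print("bytes:", weights.nbytes, "->", k * 4 + len(indices) // 2)  # packed 4-bit
print("max quantization error:", np.abs(weights - reconstructed).max())
```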
Deep learning is an enabling technology for many applications such as image processing, pattern recognition, object classification and even autonomous spacecraft operations. But there is a price to be paid: these methods are computationally intensive and require supercomputing resources, which can be challenging, especially on board a spacecraft. However, FPGA-accelerated computing is becoming a triggering technology for building low-cost, power-efficient supercomputing systems, which are accelerating deep learning, analytics, and engineering applications. The objective of this presentation is to present a dedicated FPGA-based Deep Neural Network (DNN) processing unit. This unit is built on top of the NVIDIA Deep Learning Accelerator (NVDLA), a standardized, open-source deep learning acceleration architecture. The NVDLA architecture provides interoperability with the majority of modern deep learning networks and frameworks, including TensorFlow. The unit takes a performance advantage from the parallel execution of a large number of operations, such as convolutions, activations and normalizations, which are fairly typical of DNN structures. The NVDLA was implemented in a Xilinx Zynq UltraScale+ MPSoC FPGA, providing a significant boost in terms of performance and power consumption compared to non-accelerated processing of DNNs. The main limiting factor for the use of unmodified NVDLA in space applications is its lack of fault tolerance; therefore, architecture modifications providing fault detection and triple modular redundancy are proposed. The implementation details and system-on-chip features will be summarized, and the DNN accelerator's efficiency in terms of performance and power consumption will be discussed during the presentation.
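The triple modular redundancy proposed for the accelerator follows the classic voting principle, sketched below on a toy bit-flip fault model. This is illustrative only; the proposed modifications operate at the hardware level:

```python
# Toy sketch of triple modular redundancy (TMR): run three copies of a
# computation and majority-vote the result so one faulty copy is out-voted.
def faulty(x):
    return x ^ 0x40                     # replica upset by a single bit-flip

def good(x):
    return x

def tmr_vote(a, b, c):
    # bitwise majority: a bit is 1 when at least two replicas agree on 1
    return (a & b) | (a & c) | (b & c)

x = 0x1234
result = tmr_vote(good(x), faulty(x), good(x))
print(hex(result))                      # -> 0x1234, the upset is masked
```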
Nanosatellites typically operate on the basis of scheduled, routine procedures, defined by users on the ground and dictated via pass uplinks. The development of machine learning algorithms, combined with constant advancement in the efficiency and capabilities of nanosatellite systems, has reached the point where artificial intelligence may be deployed on small satellites via low-power components such as FPGAs. The use of onboard AI facilitates a wealth of new capabilities and applications in nanosatellites, including improved processing and filtering of EO imagery and responsive onboard decision-making with a diminished reliance on schedule uplinks. As part of a larger focus on responsive and intelligent spacecraft operations, Craft Prospect is developing a CubeSat payload which identifies user-chosen features in imagery captured via an integrated camera. This payload, the Forwards Looking Imager (FLI), provides the satellite with advance knowledge of ground and atmospheric features ahead of the nadir, allowing it to make decisions on where to direct other onboard sensors, how to prioritise and filter downlink data, and when to respond to targets of opportunity. This presentation covers the development of the FLI, including system architecture, testing, performance metrics, capabilities and its integration into the standard CubeSat platform. The FLI marks the first system in a planned larger family of autonomy-enabling products, leveraging new technologies such as vision processing units (VPUs). This family is being developed in alignment with CCSDS's Mission Operations Service concept, and its roadmap will be illustrated within the context of fitting into this concept.
We present the deep learning platform (N2D2) and the neural network hardware IPs (PNeuro and DNeuro) developed at CEA, which are specifically tailored to design and integrate deep networks in highly constrained embedded systems (using low-power GPU, FPGA or ASIC). The software platform integrates database construction, data pre-processing, network building, benchmarking and optimized code generation for various targets. Hardware targets include CPU, DSP and GPU with plain C, OpenMP, OpenCL and TensorRT (+CUDA/cuDNN) programming models, as well as our own hardware IPs. We developed the PNeuro, a programmable DSP-like processor targeting ASIC SoCs, and the DNeuro, a dataflow RTL library targeting FPGAs. Both PNeuro and DNeuro are currently at the demonstrator level, in ASIC and FPGA respectively (the DNeuro is shown in the exhibit).
We report on the ongoing hardware and software developments to implement cloud screening in orbit and in real time. We leverage a Vision Processing Unit to accelerate state-of-the-art Artificial Intelligence algorithms applied to hyperspectral and thermal imaging data. The instrument on which the developments are implemented is HyperScout-2, the second generation of a very compact hyperspectral system whose first generation has been in space since February 2018. The experiments are expected to be carried out in orbit in the third quarter of 2019.
In this paper, we describe compression strategies currently under consideration in the H2020 EO-ALERT project. In particular, we investigate the performance of the CCSDS-123.0-B Issue 2 standard for image compression when used for the compression of synthetic aperture radar (SAR) raw data on board satellite systems.
The task of compressing SAR raw data presents a great challenge compared to the compression of optical images, as this kind of data consists of complex samples with low correlation among them. Furthermore, the compression algorithm must have low complexity due to the hardware constraints of satellite systems.
Historically, the most-used algorithm and the de-facto standard in the field of raw SAR compression is the BAQ algorithm [1]. This algorithm is based on the assumption that the data can be modeled as a complex random process, where the imaginary and real parts are independent Gaussian samples with a slowly varying standard deviation. The technique consists of partitioning the data into blocks, over which the process can be assumed stationary, followed by the quantization of the data inside the blocks using a Max-Lloyd quantizer. Several variants have been proposed, such as Entropy-Constrained Block Adaptive Quantization [2], Block Adaptive Vector Quantization [3] and Flexible Block Adaptive Quantization [4], which improve upon the performance of BAQ at the price of increased complexity.
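For illustration, a heavily simplified reading of the BAQ idea follows, using the fixed 2-bit Lloyd-Max levels for unit-variance Gaussian data; block length and input signal are arbitrary, and this is not flight code:

```python
# Simplified BAQ sketch: split the signal into blocks, estimate each block's
# standard deviation, and quantize normalized samples with a fixed 4-level
# (2-bit) Lloyd-Max quantizer designed for unit-variance Gaussian data.
import numpy as np

LEVELS = np.array([-1.510, -0.4528, 0.4528, 1.510])   # 2-bit Lloyd-Max levels

def baq_2bit(samples, block=128):
    out = np.empty_like(samples)
    for i in range(0, len(samples), block):
        blk = samples[i:i + block]
        sigma = blk.std() + 1e-12            # slowly varying block statistic
        idx = np.abs(blk[:, None] / sigma - LEVELS).argmin(axis=1)  # 2-bit codes
        out[i:i + block] = LEVELS[idx] * sigma                      # decoded value
    return out

raw = np.random.default_rng(1).normal(scale=3.0, size=1024)
dec = baq_2bit(raw)
print("SQNR [dB]:", 10 * np.log10((raw**2).mean() / ((raw - dec)**2).mean()))
```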
These techniques only take advantage of the first-order statistics of the raw data; however, some approaches have been proposed in the past which try to exploit the correlation between the SAR raw data samples. Among such approaches is the possibility of applying the concept of transform coding, for example using the Fourier Transform, the Discrete Cosine Transform [5] or Wavelets [6], but usually these approaches are not adopted as they are too computationally complex.
The standard CCSDS 123.0 “Low-Complexity Lossless & Near-Lossless Multispectral & Hyperspectral Image Compression” describes an algorithm for the compression of multispectral images on board satellites, based on a DPCM scheme followed by an entropy coder.
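The DPCM principle underlying the standard can be sketched in miniature. This uses a single previous-sample predictor only, whereas the real standard uses an adaptive three-dimensional predictor plus an entropy coder:

```python
# Toy sketch of DPCM: predict each sample from its neighbour and encode
# only the residual, which is small and therefore compresses well.
import numpy as np

x = np.array([100, 102, 101, 105, 104], dtype=np.int32)
pred = np.concatenate(([0], x[:-1]))       # previous-sample predictor
residual = x - pred                        # what the entropy coder would see
print(residual)                            # [100   2  -1   4  -1]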
As the viability of this kind of algorithm for the compression of SAR captures was already acknowledged in papers such as [7], we tested the performance of this standard on SAR raw data in terms of rate and distortion.
We compressed the real and imaginary parts of the SAR raw data separately using this standard, and the obtained performance is equal to or better than that of the BAQ technique on a dataset of images of real-world scenes captured by the SIR-C/X-SAR mission [8].
The suitability of this algorithm for SAR raw data compression is very advantageous, as on satellites that capture both optical images and SAR data the same algorithm could be used to compress both types of data, instead of having to implement two different techniques.
REFERENCES.
[1] R. Kwok, W. T. K. Johnson, "Block adaptive quantization of Magellan SAR data", IEEE Trans. Geosci. Remote Sensing, vol. 27, pp. 375-383, July 1989.
[2] T. Algra, "Data compression for operational SAR missions using entropy-constrained block adaptive quantisation", in IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2002), vol. 2, Toronto, Canada, 2002, pp. 1135-1139.
[3] A. Moreira, F. Blaser, "Fusion of block adaptive and vector quantizer for efficient SAR data compression", in 1993 International Geoscience and Remote Sensing Symposium (IGARSS '93), vol. 4, Tokyo, Japan, 1993, pp. 1583-1585.
[4] I. H. McLeod, I. G. Cumming, "On-board encoding of the ENVISAT wave mode data", in Geoscience and Remote Sensing Symposium (IGARSS '95), 'Quantitative Remote Sensing for Science and Applications', Firenze, 1995, pp. 1681-1683, vol. 3.
[5] U. Benz, K. Strodl, A. Moreira, "A comparison of several algorithms for SAR raw data compression", IEEE Trans. Geosci. Remote Sensing, vol. 33, pp. 1266-1276, Sept. 1995.
[6] V. Pascazio, G. Schirinzi, "Wavelet transform coding for SAR raw data compression", in Proc. IGARSS, 1999.
[7] E. Magli, G. Olmo, "Lossy predictive coding of SAR raw data", IEEE Trans. Geosci. Remote Sensing, vol. 41, no. 5, pp. 977-987, May 2003.
[8] M. Zink, R. Bamler, "X-SAR radiometric calibration and data quality", IEEE Trans. Geosci. Remote Sensing, vol. 33, pp. 840-847, July 1995.
Earth observation (EO) data delivered by remote sensing satellites provides a basic service to society, with great benefits to civilians. The data is nowadays ubiquitously used throughout society for a range of diverse applications, such as environment and resource monitoring, emergency management and civilian security. Over the past 50 years, the EO data chain that has been mastered involves the acquisition of sensor data on board the satellite, its compression and storage on board, and its transfer to ground by a variety of communication means, for later processing on ground and the generation of the downstream EO image products. While the market is growing, the classical EO data chain creates a severe bottleneck, given the very large amount of EO raw data generated on board the satellite that must be transferred to ground, slowing down EO product availability, increasing latency, and preventing applications from growing in accordance with the increased user demand for EO products.
Increasing transmission throughput is one possible solution to the bottleneck problem. A different approach, which is establishing itself as a new trend for several space missions, is the implementation of processing capabilities on-board the satellite, with the goal of producing EO image products on-board that can be quickly and reliably transferred to ground given their relatively low data volume. They can also be used on-board to support increased autonomy. In recent years, several solutions have been proposed for the on-board processing of optical images, multispectral data and even SAR data, to be implemented on micro- and nano-satellites, LEO satellites and GEO satellites.
In this context, Artificial Intelligence is playing an increasing role in many aspects of space missions, and intelligent space agents, able to autonomously perform several operations such as data processing and command and control of the space system, have been proposed, designed and in some cases successfully used. The potential for space missions to use on-board decision-making has been proposed and in some cases demonstrated by, for example, several operations of the Autonomous Sciencecraft on EO-1 [Sherwood et al.] [Tran et al.] and Sensorweb [Davies et al.] tracking volcanoes, flooding and wildfires, Machine Learning to triage enormous data streams in radio (V-FASTR) [Burke et al.] and visual (i-PTF) astronomy, automated targeting on board the MER and MSL rovers (AEGIS) [Estlin et al.], automatic semantic indexing of science features (Mars Target Encyclopedia), and automation of data management for Rosetta Orbiter operations [Ferri&Sørensen].
In this paper, we provide an overview of the H2020 EU project EO-ALERT (see the EO-ALERT project website) and specify the preliminary on-board (OB) image generation and processing solution being developed within the project, which makes use of a combination of classical and AI solution concepts. The aim of the EO-ALERT project is the definition and development of the next-generation Earth observation (EO) data and processing chain, based on a novel flight segment architecture moving optimised key EO data processing elements from the ground segment to on board the satellite. In particular, we focus in this paper on the on-board image generation and processing algorithms, showing the capability of the system to produce EO image products on board with very low latency (minutes), in two main scenarios defined for testing the potential of the proposed high-speed data chain: ship detection and extreme weather monitoring.
The first step of the processing chain, identified as Image Generation (IG), is responsible for the conversion of the raw data captured by the optical sensor into a radiance calibrated and artefact-free image, which could optionally be sent to the ground as an intermediate EO product. The second step, identified as Image Processing (IP), is responsible for extracting the product information from the images, and producing a scenario related alert that will be transferred to the ground; since the size of the final alert will be considerably smaller than the acquired raw data, the EO product and alert will be available to the final user with very low latency, much lower than in the classical EO data chain.
The paper will describe the functional blocks of the proposed IG/IP chain as part of the overall avionics architecture. The chain is based on a combination of computer vision algorithms for efficient image generation and extraction of visual features from optical images, tailored to the specific scenario of interest and optimized for the lower computational capabilities of on-board hardware, in conjunction with machine learning algorithms for visual feature classification and discrimination of events of interest. In particular, a pre-trained Support Vector Machine (SVM), a popular classification tool that uses supervised machine learning theory to maximize predictive accuracy and whose efficient implementation in field-programmable gate arrays (FPGAs) has been proven by several studies, will be used as the final classifier.
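The division of labour implied here, training on ground and a cheap decision function on board, can be sketched as follows. The feature values and labels are synthetic stand-ins, and this is not the EO-ALERT classifier:

```python
# Sketch of the pre-trained-SVM idea: fit an SVM offline on visual feature
# vectors, then run only the inexpensive decision function on board.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
ship = rng.normal(loc=[0.8, 0.6], scale=0.1, size=(100, 2))  # e.g. contrast, elongation
sea = rng.normal(loc=[0.2, 0.2], scale=0.1, size=(100, 2))
X = np.vstack([ship, sea])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="linear").fit(X, y)             # trained on ground
print(clf.predict([[0.75, 0.55], [0.1, 0.3]]))   # on-board decision: [1 0]
```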
The paper will provide examples of the performance of the on-board IG/IP algorithms using raw data acquired by the DEIMOS-2 (sub-meter) and Meteosat Second Generation (MSG) payloads of these satellite missions, comparing the on-board solution against the ground-based classical solution. The examples are presented to show the detection capabilities of the system in the two proposed scenarios. The paper will also present an analysis of the avionics needs for the implementation of the proposed solution, showing its feasibility in an FPGA-based on-board avionics architecture.
[Sherwood et al.] “Autonomous Science Agents and Sensor Webs: EO-1 and Beyond” Rob Sherwood, Steve Chien, Daniel Tran, Benjamin Cichy, Rebecca Castano, Ashley Davies, Gregg Rabideau
[Tran et al.] “The Autonomous Sciencecraft Experiment Onboard the EO-1 Spacecraft” Daniel Tran, Steve Chien, Rob Sherwood, Rebecca Castano, Benjamin Cichy, Ashley Davies, Gregg Rabideau
[Davies et al.] “Artificial Intelligence in the NASA Volcano Sensorweb: Over a Decade in Operations” Ashley G. Davies, Steve Chien, Joshua Doubleday, Daniel Tran, David Mclaren
[Burke et al.] “Limits on Fast Radio Bursts from Four Years of the V-FASTR Experiment” S. Burke-Spolaor, Cathryn M. Trott, Walter F. Brisken, Adam T. Deller, Walid A. Majid, Divya Palaniswamy, David R. Thompson, Steven J. Tingay, Kiri L. Wagstaff, and Randall B. Wayth
[Estlin et al.] “Automated Targeting for the MER Rovers” Tara Estlin, Rebecca Castano, Benjamin Bornstein, Daniel Gaines, Robert C. Anderson, Charles de Granville, David Thompson, Michael Burl, Michele Judd and Steve Chien
[Ferri&Sørensen] “Automated Mission Operations for Rosetta” Paolo Ferri and Erik M. Sørensen
Compression of multispectral and hyperspectral images is becoming increasingly important as the spatial and spectral resolution of the instruments keeps increasing, and new techniques are needed to cope with the resulting data rates. Recently, significant work has been devoted to methods based on predictive coding for on-board compression of hyperspectral images, supported by the new draft CCSDS 123.0 recommendation for lossless and near-lossless compression. While lossless compression can achieve high throughput, it only achieves limited compression ratios. Introducing a quantizer and a local decoder in the prediction loop enables lossy compression with good rate-distortion performance. However, the need for a locally decoded version of a causal neighborhood of the pixel currently being coded significantly limits the throughput such an encoder can achieve.
In this work, we study the rate-distortion performance of a significantly simpler and faster on-board compressor based on prequantizing the pixels of the hyperspectral image and applying a lossless compressor (such as the lossless CCSDS 123.0) to the quantized pixels. While this is suboptimal in rate-distortion terms compared to an in-loop quantizer, we compensate for the lower quality with an on-ground post-processor that models the distortion residual with a convolutional neural network. The task of the neural network is to learn the statistics of the quantization error and apply a complex dequantization model to restore the image.
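The following sketch illustrates the prequantization idea under simplifying assumptions: a uniform scalar quantizer with maximum absolute error D, with zlib standing in for the CCSDS 123.0 lossless coder, and without the on-ground CNN restoration stage.

```python
# Sketch of the prequantize-then-lossless-compress idea, with zlib as
# a stand-in for the CCSDS 123.0 lossless compressor (not assumed to
# be available here). A uniform quantizer with maximum error D
# replaces the in-loop quantizer of near-lossless predictive coding.
import zlib
import numpy as np

def prequantize(pixels: np.ndarray, D: int) -> np.ndarray:
    """Uniform scalar quantization with step 2*D + 1 (max abs error D)."""
    return np.round(pixels / (2 * D + 1)).astype(np.int32)

def dequantize(indices: np.ndarray, D: int) -> np.ndarray:
    return indices * (2 * D + 1)

rng = np.random.default_rng(1)
cube = rng.integers(0, 4096, size=(16, 64, 64))   # toy hyperspectral cube

for D in (0, 1, 3, 7):
    q = prequantize(cube, D)
    compressed = zlib.compress(q.tobytes(), 9)
    err = np.abs(dequantize(q, D) - cube).max()
    print(f"D={D}: {len(compressed)} bytes, max error {err}")
```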
The most important challenge underpinning the transition to the next generation of space mission design is the discrepancy between the dramatic increase in observation rates and the marginal increase in downlink capacity. This forces a shift from the traditional "acquire-compress-transmit" paradigm to highly efficient, intelligent on-board processing of observations, minimising downlink requirements while respecting the limitations in power and bandwidth resources. Solar Orbiter (SO), an ESA/NASA mission, is a milestone in both the technological and the scientific sphere.
SO is devised to study the connection between the Sun and the heliosphere, with particular interest in open issues such as the sources of solar wind streams and turbulence, heliospheric variability, the origin of energetic particles and the solar dynamo. The science payload, the result of a large international consortium, is designed to link in-situ and remote sensing observations and is composed of ten suites of instruments including spectrometers, imagers, and wave and particle instruments. In particular, the plasma suite Solar Wind Analyzer (SWA) comprises the Proton-Alpha Sensor (PAS), the Electron Analyzer System (EAS) and the Heavy Ion Sensor (HIS), together with the Data Processing Unit (DPU). It will provide high-resolution 3D velocity distribution functions of ions and electrons, together with ion composition, necessary to infer the thermal state of the solar wind and its source regions, identify structures such as shocks, CMEs and other transients, and determine the link between particle dynamics and waves. SO will explore distance and latitude combinations that thus far remain unexplored, even accounting for the old Helios and upcoming Parker Solar Probe observations.
These opportunities come with heavy constraints, such as the limited downlink bandwidth available to SWA, so that the full set of raw particle data collected cannot be transmitted to ground. On-board data processing is therefore used to evaluate concise scientific properties of the solar wind, the moments of the particle distributions, making it feasible to transmit full distributions only at low cadence. Processing is then applied again to these distributions to meet the required lossless compression ratios (2-8).
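As an illustration of the kind of data reduction involved, the sketch below computes the first three moments (density, bulk velocity, temperature proxy) of a toy gridded velocity distribution function; the grid, units and instrument response are simplified assumptions, not the SWA flight algorithms.

```python
# Illustrative computation of plasma moments from a gridded 3D
# velocity distribution function f(v); a drifting Maxwellian is used
# as toy input.
import numpy as np

v = np.linspace(-500e3, 500e3, 32)              # velocity axis [m/s]
dv = v[1] - v[0]
vx, vy, vz = np.meshgrid(v, v, v, indexing="ij")

# Toy Maxwellian drifting at 400 km/s in x with 50 km/s thermal speed
f = np.exp(-((vx - 400e3)**2 + vy**2 + vz**2) / (2 * (50e3)**2))

d3v = dv**3
n = f.sum() * d3v                            # density (0th moment)
ux = (vx * f).sum() * d3v / n                # bulk velocity (1st moment)
pxx = ((vx - ux)**2 * f).sum() * d3v / n     # velocity variance (2nd moment)
print(n, ux, pxx**0.5)                       # thermal speed ~ 50 km/s
```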
Another step towards the aforementioned paradigm shift is represented by the SWA Book-Keeping Algorithm (BKA), which has been designed to ensure that the individual sensors remain within the allocated telemetry rate on an orbit-averaged basis. The philosophy of the SWA book-keeping scheme has been applied to all instruments, with ESOC's Operations Team introducing the concept of Operations Telemetry Corridors (OTC) to finely tune the rate of telemetry generation by the instruments.
This paper introduces the analyses performed while developing a hyperspectral compression technique based on a novel solution able to satisfy the needs of, and thus be a candidate for, the next generation of hyperspectral missions (e.g. the forthcoming Sentinel-10).
The proposed solution leverages the achievements of the OP3C (On-board Processing for Compression and Clouds Classification) technique, evolving it from both the technological and the algorithmic point of view. OP3C is now able to exploit the information content of the acquired data (e.g. their classification) to improve compression efficiency. The improved algorithm exploits a-priori knowledge to support and improve the on-board real-time processing tasks. The information used on-board consists of estimates supporting classification and unmixing (i.e. abundance fraction estimation for each image pixel), based on background knowledge of the hyperspectral data. Such information can be used directly as a result or can support the image compression task; the availability of these preliminary estimates on-board makes innovative knowledge-based compression techniques possible. Knowledge-based compression can also be used for an improved understanding of the image content, enabling real-time adaptation of the acquisition plan or even the transmission of only the relevant semantic information to the ground. This, in turn, makes it possible to expand mission objectives (e.g. real-time event detection, object detection), without any compromise in the quality or quantity of the acquired images.
Furthermore, the image content evaluation opens up a number of possible decisions, such as whether the information content of the acquired images justifies their memory and download bandwidth consumption or, on the contrary, the cloud cover exceeds the usability threshold, effectively assigning priorities for access to limited resources.
The outcome of the analyses complements the design and development of spaceOP3C as the next-generation on-board data processing integrated solution, composed of:
• a complete on-board SW solution for multispectral/hyperspectral satellites, implementing the evolved OP3C payload data compression techniques:
o enhanced feature detection functionality (from cloud cover detection to image classification);
o improvement of the baseline algorithm by means of "knowledge-based" compression techniques;
• a complete HW/SW partitioning scheme, aimed at the development of a custom configured FPGA module dedicated to the most demanding processing steps.
CHEOPS, the Characterizing Exoplanets Satellite, is a Swiss-led ESA S-class mission to be launched in early 2019. It is an optical space telescope in a polar orbit that carries out ultra-high-precision photometry to provide the radii of transiting exoplanets.
The University of Vienna (UVIE), together with PnP Software, has developed the Instrument Flight Software (IFSW). This software controls the instrument and processes the science data in real time. It is fully-fledged ECSS software implementing over 100 ECSS TM/TC services; the implementation of the services and the state machines is aided by the CORDET framework library. The data processing tasks on the 'system side' range from recognizing the target star and providing centroiding information to the spacecraft in a closed loop, to thermal control and a wealth of FDIR procedures. On the 'science side', we have implemented a very flexible on-board data processing chain, with heritage from the Herschel on-board reduction software, which combines lossy and lossless compression tasks in a way that allows us to optimize the scientific content given the limited telemetry resources.
The IFSW is tightly fitted to the flight hardware, which uses a GR712RC dual-core LEON3 with 64 MiB RAM and 16 GiB of FLASH. SpaceWire interfaces are used toward the detector unit, and communication with the S/C is via Milbus. No operating system is used; instead we work directly on the hardware, with both cores operating in parallel.
We will present the approach that we took towards specification, design, implementation and qualification, talk about the lessons learned and what we are going to re-use or change for our current SW development projects, especially for SMILE.
The complete CHEOPS IFSW sources are available under the GPLv2 license.
In the flight software projects carried out at the University of Vienna, we prefer to work as close to the hardware as possible. The faster pace of the projects we face, coupled with the need to adapt early to next-generation processing architectures, has led us to develop a lightweight operating system which offers the capabilities we need without putting unnecessary restrictions on the software.
LeanOS emerged out of an evaluation study of the next-generation payload data processor, the Scalable Sensor Data Processor (SSDP), and has subsequently been improved to support LEON-based platforms in general.
The primary design philosophy behind LeanOS is for it to "get out of the way". This means that all components are implemented such that users ideally select their desired configuration of functionality and hardware drivers and supply an Application Software executable that is automatically loaded once the boot process is complete. If needed, library functions are provided which enable interaction with operating system services, such as thread creation and memory allocation.
LeanOS is designed with Symmetric Multiprocessing (SMP) in mind and can be configured to support the SPARC Reference Memory Management Unit (SRMMU), including paging and virtual memory for address space separation. It comes with Loadable Kernel Module support, offering a means of dynamic reconfiguration and driver updates even at run-time. The kernel module infrastructure also provides an easy-to-use way of implementing custom extensions to the OS, for example for custom hardware features such as mass memory or ADCs, without the need to fully understand its intricacies. This approach reduces the time and effort needed to re-use higher-tier application software, as abstraction layers can be introduced that present an identical interface over changing underlying components while maintaining a short path to the hardware.
LeanOS will see its first full in-flight use in the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) instrument flight software (IFSW), which is currently being developed. A subset of its components was already used in the CHEOPS IFSW.
Both IFSWs are open-source developments of the University of Vienna.
Earth observation (EO) has been and still is a very complex topic in the space engineering field. Yet there is a growing number of new requirements and needs for new EO nano- and microsatellites, working either standalone or in synchronized constellations. Higher downlink data rates, improved image quality and enhanced on-board computing capabilities are only a few examples of the challenges to be faced in this discipline. Furthermore, the need to speed up time to market and reduce development costs has sparked the use of modular rather than monolithic solutions for payload computer design, where purpose-specific components - hardware and software - are reused and assembled for different spacecraft and missions. In this work, we present a novel modular approach to on-board image processing based on a distributed computing setup with function-specific computers connected to an on-board CAN bus using the CANopen protocol. Optimal use of the resources in the network is achieved by a two-fold design: FPGAs are used for data processing and compression, and a high-performance software library is integrated for data handling and data communications. The feasibility of this approach is tested in a two-node network for an EO payload, with an image capture-and-compress node and an Earth-link node. Results are presented for different scenarios based on image size and performance parameters. Moreover, the reuse of this novel approach in other image-based applications and designs is discussed; in particular, its applicability to vision-based navigation solutions is also presented in this paper.
Space missions are continuously increasing in complexity, especially as spacecraft explore unknown terrains and encounter unpredictable situations. This complexity is compounded by the extreme distances some spacecraft operate from Earth, which makes real-time ground control impractical. Higher levels of autonomy and greater on-board navigational precision are needed to solve these issues.
Machine learning is currently an active area of research in the automotive and other industries, where it is used in conjunction with electro-optical sensing systems for object detection, classification and tracking, enabling cars to build a scene of their surroundings. This enables a wide range of automation levels for these vehicles, from advanced driver assistance systems (ADAS) to fully autonomous driving.
Can machine-learning techniques be similarly applied to space systems to improve the capabilities of the spacecraft, while simultaneously overcoming cost and resource challenges of today’s complicated missions? A fundamental challenge to this approach is the traditional conservatism of the industry, which has historically favored reliability and testability over performance.
This talk will discuss how machine learning is already seeing applications in the industry and explore some methods for overcoming the obstacles, such as verification and validation concerns, preventing its further adoption.
OPAZ is a new-generation Earth observation system with an embedded optical payload designed to fly on Zephyr-S, the stratospheric UAV developed by Airbus Defence and Space. The OPAZ prototype has been used several times in an operational context. First, test flights of a few hours were performed on stratospheric balloons. Then a successful long-duration demonstration was made with a flight at 20 km altitude lasting over 25 days on the Zephyr-S in July 2018, establishing a world endurance record. This flight was only the first of many scheduled.
The OPAZ payload avionics and data processing chain is built around COTS components (HW and SW) and common state-of-the-art industrial standards. The on-board coupling of the avionics and data processing chains enables advanced user functionality, such as enhanced high-resolution pictures or live video streaming with active stabilization, and excellent agility to rapidly select the field of view. The OPAZ payload also acquires and processes AIS signals, and other sensors could be added. Exploiting the rich software environment available on the chosen on-board processing devices, this development demonstrates the benefit of using commercial standards to obtain high-quality, added-value services on-board with limited development effort. Such a project shows the innovations and benefits that the use of COTS components and standards for payload data processing and avionics can enable, for instance on Earth observation satellites.
Deep neural networks (DNNs) have achieved unprecedented success in a variety of pattern recognition tasks, including medical imaging, speech recognition, image colorization, and satellite imaging. The extremely rapid development of remote sensors has made the acquisition of hyperspectral image (HSI) data, with up to hundreds of spectral bands over a given spatial area, much more affordable. However, efficient analysis, segmentation, transfer and storage of such imagery has become a big issue in practical applications, and is currently being faced by the machine-learning and image-processing communities worldwide. It is especially important in hardware-constrained environments (e.g., on board a satellite), where the resource frugality of classification engines as well as memory and transfer efficiency are important real-life concerns. In this talk, we will focus on techniques aimed at reducing the size of HSI data, and verify how they affect HSI segmentation accuracy using both conventional machine-learning approaches (including support vector machines and decision trees) and DNNs. The latter models include spectral and spatial-spectral DNNs, which either benefit exclusively from the spectral information of the pixel being classified or exploit both spatial and spectral characteristics. Finally, we will verify whether multispectral imagery (simulated using hyperspectral data) is sufficient to accurately segment regions of interest. Our rigorous experiments were performed using benchmark datasets (Pavia University and Salinas Valley) acquired by the ROSIS and AVIRIS sensors. The results obtained using our methods were compared with the current state of the art and underwent thorough statistical tests.
PLATO (Planetary Transits and Oscillations of Stars) is an ESA M-class mission which is currently being built. Its scientific goal is the discovery of a large number of exoplanetary systems, down to terrestrial planets, by means of photometric transits. To do so, the spacecraft carries 26 cameras to cover a large part of the sky, each consisting of 4 CCDs with 4510x4510 pixels. These are read out at equal intervals of 25 seconds to measure changes in stellar brightness.
Several Data Processing Units (DPUs) are used to extract many thousands of smaller windows (imagettes) containing the observed stars, reducing the amount of data from several gigabytes down to 25 MB for each exposure. After this step, the remaining data are sent to the Instrument Control Unit (ICU), where they are losslessly compressed. The amount of data to compress is still too large to be processed within the available CPU resources, hence a specialized hardware data compressor was developed using an RTAX-2000 Field Programmable Gate Array (FPGA).
This hardware compressor, which ensures fast compression of the data, is realized as a separate electronics board developed in a collaboration between IWF Graz and the University of Vienna. Even though the compression algorithm is implemented in hardware, we are still limited by the speed of the interfaces and the amount of local memory, and we had to consider these restrictions when devising the algorithm.
Basically, the implemented compression decorrelates the data temporally with a running average that has an exponential tail, resulting in an almost geometric distribution of the residuals. This is a suitable input for the Golomb encoder, which we chose because it allows live code generation, which is faster in an FPGA than using look-up tables. The set of parameters that control the filter and the encoder is semi-adaptive, i.e. the parameters adjust to the data at certain intervals.
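A minimal sketch of this scheme is shown below: an exponentially weighted running average serves as predictor, and the mapped residuals are Golomb-Rice coded. The parameters (alpha, k) are illustrative assumptions; the semi-adaptive parameter update and the FPGA-specific live code generation of the flight design are not reproduced.

```python
# Sketch: exponential-tail running average as predictor, plus
# Golomb-Rice coding of the zigzag-mapped residuals; parameter values
# are illustrative, not the flight settings.
import numpy as np

def golomb_rice(value: int, k: int) -> str:
    """Unary quotient + k-bit remainder; suited to geometric residuals."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(residual: int) -> int:
    """Map signed residuals to non-negative integers."""
    return 2 * residual if residual >= 0 else -2 * residual - 1

rng = np.random.default_rng(2)
samples = (1000 + rng.normal(0, 8, 256)).astype(int)   # toy pixel time series

alpha = 0.25          # weight of the newest sample in the running average
avg = float(samples[0])
bits = ""
for s in samples:
    residual = int(s - round(avg))
    bits += golomb_rice(zigzag(residual), k=3)
    avg = alpha * s + (1 - alpha) * avg    # exponential-tail running average

print(f"{len(bits) / len(samples):.1f} bits/sample vs 16 raw")
```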
We show how the compressor is implemented and tested. We also explain the details of the algorithm and the implementation specifics in the FPGA.
The space market is changing with the emergence of new kinds of applications and new business models based increasingly on private funding. The space industry needs to answer different kinds of challenges, in different market segments, while starting from the same structures and teams. Indeed, on top of the more traditional class 1 or 2 projects driven by GEO operators and space agencies, there is much more demand for alternative solutions that achieve significant cost reductions and better time to market, to address the ramp-up of LEO constellations and the explosion of space vehicles (launchers, rovers, robots, ...). COTS solutions are increasingly used in space applications to face this trend, but they raise concerns as soon as the orbit, mission duration or customer requirements change. Following this trend and playing in all market segments while reusing most of the electronic subsystem is thus an ever-greater challenge. Microchip is going to introduce a unique ARM processor System on Chip (SoC) offering in the space electronics market, with the possibility to reuse hardware and software developments from an automotive COTS design up to the highest level of radiation performance and space-grade qualified solutions.
Today, most space actors face these challenges: addressing different kinds of demand and requirements, looking for COTS solutions to reduce costs and lead time, while continuing to serve more traditional space demand with the right level of quality and robustness.
To support this new market situation and enable more possibilities for our customers, Microchip is proposing a unique, scalable product portfolio for space applications to address the different kinds of projects and to capitalize on proven COTS architectures and devices. In this presentation, we take the example of the SAMV71, a high-end automotive ARM microcontroller that Microchip will propose in three different variants to address the full market demand based on the same SoC architecture. This unique approach enables full hardware scalability while reusing software developments and limiting system changes for the end customer.
In compute-intensive space applications, the current dilemma is the following: how can one perform very complex and powerful data processing in space, while:
- ensuring a decent level of radiation tolerance,
- contributing to SWaP optimisation (Size, Weight & Power, translating here into at least a limited power consumption of the computing platform),
- having either significant space heritage or at least sufficient advantages and risk mitigation techniques to convince space system designers?
This paper will first introduce the two solutions proposed by Teledyne e2v for using state-of-the-art, powerful processing solutions in space payload applications:
- A Teledyne e2v space-specific and qualified version of the Qormino® Common Computer Platform, based on the NXP LS1046 (quad-core ARM Cortex-A72 @ 1.8 GHz) with 4 GB of DDR4, on a custom Teledyne e2v proprietary substrate.
- A space-qualified NXP LS1046 used as a standalone microprocessor (quad-core ARM Cortex-A72 running at 1.8 GHz).
Further to these product descriptions, this paper will describe the dedicated processes that Teledyne e2v will put in place to de-risk and validate the use of such innovative solutions in space applications:
- Radiation testing
- Mitigation of radiation effects (SEU)
- Vibration testing
NanoXplore is a privately owned fabless company based in France, created by veterans of the semiconductor industry with roughly 30 years of experience in the design, test and debugging of FPGA cores. This talk will outline NanoXplore's Radiation Hardened By Design (RHBD) FPGA solutions and address both DSP and embedded processing features from the hardware and the software side. A special focus will be put on NXcore, our IP core generator, and NXscope.
The increasing interest in deep-space exploration, the commercial use of small satellites, and in-situ information value extraction on space systems (e.g. CubeSat constellations, rovers, Earth observation satellites) require more on-board data processing (OBDP). There are several reasons for this. Space applications, especially constellations and robotic missions, will require a high degree of autonomy, intelligent task planning, data processing (e.g. image processing) and data dissemination. The sensors used will in most cases generate more data than the cross-links or downlink can handle, which drives the need for local reasoning using artificial intelligence and classical analytical models.
In this study, we explored radiation-tolerant intelligent on-board processing systems accelerated with peripherals (e.g. GPU, FPGA, DSP) which take advantage of a new heterogeneous computer architecture, the Heterogeneous System Architecture (HSA), in terms of decreasing compute latency and increasing data transfer bandwidth. The study continues prior work commercialized by Unibap AB, with flight heritage, and selected by NASA for on-board processing in the "HyTI" thermal hyperspectral mission. This presentation presents the results of investigations addressing the capabilities of future on-board big data processors. The experimental study covers a performance analysis using image recognition algorithms, the open standard OpenVX, and the open-source machine learning library MIOpen. Finally, we discuss the usability of our method in OBDP with regard to heterogeneous computing.
The results show that heterogeneous architectures, especially GPUs, can deliver significant improvements in compute efficiency. Heterogeneous GPU-accelerated on-board processing achieves a 238-times reduction in compute time and uses approximately 13.5 times less energy compared to traditional CPU-centered processing. In addition, the heterogeneous computing method shows 20-70% improvements in the schedulability of the entire application system under different assumptions.
On-board processing has gained ever-increasing acceptance within the last twenty years. To an increasing extent, payload data are processed, formatted, analysed, compressed and encrypted on-board; new features and functionality are being transferred to the spacecraft. The dramatic growth of processing performance adds to this momentum. The functional spectrum of on-board activities spans from motor control to image processing, cryptography and data interpretation. In this context, after a long development period by Airbus GmbH and ISD S.A., the high performance data processor (HPDP) has become a reality and the first tests have already been performed. In recent years, several tests with different kinds of algorithms have been run to evaluate the performance of the HPDP. This paper presents a summary of activities to date and the results of the tests performed.
NOGAH space systems pack multiple RC64 processor chips and COTS components, mounted on multiple PCBs, in space-ready enclosures.
RC64 is a rad-hard, high-performance, low-power manycore combining 64 DSP cores, a large (4 MB) shared memory, a hardware scheduler, and twelve serial links achieving the fastest-ever SpaceFibre data rate. RC64 achieves 16 GFLOPS and 32 fixed-point GMACs, dissipating 0.5 W to 5 W depending on computing load and I/O activity, demonstrating highly competitive performance per watt. RC64 supports up to 4 GB of error-corrected DDR3 memory and 100 Gbit/s high-speed I/O. High-performance interfaces to other components, such as FPGAs, ADCs and DACs, are provided. RC64 contains strong FDIR means and is designed to protect itself, as well as attached COTS devices such as memories, against all space hardships. As a result, RC64 can recover from almost all types of SEFI. RC64 is the fastest, largest, lowest-power processor in the global rad-hard, high-reliability arena. It is available off-the-shelf, eliminating the risks of development time and cost.
RC64-based NOGAH systems are preferred over COTS-based solutions because NOGAH and RC64 do not require intricate measures to assure reliability, development of NOGAH systems takes less time and fewer resources, and the result is economically competitive and more reliable. A rich selection of software enables fast and reliable construction of space systems for on-board processing.
NOGAH systems are based on multiple standard formats of VPX boards and enclosures, or custom form factors, and on high-bandwidth SpaceFibre- and SpaceWire-based connectivity. RC64, as well as space-grade and COTS devices, can be combined in a flexible manner, delivering high-performance supercomputing in space while consuming very little power. Multiple RC64 and other space-grade processors provide extremely high reliability and extensive FDIR, mutually protecting each other and assuring SEFI-free system operation.
The NOGAH software delivery includes software development tools (SDT), the real-time Ramon Chips Executive (RCEX), libraries for a variety of uses, and reference applications. The SDT comprises a standard C compiler, a task-graph compiler (for expressing parallelism), a debugger, simulators, a profiler and an event recorder, among other components. RCEX includes the needed run-time support of I/O and of the task API, in statically linked structures. Libraries provide system services and support for applications, including mathematical, data processing, DSP and machine learning primitives, as well as advanced functions such as modems. Reference applications are delivered in source code, to enable users to adapt the code to their unique applications. These references cover areas such as communications, earth observation, navigation, robotics, and artificial intelligence inference machines in the form of neural networks for domains such as communications, computer vision and autonomy. NOGAH/RC64 applications exploit the manycore shared-memory parallelism available within RC64 processors, as well as the distributed message-based parallelism spanning multiple RC64 chips.
Dense high-speed SpaceFibre networks interconnect the multiple RC64 chips and other devices (especially FPGAs) on each card, and also provide tight high-speed connectivity between separate cards, whether in the same enclosure or elsewhere in the satellite. This flexible infrastructure, combined with extensive networking libraries and APIs, enables efficient mapping of arbitrary networks onto the physical nets in a Software-Defined Network (SDN) manner. Thus, arbitrary distributed algorithms can be implemented flexibly on NOGAH systems, and can be modified and upgraded at will while in orbit.
A NOGAH architecture for Earth Observation Satellites (EOS) is demonstrated. One card employs a GR712RC (dual-core LEON3FT, running a cyber-protected operating system) and protected Non-Volatile Memories (NVM, either EEPROM or FLASH) to control and protect the other cards and to lead FDIR activities. Camera interface cards contain FPGAs for flexible connectivity to various cameras and read-out modes, in addition to RC64 chips for data processing, analytics, compression, storing in NVM, and preparation for downlink transmission. When needed, DAC devices are included to interface analog image sensors. In optical imaging, RC64 tasks may include Time Delay and Integrate (TDI) processing, color/multi-spectral analysis, change detection, and neural-network-based computer vision algorithms for analytics and recognition. In hyperspectral imaging, RC64 offers real-time object/material recognition based on high-speed correlation of the pixels with pre-specified spectra. In SAR payloads, RC64 is used either for data reduction and compression or for converting the signal to images and applying further processing. RC64 is also useful for beam-forming in SAR.
In the framework of the Ramon Chips-Thales Alenia Space FOSTER collaboration project, RC64-based NOGAH systems are being evaluated and demonstrated for telecommunication satellites. ESA DSP benchmarks have been implemented, evaluated and reported. Algorithmic libraries, e.g. for channelization, beam-forming and modems, have been developed. Interference detection and mitigation are being investigated.
Technolution is a technology integrator that deploys multidisciplinary expertise in an effective way to find the best technology solution for its customers. We develop, among other things, high-speed digital signal processing electronics, programmable logic, and embedded hardware and software solutions for imaging, video, semiconductor and security applications. In some solutions we (re)use our softcore IPs for FPGA and ASIC: the FreNox RISC-V processor and the Xentium Digital Signal Processor (DSP).
The multitude of high-speed processing applications all require multi-core processor architectures, with varying requirements on real-time performance, flexibility, safety and security. We will present multiple hardware design cases: real-time imaging for e-beam lithography in semiconductor lithography machines, which is massively parallel, highly configurable and generates a net output data stream of 3 Tbit/s; and our security platform JelloPrime, which uses a softcore version of the FreNox RISC-V processor for all configuration and control, with dedicated hardware acceleration for data encryption/decryption.
Our Xentium DSP and fault-tolerant Network-on-Chip (NoC) IP have been designed targeting the next-generation DSP roadmap for Space. Next generation payload processing ASICs for space applications have to be programmable, high performance and low power. Moreover, the digital signal processors have to be tightly integrated with space relevant interfaces in heterogeneous System-on-Chip (SoC) solutions with the required fault tolerance and radiation hardness.
Combining our multidisciplinary expertise in high-speed and secure digital processing (for demanding non-space applications) and reusing our softcore IP building blocks (Xentium DSP, FreNox RISC-V, NoC), we present how to create multi-processor architectures for on-board payload data processing applications. We also address the programming aspects of such distributed multi-processor architectures.
A regenerative payload provides improved performance, reduced latency, support for mesh connections, simplified implementation for non-GEO constellations, and better usability in comparison to basic bent-pipe designs.
On the other hand, it may require more processing power on board, in addition to the main problem of a future-proof design: ensuring that the communication protocols required by the users are supported along the whole life cycle of the satellite. With the introduction of a software-defined radio ASIC able to support large bandwidths in both the uplink and downlink directions, the implementation of a future-proof regenerative payload is closer than ever. The SX-4000 SDR ASIC, designed by Satixfy for payloads, provides a solution for these requirements by introducing a flexible, future-proof, rad-hard, software-defined architecture using a low-power modern silicon process.
For a long time, FFT processing was avoided on-board due to the heavy load it places on general-purpose processors. Nowadays several FFT IP cores are available, e.g. for the Xilinx Virtex-5 FPGA, as well as independent FFT IP cores, but most of them lack accuracy or support only a limited FFT size.
ESA developed, in co-operation with Astrium D&S and Atmel, the SkyFFT ASIC: an FFT processing core in rad-hard technology, capable of FFT processing at 100 mega-samples per second (32-bit I and 32-bit Q input samples in parallel), which has been available since 2015.
Today an EM model has become available that shows all capabilities (and modes) of the SkyFFT at full-speed performance and is ready for demonstration. The EM model consists of an RTG4 FPGA for control, a SpaceWire command interface and two SpaceFibre data interfaces.
On-board FFT processing is therefore now available for selection in missions.
Embedded GPUs can provide significant computational power within a low-power envelope for large amounts of data. For this reason, they are employed in a wide range of embedded devices that benefit from these properties, from handheld devices to autonomous cars and prototype supercomputers. This widespread COTS technology opens a window of opportunity to satisfy the ever-increasing performance needs of on-board data processing in space, both for increased autonomy of future missions and for more advanced data processing and analysis.
In this talk we present preliminary results obtained in the framework of the GPU4S (GPU for Space) activity, funded under the ITT AO9010 ESA call on Low Power GPU Solutions for High Performance On-Board Data Processing. We benchmark several latest-generation embedded GPUs which have the potential to satisfy on-board performance and power requirements, using common algorithm building blocks extracted from a variety of space application domains such as Earth observation, telecom, radar processing and vision-based navigation, and present the obtained results. This work is one of the first benchmarking reports on the examined embedded GPU platforms, including Nvidia's recently released Jetson Xavier (2018 Q4), which is used in autonomous vehicles.
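For illustration, a harness over such building blocks might look like the sketch below, here with numpy stand-ins running on the CPU; the actual GPU4S benchmarks run tuned GPU implementations of the kernels on the embedded platforms, and the kernel sizes shown are arbitrary choices.

```python
# Illustrative timing harness for algorithm building blocks of the
# kind benchmarked in GPU4S (matrix multiply, FFT, convolution),
# using numpy CPU stand-ins rather than GPU kernels.
import time
import numpy as np

def bench(name, fn, repeats=10):
    fn()                                   # warm-up run
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    dt = (time.perf_counter() - t0) / repeats
    print(f"{name:>12}: {dt * 1e3:8.2f} ms")

a = np.random.rand(1024, 1024).astype(np.float32)
b = np.random.rand(1024, 1024).astype(np.float32)
sig = np.random.rand(1 << 20).astype(np.float32)

bench("matmul", lambda: a @ b)
bench("fft", lambda: np.fft.fft(sig))
bench("convolve", lambda: np.convolve(sig[:4096], sig[:128], mode="same"))
```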
PLATO (PLAnetary Transits and Oscillations of stars) is an M-class mission of the European Space Agency's Cosmic Vision programme, whose launch is foreseen by 2026. The mission objective is to detect and characterize exoplanets via the transit method, and to characterize the mass and age of their host stars through asteroseismology.
The PLATO payload is based on a multi-telescope approach, consisting of 24 normal cameras for stars fainter than magnitude 8 and two fast cameras for very bright targets with magnitude 4-8. In order to provide high precision photometry, a precise and stable pointing needs to be achieved. Therefore, the two fast cameras are also utilized as fine guidance sensors to provide highly accurate attitude measurements to the S/C AOCS.
Following a classic star tracker approach, the attitude is calculated on-board using measured star directions and their corresponding nominal directions from a star catalog. The star directions are determined from guide star centroids and a calibrated camera model. Due to the strict accuracy requirements, a new elaborate centroid algorithm has been developed and implemented on an MDPA/LEON2-FT platform. This paper describes the PLATO Fine Guidance System algorithm, its implementation, and the verification procedure.
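As a baseline for what such an algorithm computes, the sketch below implements simple intensity-weighted centroiding on a background-subtracted star window; it is an illustrative first-order method only, and the elaborate PLATO FGS algorithm (which achieves considerably higher accuracy) is not reproduced here.

```python
# First-order centroiding baseline: intensity-weighted centre of mass
# on a background-subtracted window; illustrative only.
import numpy as np

def centroid(window: np.ndarray) -> tuple[float, float]:
    img = window - np.median(window)        # crude background removal
    img = np.clip(img, 0, None)
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (ys * img).sum() / total, (xs * img).sum() / total

# Toy star: Gaussian PSF centred at (12.3, 8.7) plus noise
ys, xs = np.mgrid[0:24, 0:24]
star = 1000 * np.exp(-((ys - 12.3)**2 + (xs - 8.7)**2) / 4.0)
frame = star + np.random.default_rng(3).normal(100, 2, star.shape)
print(centroid(frame))   # approximately (12.3, 8.7)
```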
GMV works on many activities aimed at developing, validating and verifying, up to TRL 5, advanced GNC and image processing algorithms for autonomous on-board vision-based navigation in descent and landing scenarios at small bodies. For the last year GMV has been developing, under an ESA activity, the Engineering Model (EM) of the Vision-Based Navigation Camera (VBNC) for the Phobos Sample Return (PhSR) mission, which is validated and verified in form, fit and function in a representative environment to reach TRL 6. The VBNC solution is based on detection and tracking of remarkable features in images of the Phobos surface. Wide trade-offs were performed on the optimal algorithms and on-board HW processing architecture, based on high-fidelity closed-loop simulation and breadboarding on representative flight avionics. The project involves HW development of the VBNC elements in a high-performance avionics architecture, including HW/SW implementation of the VBN algorithms. The navigation sensor includes the Engineering Model (EM) of the Image Processing Board (IPB) and an Elegant BreadBoard (EBB) of the Camera Optical Unit (COU).
The IPB-EM is the main contribution of this activity. The IPB includes two FPGAs: a small, reliable rad-hard European FPGA dedicated to the interface control unit and monitoring of the IPB, and a large rad-hard FPGA performing the complex processing of images to extract navigation data. The interface FPGA selected is the fully rad-hard European BRAVE FPGA. The large processing FPGA copes with very demanding on-board image processing, hence a high-performance, high-density rad-hard V5QV FPGA is selected. Image processing implemented on the FPGA achieves a performance improvement of ~200x speed-up compared to space processors. The IPB-EM is fully validated, including modes, interfaces, management and image processing. The validation campaign includes IPB-EM thermal pre-tests and radiation and mechanical analyses. Functional tests will be performed from model-in-the-loop to HW-in-the-loop, using a graphical user interface simulator, a Phobos 3D model and a Phobos-like surface mock-up with a HW-in-the-loop dynamic set-up in GMV's platform-ART® facilities, using a robotic arm and cartesian robotic illumination rails. The VBNC HW is mounted on top of the robotic arm, emulating the Phobos environment and spacecraft dynamics. The COU provides the representative optics needed for validation and interfacing of the IPB. The selected detector is the CMV4000 (4K-pixels) as per the PhSR mission. The optics FOV is selected as a trade-off between QSO operations and navigation processing requirements; the best compromise is a 20° FOV. The COU-EBB implements low-level image correction functionalities, including a binning capability used to obtain two different FOVs with the same optics and detector: for QSO operations, cropping the central 1024x1024 pixels provides a 10° FOV, while for descent and landing, binning the full-size 2048x2048 frame to 1024x1024 pixels provides the initial 20° FOV.
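The two read-out modes can be illustrated with a short sketch under the stated geometry: cropping the central 1024x1024 pixels narrows the field of view to about 10°, while 2x2 binning of the full 2048x2048 frame keeps the full 20° FOV at the same output size. The frame content is synthetic.

```python
# Sketch of the two read-out modes: central crop vs 2x2 binning, both
# producing a 1024x1024 output from a 2048x2048 detector frame.
import numpy as np

full = np.random.default_rng(4).integers(0, 4096, (2048, 2048))

# QSO operations: central crop, ~10 deg FOV
crop = full[512:1536, 512:1536]

# Descent & landing: 2x2 binning, ~20 deg FOV
binned = full.reshape(1024, 2, 1024, 2).sum(axis=(1, 3))

print(crop.shape, binned.shape)   # (1024, 1024) (1024, 1024)
```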
SCISYS is experienced in producing resilient software implementations of critical space flight algorithms. Recent developments in space qualified FPGA technologies have enabled a range of new applications for hardware accelerated algorithms for space applications. The reduction in execution time and resource usage allows for more complex algorithms to be used in a wider range of use cases. This paper presents the partial transfer of the ExoMars VisLoc Visual Odometry algorithm to an FPGA and discusses a wider range of potential applications in space.
The accurate localization of a vehicle on the Martian surface is crucial for allowing operation of the vehicle to continue while direct contact with Earth is interrupted. Because communication with a spacecraft in Mars orbit is only possible for a limited amount of time, determination of the vehicle's current position and attitude has to be carried out locally.
One possible solution is to make estimates based on visual data gathered by cameras on board the vehicle. Because the estimates are based on the movement of the camera, the localization data are independent of the terrain, achieving a high degree of accuracy in the determination of both position and attitude.
The Visual Localisation flight software algorithm (VisLoc) was developed for the ExoMars rover. It is based on the core algorithm known as OVO (Oxford Visual Odometry), developed at the University of Oxford [1], and was adapted over a number of projects into a viable method of visual localization for Martian surface vehicles [2]. After further development by SCISYS as part of the European Space Agency's ExoMars rover mission, the VisLoc algorithm reached a technology readiness level (TRL) of 8.
In this paper we discuss the results of a study investigating the integration of an FPGA board to accelerate the execution of VisLoc, with the aim of achieving an execution frequency of 1 Hz while maintaining full parity between the software-based algorithm and its accelerated counterpart. This accelerated version of the algorithm would then be deployed on the European Space Agency's Sample Fetching Rover (SFR), which is intended to cover considerably larger distances than ExoMars in a similar timeframe [3].
Modular avionics approaches have been around for 40 years but vary widely in implementation and in the extent of hardware and software unification. The IMA concept, which replaces numerous separate processors and LRUs with fewer, more centralized processing units, has led to significant weight reduction and maintenance savings in both military and commercial airborne platforms. Similar concepts have been developed for automotive (AUTOSAR) and the industrial automation domain. Besides saving mass and volume, the major driver in the industrial domains is cost, both for development and maintenance.
Common to all these concepts is the use of standards for both hardware and software. Like other domains, the space industry will have to cope with increasing system complexity under shorter development cycles and reduced budgets. Proven concepts from other domains need to be investigated, adapted and applied in space programmes to meet customer expectations with respect to quality, time and cost in a global market. Recent developments in space avionics, such as SpaceVPX and CompactPCI Serial Space, will help to achieve these objectives.
In this paper we present an approach showing how these modular computer concepts help to develop high-performance payload data processing computers with reasonable effort and at reasonable cost. Based on a use case for on-board space debris detection, we discuss the different solutions and finally describe a modular payload data processing computer based on the CompactPCI Serial Space standard and a multicore CPU board.
As a result of 50 years of spaceflight, the attractive orbits around Earth are littered with derelict satellites, burnt-out rocket stages, discarded trash and other debris. In September 2012, the U.S. Space Surveillance Network tracked about 23,000 orbiting objects larger than 5-10 centimetres. By extrapolation, it is estimated that there could be a total of 750,000 orbiting objects larger than 1 cm. The first step in avoiding collisions is to detect objects that could impact a satellite. Space objects down to about 10 cm in LEO and 30-100 cm in GEO are typically monitored by ground-based space surveillance networks.
The ESA TRP activity "Streak Detection and Astrometric Reduction" (StreakDet) aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of streaks in optical observations. The study thus provided the basis for potential space-based optical observations of objects at lower altitudes.
As in most image analysis tasks, processing is carried out in three main parts: segmentation, classification, and astrometry and photometry. The aim of segmentation is two-fold: to remove all unnecessary information from the image and to robustly extract all desired features for further processing. The overall processing time reported in the final report of the study was 12,300 ms, with all processing done on an Intel Xeon quad-core CPU with a clock speed of 2000 MHz and 2 GByte of RAM.
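A toy version of the segmentation step, assuming scipy is available, is sketched below: global thresholding followed by connected-component labelling isolates elongated streak candidates. The synthetic frame, threshold and elongation test are illustrative assumptions; the actual StreakDet segmentation is considerably more elaborate.

```python
# Toy streak segmentation: threshold the frame, label connected
# components, and keep elongated blobs as streak candidates.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
frame = rng.normal(100, 5, (512, 512))            # background + noise

rr = np.arange(200)                               # synthetic diagonal streak
frame[100 + rr // 2, 50 + rr] += 40

mask = frame > frame.mean() + 5 * frame.std()     # global threshold
labels, n = ndimage.label(mask)
for i in range(1, n + 1):
    ys, xs = np.nonzero(labels == i)
    if np.ptp(xs) > 20 or np.ptp(ys) > 20:        # elongated -> candidate
        print(f"streak {i}: x {xs.min()}-{xs.max()}, y {ys.min()}-{ys.max()}")
```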
The use case was selected because it represents a whole class of image processing applications, including image-based navigation as well as object detection in Earth observation images. In addition, many reference images, either real or synthetic, can be used as input, and a reference implementation is available.
The goal of the demonstrator described in this paper is to reduce the processing time to 1.5 seconds based on space qualifiable components.
The functionality described above needs to be implemented in software and hardware. In particular, the segmentation algorithm requires more processing power than is available with conventional instrument controllers based on LEON2/3 processors. Therefore, new hardware and software architectures need to be investigated that go beyond the available solutions. The design space for implementing this functionality ranges from pure software implementations on single- or many-core CPUs to full implementation in an FPGA or ASIC, or anything in between.
In Germany, Airbus, together with Fraunhofer FOKUS, STI, Sysgo, TTTech and fortiss, took a further step and developed an OMAC4S technology concept for space applications in the DLR-funded OBC-SA project: while maintaining the IMA segregation features, the backplane solution (CPCI-S1 compliant) and the proprietary cabinet, a satellite deterministic network and a space-domain (robotics) specific middleware API with system support services (ECSS-E-ST-70-41 compliant) and application support services (ARINC 653 compliant) were added. A 1 Gbit switched Time-Sensitive Ethernet data communication network connects all CPMs (Core Processing Modules) and other equipment on the spacecraft. For payload data processing, where high processing performance is required, the user can select among different CPUs. The NXP P4080 is the most promising solution when high-speed data streams have to be processed, as it provides a large number of CPU cores allowing parallel execution of tasks, and many high-speed links, such as PCIe lanes and 10 Gbps Ethernet ports, for transferring data from an instrument to data processing.
The algorithm has been implemented in Python under the Linux operating system. The average processing time on the P4080 board is on the order of 4 to 5 seconds. Currently, the Python implementation is being converted into C and optimized for execution on multicore computers using a tool chain from emmtrix Technologies. The final target architecture will be the P4080 board running the PikeOS operating system. Results will be presented at the workshop.
Local and short-term extreme weather conditions endanger seamless maritime logistics and quasi-autonomous operation in today's global and cross-linked economy. Not only is there a higher likelihood of accidents and associated environmental damage; shipping traffic is also increasingly the target of piracy, organized crime and terrorism. Moreover, illegal maritime activities such as illegal fishing, drug trafficking, weapon movement/proliferation and illegal immigration are constantly on the rise. Hence, there is an increasing demand for maritime security in a complex political environment, which requires cost-efficient, persistent maritime monitoring services to improve the continuity of the situational awareness picture of sea conditions and maritime target activities.
Most of today's maritime monitoring systems rely on satellites with either active radar or weather-dependent optical payloads. This paper describes how globally available GNSS signals can be used in a persistent and cost-efficient way to improve the continuity of the situational awareness picture of sea conditions and maritime target activities, by covert passive sensing using the available weather-independent direct and ground-reflected GNSS signals. Target detection is established by monitoring the short-term time evolution of the target's footprint in the locally disturbed sea surface pattern captured by the reflected GNSS signal. Using satellite-to-satellite-to-ground low-volume communication for payload operation and telemetry allows the in-orbit derived user products, e.g. maritime target detections and maritime surface awareness maps, to be provided to the user with low product latency.
This paper then describes the implementation of such a concept through innovative modular payload data processing equipment derived from the DLR OBC-SA studies. This computer concept allows different software app(lication)s to be operated in orbit depending on the mission objectives. The OBC-SA modular architecture is based on the CompactPCI Serial Space standard, which provides open specifications for modular computing in space applications, based on many years of heritage in the industrial automation domain. The CompactPCI Serial Space standard extends the commercial standard with elements required in space applications to ensure high reliability and availability in extreme environments, while preserving a high-performance and versatile architecture in a simple and cost-effective framework. This modular concept allows the reuse of available hardware and software elements, e.g. from other projects, to reduce cost and time for development and test. The concept is scalable with respect to, e.g., mass memory capacity and computing performance.
Applied to the use case of processing in-orbit reflected GNSS signals for maritime target detection, highly energy-efficient performance is mandatory, as the payload needs to process the direct and reflected GNSS RF L-band signals in the digital domain in orbit. Therefore, the proposed payload equipment relies heavily on commercial off-the-shelf, state-of-the-art processing elements which provide a very high performance-to-power-consumption ratio. An average processing power of 20 GFLOPS is required to process a 1-minute raw GNSS signal stream, perform target detection and generate the user product within less than 5 minutes. This processing power cannot be provided by today's available space-qualified processors such as the LEON4 family. The 8-core P4080 from NXP has been selected as a promising candidate, providing 12 GFLOPS of average processing power. The benefit of the modular approach is that, if more processing power is needed, the board can simply be exchanged for a more powerful one, keeping all other modules (including the software).