All FPGAs share several design methodologies, yet each technology faces specific challenges. Anti-fuse FPGAs continue to be widely used in electronic equipment for space, while two other technologies have seen growing adoption: flash-based and SRAM-based devices. The use of COTS FPGAs has also increased, especially for space missions with shorter lifetimes and fewer quality constraints.
The aim of the workshop is to share experiences and wishes among FPGA designers, FPGA vendors, and research teams developing methodologies to address radiation mitigation techniques and reconfigurable systems.
The topics related to FPGAs for space include (but are not limited to):
The main FPGA vendors will present updates and will be available for questions. The detailed agenda will be published closer to the event. Presentations from the major design groups are expected.
Attendance at the workshop is free of charge.
Registration is now open
The Call for Abstracts is now closed.
The ESA Education Office is pleased to sponsor up to 3 tertiary education students to attend the 6th SEFUW as part of the ESA Academy Conference Student Sponsorship Programme. This sponsorship provides a reimbursement of up to EUR 700 towards travel and accommodation expenses. For more details, please visit our page: ESA Academy Student Grants.
ESA presentation covering:
- ESA IP core portfolio
- New FPGA, ASIC, and IP core development standard
- Ongoing FPGA R&D activities
SRAM-based FPGAs today need high-density configuration memories that are radiation-hardened for space applications. The MNEMOSYNE project aimed to design a radiation-hardened ASIC memory for boot and configuration.
I. RAD-HARD DESIGN
IMEC designed the analog blocks (LDO, PMU, OSC, IVref, voltage monitors, etc.) using the RH DARE22 platform.
The digital design was done by 3D PLUS, while the physical implementation was performed by IMEC. During that step, radiation-hardening techniques were also used:
• Redundancy and restricted cell sets for SEU-critical parts
• High-SET-immunity cells for clock and reset trees
• Glitch filters on strategic nodes
Leakage, an important issue, is greatly reduced in the SOI process compared to bulk technologies. Additionally, this technology allows for body biasing independent of substrate biasing, further reducing leakage.
Finally, derating was used to account for normal device aging and TID impact.
II. MNEMOSYNE TV TEST RESULTS
For evaluation, the test vehicle underwent TID, SEE, functional, and life tests. For SEE tests, a SEL/SEU LET threshold > 60 MeV·cm²/mg was achieved. These tests confirmed the device's immunity to SEL and revealed low sensitivity to SEU and SEFI. The embedded ECC mitigated all SEUs, though some SEFIs were observed. A laser SEE test identified the root causes of the PMU voltage reference sensitivity. Due to the small technology node and thin gate oxide, TID results showed no significant variation in MOS threshold. Test samples endured TID levels above 100 krad(Si).
The test vehicle passed a 1000-hour life test, with measurements performed at -55 °C, ambient temperature, and +125 °C. As in the TID tests, no functional degradation was observed, only small parameter drifts.
Functional tests verified that the memory operates as a boot memory for SRAM-based FPGAs using the SPI/QSPI interface. The bypass interface was used due to a controller bug. The tests enabled read operations through the SPI/QSPI interface. The embedded ECC performed as expected, correcting 100% of naturally occurring bit errors. A 128 Mbit ASIC prototype was designed to address issues in the 64 Mbit test vehicle and to add features such as a parallel interface.
The tests performed on the 128 Mbit ASIC confirmed the results observed on the former test vehicle, with a SEE LET threshold > 85 MeV·cm²/mg.
III. PRODUCTS DERIVED FROM MNEMOSYNE
The MNEMOSYNE ASIC is derived into three product families, stacking multiple 128 Mbit prototype ASICs to achieve higher densities.
MNEMOSYNE 1.8 V: Available in 512 Mbit and 1 Gbit densities, supporting SPI, QSPI, and OSPI interfaces and designed for the latest FPGAs, MCUs, and processors with 1.8 V I/O.
MNEMOSYNE 3.3 V SPI: Available in 512 Mbit and 1 Gbit densities, supporting SPI and QSPI interfaces and designed for compatibility with FPGAs providing 3.3 V I/Os.
MNEMOSYNE 3.3 V Parallel: Available in 128 Mbit density, supporting the EEPROM protocol and targeting processors and microcontrollers with a parallel interface and EEPROM protocol support.
Each product line is tailored to meet specific requirements, ensuring that the MNEMOSYNE family addresses a wide range of configuration and boot memory needs.
To be updated
NanoXplore will provide an overview of and an update on its latest rad-hard FPGAs, NG-ULTRA and ULTRA300.
The presentation will cover the following elements:
• Component overview and key technology differentiators
• Update on ESCC 9030 qualification
• New space industrialization
• Radiation performance
• Software tools and ecosystem
In this paper, the architecture of a new-generation router is presented. It is reconfigurable, as it is implemented on a mixed-signal Field Programmable Gate Array (FPGA) platform. It is meant to be used in decentralized structures of New Space applications in order to achieve fast, efficient, and reliable transfer of network packets.
This project concerns a radiation-tolerant router developed on reconfigurable hardware with a state-of-the-art microcontroller running under the Real-Time Operating System RODOS. The Real-Time Object-Oriented Dependable Operating System (RODOS) was designed specifically for use in satellites. The router is designed to be used in a network of routers for space applications. Multiple routers can build a complex network on board. The Ethernet port of each router allows on-board computers (OBCs) to pass communication packets into the network and receive data from it, so that they can communicate with each other. The routers forward the message packets through the network to their destination. As the exemplary topology in the figure shows, the network can comprise loops for introducing redundant communication paths. To avoid circular trips of message packets, a dynamic routing protocol such as the (Rapid) Spanning Tree Protocol or Shortest Path Bridging must be implemented on the routers, as sketched below.
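To illustrate the loop-avoidance idea, the following minimal Python sketch (ours, not the project's implementation; router names, link weights, and the use of networkx are assumptions) prunes a redundant on-board topology to a loop-free forwarding tree, which is the essence of what a spanning-tree protocol computes at runtime:

    # Illustrative sketch: prune a looped on-board router topology to a
    # loop-free forwarding tree, the core idea behind (Rapid) Spanning Tree.
    import networkx as nx

    # Hypothetical redundant network: four routers in a loop plus a cross-link.
    net = nx.Graph()
    net.add_weighted_edges_from([
        ("R1", "R2", 1), ("R2", "R3", 1),
        ("R3", "R4", 1), ("R4", "R1", 1),
        ("R1", "R3", 2),  # redundant cross-link
    ])

    # The spanning tree keeps every router reachable while removing loops,
    # so packets cannot circulate forever; non-tree links are kept blocked
    # as standby redundancy.
    tree = nx.minimum_spanning_tree(net)
    blocked = ({tuple(sorted(e)) for e in net.edges()}
               - {tuple(sorted(e)) for e in tree.edges()})
    print("forwarding links:", sorted(tree.edges()))
    print("blocked (redundant) links:", sorted(blocked))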
The increasing demand for onboard data processing in space applications has led to the integration of Artificial Intelligence (AI) and Machine Learning (ML) on Field-Programmable Gate Arrays (FPGAs). This is particularly relevant for missions requiring optimized data transmission, such as Earth observation applications. AI-driven techniques can enhance onboard autonomy by performing tasks such as event detection, data filtering, and compression, ultimately reducing downlink bandwidth requirements. The Edge SpAIce project demonstrates the potential of FPGA-based AI processing for space applications, focusing on plastic litter detection in oceans using Deep Neural Networks. Since real-time inference is not required, our approach prioritizes computational efficiency, using pixels/second/watt as the primary performance metric. By balancing latency, throughput, and power consumption, we optimize FPGA utilization for space-based deployments. Leveraging open-source tools such as hls4ml and QONNX, we implement drastic model compression and efficient hardware deployment, enabling high-performance, low-power computation suitable for resource-constrained space environments.
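For readers unfamiliar with the hls4ml flow mentioned above, a minimal sketch is shown below (our illustration: the model file, FPGA part number, and output directory are placeholders, not Edge SpAIce project artifacts):

    # Minimal hls4ml sketch: convert a trained Keras model to an HLS project.
    import hls4ml
    from tensorflow import keras

    model = keras.models.load_model("litter_detector.h5")  # hypothetical model

    # Derive a per-layer HLS configuration (precision, reuse factor).
    config = hls4ml.utils.config_from_keras_model(model, granularity="name")

    # Convert to an HLS project; a larger reuse factor trades throughput for
    # area, matching a pixels/second/watt objective rather than raw latency.
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        backend="Vitis",
        output_dir="hls4ml_prj",
        part="xcvu9p-flga2104-2-e",  # placeholder FPGA part
    )
    hls_model.compile()            # C-simulation model for functional checks
    # hls_model.build(csim=False)  # run HLS synthesis (long-running)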
At Thales Alenia Space Italia, implementation of FEC codecs on FPGAs in collaboration with ESA and academic institutions dates back several decades [1],[2] and has contributed to the definition of CCSDS standards [3]. TAS-I has implemented PCCC (Parallel Concatenated Convolutional Codes) and SCCC (Serially Concatenated Convolutional Codes) codecs and, most recently, LDPC.
As FPGA technology progresses, efficient high-speed designs call for optimized use of different hardware architectures, which may favor random access to memory or switch fabrics to implement the belief propagation between processors and bit-decision nodes. As explained in [4], all the iterative codes that followed the invention of turbo codes are implemented via extrinsic information transfer, which can be seen as a belief-propagation network: this applies to SCCC, LDPC, and PCCC.
The capability to flexibly design hardware codecs for various applications is addressed by in-depth study of the coding solutions and a subsequent make-or-buy selection for the codec IP core implemented in the FPGA.
Recently, the various applications have called for in-house control of the codec algorithms for added flexibility: some LDPC codes are used in the AWGN channel, as in deep-space (ESPRIT) and Earth-observation applications (PLATINO); others are needed in Binary Symmetric Channels (BSC), as in general BB84 QKD reconciliation; and some in the Binary Erasure Channel (BEC), as in ESTOL (the ESA Standard for Terabit Optical Links).
Hence, a study group has addressed theoretical and algorithmic studies and hardware design, first selecting the most suitable LDPC. The code design is normally based on regular and irregular LDPC, achieved by density evolution [5] techniques and progressive edge growth [6].
At TAS-I, the ESPRIT [7],[8] code has been analyzed and simulated in detail; the simulation time has been reduced by tree analysis of the Tanner graphs.
The wide set of codes addressed has also been considered for BB84 QKD, including the techniques in [9].
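As a concrete illustration of the belief-propagation step exchanged on a Tanner graph, the following numpy sketch implements a single min-sum check-node update (the generic textbook formulation, not the TAS-I codec):

    # One min-sum check-node update: for each edge, the outgoing message is
    # the sign product and minimum magnitude over all *other* incoming LLRs.
    import numpy as np

    def check_node_update(v2c):
        """v2c: LLR messages from the variable nodes attached to one check node."""
        v2c = np.asarray(v2c, dtype=float)
        signs = np.sign(v2c)
        total_sign = np.prod(signs)
        mags = np.abs(v2c)
        order = np.argsort(mags)
        min1, min2 = mags[order[0]], mags[order[1]]  # two smallest magnitudes
        c2v = np.where(np.arange(len(v2c)) == order[0], min2, min1)
        # total_sign * signs[j] = product of the signs of all edges except j
        return total_sign * signs * c2v

    print(check_node_update([+2.0, -0.5, +1.2]))  # -> [-0.5, 1.2, -0.5]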
[1] D. Giancristofaro, V. Piloni, R. Novello, R. Giubilei, J. Tousch, "Performances of Novel DVB-RCS Standard Turbo Code and its Applications in On-Board Processing Satellites", IEEE EMPS-PIMRC 2000, London, 2000.
[2] S. Benedetto, C. Berrou, C. Douillard, R. Garello, D. Giancristofaro, A. Ginesi, L. Giugno, M. Luise, G. Montorsi, "MHOMS: High Speed ACM Modem for Satellite Applications", IEEE Wireless Communications, April 2005.
[3] S. Benedetto, G. Montorsi, A. Ginesi, D. Giancristofaro, M. Fonte, "A Flexible Near-Shannon SCCC Turbo Code for Telemetry Applications", ESA STR-250, ESTEC, 2005.
[4] EP3622642 B1, "Minimum-Size Belief Propagation Network for FEC Iterative Encoders and Decoders and Related Routing Method".
[5] T. J. Richardson and R. L. Urbanke, "The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding", IEEE Transactions on Information Theory, 2001.
[6] X.-Y. Hu, E. Eleftheriou, D. M. Arnold, "Regular and Irregular Progressive Edge-Growth Tanner Graphs", IEEE Transactions on Information Theory, Jan. 2005.
[7] CCSDS 131.0-B-3, "TM Synchronization and Channel Coding".
[8] W. H. Zhao, J. P. Long, "Implementing the NASA Deep Space LDPC Codes for Defense Applications", MILCOM 2013.
[9] E. O. Kiktenko et al., "Symmetric Blind Information Reconciliation for Quantum Key Distribution", arXiv:1612.03673v2 [quant-ph], 25 Feb 2019.
In the New Space era, the challenge of determining the need for high-cost radiation-tolerant components versus the viability of using more affordable Commercial-Off-The-Shelf (COTS) parts is increasingly relevant. This decision-making process highlights the need for effective radiation monitoring within satellite payloads to ensure the reliability of systems operating in harsh environments.
The Heinrich Hertz Satellite (H2Sat), launched into GEO in 2023, is equipped with the Fraunhofer On-Board Processor (FOBP), which features two Xilinx Virtex-5QV FPGAs running LEON3FT soft cores. This advanced architecture enables onboard processing and dynamic reconfiguration. The Total Ionizing Dose (TID) is measured using calibrated Ultra-Violet Erasable Programmable Read-Only Memory (UV-EPROM), while Single Event Upsets (SEUs) are monitored through Static Random-Access Memory (SRAM). In particular, radiation impacts on memory cells are detected, allowing evaluation of radiation effects on COTS components.
Central to this mission is a robust and extendable IT infrastructure. Our cloud-native ground station enables easy adaptation and integration, while the FOBP uses a powerful in-band Telemetry/Telecommand (TM/TC) connection supported by a full TCP/IP stack, allowing efficient implementation of new tasks and facilitating real-time transmission of radiation data to ground stations. Onboard processing enables immediate data analysis without transmission of large raw datasets, while critical long-term data is preserved in a ground-station database for future research and analysis.
For this purpose, we present a method for creating a flexible and reconfigurable FPGA-based space segment, along with the corresponding ground segment, illustrated through the implementation of live space weather monitoring.
Edge computing in space is revolutionizing on-board data processing for Earth Observation (EO) satellites, enabling real-time analysis of optical and SAR imagery. GMV has explored novel FPGA-based AI acceleration solutions using the rad-tolerant Xilinx Versal AI Edge, leveraging Vitis AI workflows to map deep learning models for vessel detection and fire hotspot identification. Various Xilinx methodologies were tested, including DPU-based acceleration, HLS-generated AI functions, and custom FPGA modules. The final implementation employs Unify, integrating CNN layers onto both DPUs and FPGA logic, where non-DPU-compatible functions are efficiently mapped onto dedicated IP cores accessible via Unify software calls.
On-board edge computing is transforming Earth Observation (EO) by reducing data downlink requirements while enabling real-time insights. GMV developed an AI-accelerated processing pipeline leveraging the rad-tolerant Xilinx Versal AI Edge due to its lower power consumption and optimized DPU architecture (transitioning from the Versal AI Core), which better suits Deep Learning and Machine Learning workloads. Using Vitis AI workflows, CNNs are efficiently deployed onto DPUs and FPGA logic, ensuring high-performance inference in orbit.
A ground-onboard partitioning approach is implemented. On-board edge computation includes a first-stage triage based on a simple AI model that processes thumbnails to discard irrelevant data (e.g., land-only images that are not useful for vessel detection, or cloud-covered scenes obstructing analysis). The core edge-computing functions are based on a reduced, yet complex, onboard AI model that filters and processes data in real time, drastically minimizing the amount of imagery transmitted to Earth. Selected data patches are prioritized for further on-ground refinement with the complete AI model. Additionally, the pipeline processes raw sensor data from L0 to L1b or L1c directly onboard, enhancing autonomy and mission efficiency.
This approach demonstrates the feasibility of deploying AI-driven EO applications in space, leveraging Versal AI Edge’s advanced AI acceleration while ensuring efficient power consumption and optimized FPGA resource allocation.
With 3GPP Release 17, satellites were incorporated into the 5G New Radio standard. This significantly strengthens the role of satellite connectivity for future mobile networks. 5G and 6G non-terrestrial networks bridge gaps in terrestrial network coverage, ensuring seamless connectivity in unconnected zones. For this reason, the ongoing project ESA 6G LINO focuses on creating an open experimental evaluation platform. The acronym LINO stands for "Laboratory IN Orbit": the project will provide a flexible and reconfigurable system to deploy custom experiments. The project consortium consists of multiple partners across Europe, namely TESAT, OpenCosmos, University of Surrey, VTT, Deutsche Telekom, and Fraunhofer IIS. The overall goal of the project is the creation of all necessary system parts, including the space, ground, and user segments, and the demonstration of a fully functional 5G New Radio base station in space. Moreover, the integration of LEO satellites into mobile communication networks will be developed, meaning, for instance, maintaining end-to-end connectivity while performing a handover from a terrestrial network (TN) to a non-terrestrial network (NTN) connection. The satellite will be a 16U CubeSat, and the payload incorporates an AMD Versal platform. The launch is scheduled for next year into a polar orbit with an altitude of 500 km to 600 km.
In order to demonstrate different operational scenarios, a flexible onboard processing platform has to be utilized. This flexibility is provided by a system consisting of the adaptive Versal System-on-Chip, which performs the computational tasks, and a power-efficient Lattice FPGA as system supervisor unit. The fundamental demonstration will be the end-to-end connection deploying a 5G-NR NTN base station, a so-called gNodeB, onboard the satellite, which is the baseline for further upgrading to 6G features. Another use case will cover the conditional handover scenario between terrestrial and non-terrestrial 5G-NR base stations. Independently of the 5G and 6G topics, optimization of spectrum usage is a general key factor for high-throughput systems. Therefore, spectrum monitoring and accompanying AI-based training of spectrum allocation algorithms can provide significant improvement to tackle degradations due to interference. The embedded AI Engines of the Versal platform enable this efficiently. Lastly, new 6G waveforms will be evaluated that address the known shortcomings of the 5G-NR OFDM waveform, e.g. its high peak-to-average power ratio (illustrated numerically below). After testing and deploying the use cases within the project consortium, this flexible onboard processing platform will be made available to the public for testing individual experiments in any field. This is possible due to the high degree of reconfigurability assured by the system concept.
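As a quick numerical illustration of the PAPR problem mentioned above (our sketch, not project code; subcarrier count and modulation are arbitrary choices):

    # PAPR of one random OFDM symbol: peak power over average power, in dB.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sub = 1024

    # Random QPSK symbols on each subcarrier, OFDM-modulated via an IFFT.
    qpsk = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
    sig = np.fft.ifft(qpsk) * np.sqrt(n_sub)  # unit average power

    papr_db = 10 * np.log10(np.max(np.abs(sig)**2) / np.mean(np.abs(sig)**2))
    print(f"PAPR of one OFDM symbol: {papr_db:.1f} dB")  # typically ~10 dB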
In recent years, the space community has shown a growing interest in the use of AI onboard satellites. FPGAs have emerged as competitive accelerators for these algorithms in the harsh space environment, and methods for automating their design have gained significant attention. Among the available frameworks, FPG-AI stands out as a promising solution for facilitating the deployment of AI in satellite systems.
The framework combines the use of model compression strategies with a fully handcrafted HDL-based accelerator that poses no limit on device portability, thus enabling the implementation of CNNs and RNNs on components from different vendors and with diverse resource budgets. On top of that, an automation process merges the two design spaces to define an end-to-end and ready-to-use tool, keeping a high degree of customization with respect to the user’s constraints on resource consumption or latency. The framework exploits a fully handcrafted, human-readable HDL, which avoids the use of third-party IPs, thereby enhancing the explainability and reliability of the architecture.
This session describes the extension of the FPG-AI framework to NanoXplore radiation-hardened FPGAs, a strategic European solution for space systems. We reengineered the FPG-AI flow to ensure compatibility with NanoXplore technology. We successfully deployed the LeNet-5 neural network on the NG-ULTRA FPGA, demonstrating the device's capability to support AI hardware accelerators. The implemented prototype achieved an inference time ranging from 0.393 to 0.720 ms, utilizing a maximum of 7% of the available Functional Elements.
Exploiting FPG-AI's versatility, we implemented the LeNet-5 model on devices from other vendors (AMD Xilinx, Microchip) to compare the performance of the different technologies for AI workloads.
Finally, we provide a comparison of FPG-AI with state-of-the-art High-Level Synthesis (HLS) frameworks for AI acceleration on the NG-ULTRA device to evaluate the trade-offs of the different design flows.
To address the growing demand for precise global positioning in commercial and scientific applications, the German Aerospace Center (DLR) proposed the Kepler concept to increase the end-user performance of the Global Navigation Satellite System (GNSS) service. By utilizing optical links between satellites for precise time transfer and ranging, Kepler enhances both the performance, enabling down to mm-level accuracy, and the robustness of the GNSS Galileo.
The COMPASSO mission of the DLR is an in-orbit validation of Kepler key technologies. Two iodine clocks, a frequency comb and a laser terminal for optical time transfer and ranging are evaluated, paving the way for future space applications. Hosted on the Bartolomeo platform of the International Space Station (ISS), COMPASSO is scheduled for launch in 2027 and will operate for 1.5 years.
Time transfer and ranging evaluation is performed via bi-directional transmission of pseudo-random noise spreading sequences at 9.6 GChip/s, as well as by establishing a 75 Mbit/s data link between the ISS and the optical ground station at DLR Oberpfaffenhofen. By leveraging FPGA technology, the digital signal processing (DSP) can be referenced to the highly stable iodine clock or an oven-controlled crystal oscillator, reaching the target ranging accuracy. Appropriate measures are implemented to ensure stable and reliable operation in space and to allow for in-orbit updates of the SRAM-based FPGA technology used. Control as well as telemetry and telecommand tasks are handled by an embedded LEON3 soft-core processor.
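The principle behind PRN-based ranging can be sketched in a few lines of Python (a generic illustration under our own assumptions of polynomial, delay, and noise level, not the COMPASSO implementation): generate a maximal-length spreading sequence with an LFSR and recover the propagation delay from the correlation peak.

    # PRN ranging sketch: m-sequence generation plus delay estimation.
    import numpy as np

    def mls(taps=(7, 6), nbits=7):
        """Maximal-length sequence from an n-bit Fibonacci LFSR (x^7 + x^6 + 1)."""
        state = [1] * nbits
        out = []
        for _ in range(2**nbits - 1):
            out.append(state[-1])
            fb = state[taps[0] - 1] ^ state[taps[1] - 1]
            state = [fb] + state[:-1]
        return np.array(out) * 2 - 1  # map {0,1} -> {-1,+1}

    code = mls()
    delay = 37  # simulated round-trip delay in chips
    rx = np.roll(code, delay) + 0.5 * np.random.default_rng(1).standard_normal(len(code))

    # Circular correlation: the peak index estimates the delay, i.e. the range.
    corr = np.array([np.dot(rx, np.roll(code, k)) for k in range(len(code))])
    print("estimated delay (chips):", int(np.argmax(corr)))  # -> 37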
This presentation provides an overview of the COMPASSO mission, with a focus on FPGA selection, design, and implementation. Design decisions and lessons learned are shared, offering insights into the challenges of scientific missions and of taking novel technology into space. To conclude, a look ahead is given, exploring new design opportunities and taking market developments into account.
Model-based approaches are widely adopted in the aerospace industry to streamline the implementation of command laws. In this context, Field Programmable Gate Arrays (FPGAs) play a crucial role in deploying Guidance, Navigation, and Control (GNC) algorithms for aerospace systems, offering the benefits of massively parallel computational capabilities and predictable execution times. Traditionally, real-time system implementations based on model-based designs rely on dataflow-synchronous programming languages. Previous research has focused on compiling these languages into Register Transfer Level (RTL) models using Hardware Description Languages (HDL) through high-level synthesis (HLS). Whereas these methods allow for high-level system specification, the compilation process is often lengthy, lacks traceability, and may compromise the synchronous dataflow properties of the original programs.
To address these limitations, alternative approaches have explored the use of intermediate representations to compile programming languages directly into FPGA netlists through pattern-matching techniques.
Building on this concept, in collaboration with CNES (French space agency), we demonstrate how such an intermediate representation can efficiently generate FPGA netlists from dataflow programming languages for the synthesis of GNC algorithms. Our methodology introduces Pyxis, a toolchain that provides a fast, predictable synthesis process with enhanced traceability. This approach improves the efficiency and reliability of FPGA-based implementations for aerospace applications. Future work will focus on simulating nanosatellite control systems implemented using Pyxis.
The growing significance of small satellites in space missions motivates the definition of streamlined CubeSat platforms that emphasize cost-effective and flexible communication solutions. Central to this effort is the use of FPGAs, chosen for their capacity to integrate commercial off-the-shelf components while meeting stringent performance and reliability demands. By capitalizing on FPGA-based architectures, both rapid prototyping and reconfiguration capabilities can be achieved for on-the-fly protocol updates, a critical requirement for satellite communications under tight power and volume constraints.
This work highlights the central role that FPGAs will play in small-satellite applications and how the technology is developing to meet the expectations and requirements of the sector. Key capabilities include implementing advanced signal-processing techniques in restricted form factors, radiation-tolerant design to mitigate single-event upsets, real-time adaptability, and the power efficiency needed to accommodate the high radiated power required in low Earth orbit. For instance, SoC-style FPGAs can integrate processor cores and dedicated DSP blocks, facilitating maximum spectrum utilization and error correction, which is very important in such a hostile environment as space. We further elaborate on how adopting COTS FPGAs streamlines both cost and availability, enabling iterative, agile development cycles.
Building on these insights, we offer a comprehensive study of emerging trends in FPGA usage for small-satellite SatCom, integrating state-of-the-art research on reconfigurability, system miniaturization, power consumption, and cost reduction. This survey focuses on how modern FPGAs can not only address today's challenges, such as bandwidth allocation and adaptive beamforming, but also dynamically adjust to evolving mission objectives. Through this work, we aim to advance the knowledge base for next-generation CubeSat communication systems, ultimately fostering more resilient, versatile, and scalable satellite networks.
Spacecraft in-orbit servicing is a vital technology for extending satellite lifespans by enabling repairs, upgrades, and refueling. Accurate pose estimation, which determines the relative position and orientation of spacecraft, is crucial for such missions but poses challenges in space environments due to varying lighting, shadows, and high precision requirements. Traditional computer vision and sensor fusion techniques often fall short under these conditions. Recent advancements in Deep Learning (DL), particularly with Convolutional Neural Networks (CNNs), have demonstrated great accuracy and robustness, and emerge as a possible alternative to classic vision algorithms. However, real-time deployment of deep learning models in space is hindered by the limited computational resources of space-qualified hardware.
Field-programmable gate arrays (FPGAs) offer an effective solution, combining flexibility, power efficiency, and high computational performance, making them well-suited for AI acceleration in space applications. This thesis investigates the integration of AI-driven pose estimation algorithms with FPGA acceleration, utilizing Xilinx's Vitis AI and Deep Learning Processing Unit (DPU) to deploy CNNs for real-time space operations. The study focuses on the CAT-MICE system developed by GMV Aerospace and Defence S.A.U., with experiments conducted using Platform-ART® facilities as part of the CAT Breadboard project. The ultimate aim is to achieve high performance and precision in spacecraft pose estimation through FPGA-based deep learning solutions.
We present a Time-to-Digital Converter (TDC) implemented in Field-Programmable Gate Array (FPGA) technology that exploits the statistics of single-photon detection to provide the input distribution needed to guarantee bin calibration during signal acquisition, without any TDC stops or data loss. As a standard solution, TDCs usually integrate a ring oscillator to provide the input statistics for the calibration; when a new calibration is required, the data acquisition needs to be stopped. Our configuration, which we call "steady calibration", allows the acquisition and the calibration to be combined at the same time. The advantage is not only the removal of data loss but also an improvement in the performance of the TDC (i.e., its jitter), as the calibration is carried out every time there is a new detected event. This application is particularly well suited for satellite quantum communications, where single-photon detectors (SPDs) are part of a setup in a harsh environment. As a matter of fact, temperature has a direct effect on the jitter of a tapped delay-line TDC; therefore, temperature changes in the space environment can have a major impact on the performance of a TDC.
The TDC, called "Marty" and implemented on a Zynq-7000 chip with an FPGA+CPU architecture based on the one proposed by Stanco et al. (Versatile and Concurrent FPGA-Based Architecture for Practical Quantum Communication Systems, doi.org/10.1109/TQE.2022.3143997), has a bin size of ~18 ps, an average jitter of ~27 ps (at room temperature), and can sustain up to 15 Mevent/s (transferred via an Ethernet connection). The device was tested in a two-channel configuration in a dedicated climatic chamber ranging from 5 °C to 80 °C to force a relevant change in the jitter performance (the SPD was not inside the chamber). It was verified that the steady calibration not only guarantees no data loss but also prevents the jitter from diverging as temperature increases (in our case the jitter starts at ~28 ps at 5 °C and reaches ~32 ps at 80 °C).
Furthermore, the steady calibration reduces the overall jitter variability, yielding a more stable jitter value with an average standard deviation ⟨σsteady⟩ = 0.64 ps, compared with ⟨σRO⟩ = 1.33 ps for the non-steadily-calibrated case. Finally, the TDC was also integrated into a Quantum Key Distribution receiver setup and successfully tested in a real QKD implementation, showing its equivalence with a commercial TDC device.
This result can be relevant for future (European) space missions where time-tagging units are required to have stable, low jitter values, in particular for missions related to critical applications like Quantum Communications or Quantum Random Number Generation, where the TDC is a fundamental tool to read out qubits via single-photon detectors.
This work was presented in a preprint (M. R. Bolaños W. et al., "A time-to-digital converter with steady calibration through single-photon detection", doi.org/10.48550/arXiv.2406.01293), which is currently under review by a scientific journal.
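The code-density idea underlying the calibration can be reproduced in a few lines (our toy model with invented bin widths and a placeholder clock period, not the Marty firmware): with statistically uniform event arrivals, such as Poissonian single-photon detections, each bin's hit count is proportional to its true width.

    # Code-density calibration sketch for a tapped delay-line TDC.
    import numpy as np

    rng = np.random.default_rng(0)
    T_clk = 2000.0                          # clock period in ps (placeholder)
    true_widths = rng.uniform(10, 26, 100)  # unequal delay-line bin widths
    true_widths *= T_clk / true_widths.sum()
    edges = np.cumsum(np.concatenate(([0.0], true_widths)))

    # Uniformly distributed arrival times (what single-photon statistics give).
    hits = rng.uniform(0, edges[-1], 1_000_000)
    counts, _ = np.histogram(hits, bins=edges)

    # Calibrated bin widths follow directly from the relative occupancy.
    cal_widths = counts / counts.sum() * T_clk
    print("max calibration error: %.3f ps" % np.max(np.abs(cal_widths - true_widths)))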
The data volume in satellite missions has expanded rapidly due to advancements in sensor technology, higher resolution imaging, and the growing number of satellites. This surge in data demands high-performance on-board processing solutions that offer both flexibility and computational power while being resistant to radiation.
IngeniArs is addressing this challenge by developing HPCEX, a space-qualified printed circuit board designed for high-performance computing. HPCEX supports parallel processing and the implementation of Artificial Intelligence algorithms, enabling efficient data handling and analysis in space environments.
The core of HPCEX is the high-end, space-qualified AMD KU060 FPGA, which incorporates the IngeniArs GPU@SAT IP Core for enhanced parallel computing and AI acceleration. Additionally, it includes high-speed serial link interfaces such as SpaceFibre, WizardLink, and Gigabit Ethernet, ensuring robust data transfer capabilities.
To enhance reliability, the board also integrates a Microchip RTG4 radiation-hardened FPGA as the system controller. This FPGA manages critical functions like fault detection, radiation mitigation, and telemetry/telecommand (TM/TC). It also oversees the programming and scrubbing of the AMD KU060 FPGA, while handling telemetry from all board components and connected units, along with telecommands from the on-board computer. Standard interfaces, including I2C, SPI, CAN, and SpaceWire, are provided for efficient sensor access and TM/TC exchange.
This architecture delivers a high-performance board that is both flexible, thanks to its programmable elements, allowing adaptation to diverse user and mission requirements, and resilient to the harsh radiation environment of space.
The integration of a Backplane interface allows HPCEX to be seamlessly incorporated as a processing board within a higher-level system. In this framework, IngeniArs is developing EVOPRO, a comprehensive flight unit that pairs HPCEX with a second flight board, known as HPSCM. HPSCM features the high-performance Microchip HPSC processor, which supports high-reliability and safety-critical operations through function segregation, lockstep configuration, and hardware redundancy. Additionally, HPSCM incorporates an AMD Xilinx RFSoC platform, space-qualified to handle SATCOM functionalities such as TT&C and PDT. It also includes a Mass Memory Unit for efficient data storage.
Together, these advancements provide a robust platform for modern satellite missions, enhancing on-board data processing, communication, and adaptability, thereby meeting the evolving needs of space exploration and satellite operations.
In space applications, SRAM-based FPGA devices are vulnerable to radiation-induced soft errors such as single event upsets (SEUs) and multiple bit upsets (MBUs), which can lead to system malfunctions. We have developed an experimental setup to perform fault injection campaigns in the configuration memory using the AMD Soft Error Mitigation (SEM) IP Core, emulating SEUs and MBUs in commercial off-the-shelf (COTS) SRAM-based FPGA devices. A key feature of our setup is the extraction of critical bits, which accelerates the fault injection process by focusing only on flipping the bits that could affect the design's operation. The setup includes a Python-based graphical user interface (GUI) that automates the generation of FPGA frame addresses for these critical bits and manages the fault injection campaign. It also generates real-time reliability reports. The source code for the Python GUI and VHDL example designs will be available in a public GitHub repository. In the workshop, we will demonstrate the procedure for measuring the reliability of both unprotected and protected designs, offering an alternative to radiation tests in accelerators. This work contributes to the reliable use of COTS FPGA devices in space applications.
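For orientation, driving the SEM IP monitor interface for one injection can look roughly like the sketch below (our illustration; the serial port, baud rate, and command strings are assumptions to be checked against the SEM IP product guide for the target device family, and the frame address is a made-up example):

    # Sketch: one emulated SEU via the SEM controller's UART monitor interface.
    import serial

    def inject(port, linear_frame_address):
        with serial.Serial(port, 115200, timeout=1) as uart:
            uart.write(b"I\r")                                # assumed: enter IDLE state
            uart.write(b"N %010X\r" % linear_frame_address)   # assumed: inject at LFA/word/bit
            uart.write(b"O\r")                                # assumed: resume OBSERVATION
            return uart.read_all()                            # controller status lines

    # Example: flip one (hypothetical) critical bit extracted from the design.
    print(inject("/dev/ttyUSB0", 0x00C0001234))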
Spiking Neural Networks (SNNs) are emerging as a disruptive technology in artificial intelligence, offering a biologically inspired approach that enables energy-efficient and event-driven computation. Their potential impact on spatial applications, particularly for anomaly detection in satellite data, is significant. Traditional deep learning models often require extensive computational resources, making them impractical for deployment on edge devices such as satellites. In contrast, SNNs leverage sparse, asynchronous processing to efficiently handle high-dimensional time-series data, such as satellite data. Their inherent temporal processing capabilities make them particularly well-suited for detecting anomalies in dynamic and resource-constrained environments, where rapid and reliable decision-making is crucial.
A key enabler of SNNs in space applications is the integration with FPGAs, which offer significant advantages in terms of low power consumption, real-time processing, and adaptability to mission-specific requirements. By deploying SNNs on FPGAs, satellites can perform onboard anomaly detection with minimal energy overhead, reducing reliance on ground-based processing and lowering data transmission loads.
To demonstrate the potential of this approach, our team developed a spiking autoencoder and applied it to the ESA Anomalies Dataset, a collection of satellite telemetry data. The solution stems from an in-depth study of the biological models underlying neuronal function, combined with consolidated approaches from the literature on anomaly detection in time-series data. Autoencoders are one of the most used and consolidated architectures for anomaly detection on time series. Their working principle is based on learning to represent the characteristics of an input sample in a compressed form (encoding) and then to reconstruct the input itself from the compressed representation (decoding). The idea is to measure the error between the input and the reconstruction and to discriminate between normal and anomalous data using the reconstruction error. The SNN-based autoencoder was trained to reconstruct normal operational patterns, flagging anomalies as high reconstruction-error values. Preliminary results show that this method effectively captures complex temporal dependencies while maintaining high detection performance with significantly reduced energy consumption compared to conventional deep learning models. Compared to benchmark approaches, it achieves very good performance with very little data, enabling future implementation of on-chip (and possibly online) learning. In particular, we reached a 99.7% accuracy score with a training dataset of only 3 months, tested on the full mission dataset (84 months). This fusion of SNNs, FPGA-based neuromorphic hardware, and real-world satellite data paves the way for intelligent and autonomous anomaly detection in future space missions.
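The reconstruction-error principle described above is illustrated by the following minimal sketch, using a plain (non-spiking) PyTorch autoencoder for brevity; layer sizes, window length, and the threshold rule are our placeholders, not the authors' SNN:

    # Autoencoder anomaly scoring: high reconstruction error = likely anomaly.
    import torch
    import torch.nn as nn

    window = 64  # length of one telemetry window

    autoencoder = nn.Sequential(          # encode: compress the window ...
        nn.Linear(window, 16), nn.ReLU(),
        nn.Linear(16, 4), nn.ReLU(),      # ... to a 4-value latent code,
        nn.Linear(4, 16), nn.ReLU(),      # then decode: reconstruct the input
        nn.Linear(16, window),
    )

    def anomaly_score(x):
        """Mean squared reconstruction error: high on patterns unseen in training."""
        with torch.no_grad():
            return torch.mean((autoencoder(x) - x) ** 2, dim=-1)

    # After training on normal telemetry only, flag windows whose score exceeds
    # a threshold calibrated on the training residuals (e.g. mean + 3*std).
    print(anomaly_score(torch.randn(8, window)))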
Commercial off-the-shelf computer modules (CMs) bear the potential to bring unprecedented computing power to space. Combining CM redundancy for critical operations with detect-recover schemes for less critical ones, we highlight how CMs can match the safety of radiation-hard CMs at a fraction of the cost. We combine CMs with an FPGA voter to provide fault masking through agreement where necessary and parallel computing where possible, highlighting the benefits but also the inherent costs of CM-level hypervisor-controlled radiation tolerance. Early results from a proton-beam radiation test confirm the need for hypervisor-controlled reactive power cycling to fend off single-event latch-ups and proactive rejuvenation to counter other radiation faults at all levels of the hardware/software stack. The results also indicate the need for external monitoring and fast boot to withstand even the most severe situations, like solar flares.
The Photospheric Magnetic Field Imager (PMI) will be one of the payload instruments on board the European Space Agency (ESA) VIGIL mission. It will provide vector magnetograms and tachograms of the solar photospheric plasma as valuable information for space weather diagnostics.
The PMI instrument consists of an Electronics Unit and an Optics Unit, which are connected through a harness. The Electronics Unit contains a specialized Digital Processing Unit (DPU), which integrates a System Controller (SyC), implemented in a GR712 processor, and a Main Processing Unit (MPU), based on a Space-Grade AMD Kintex™ UltraScale™ XQR FPGA. The DPU also includes various memories and external interfaces.
The telemetry limitations of the mission, located at 1 au, at the L5 Lagrange point, the requirement for continuous 24/7 monitoring of the Sun's photospheric vector magnetic field and line-of-sight velocity, and the need for low-latency (25 min) and high-cadence (30 min) data products drive the implementation of a sophisticated onboard data reduction process. Additionally, partial scientific analysis is also performed onboard to minimize the size of the data products.
In its nominal observational mode alone, PMI acquires 19.33 Gbit of data every 30 min, consisting of 24 images (four polarization states at six wavelengths) with a resolution of 2048 × 2048 pixels and a depth of 12 bits per pixel. Each image is formed by accumulating 16 camera frames during the initial processing stage. Through onboard processing, this data volume is significantly reduced to around 100 Mbit per dataset. The final dataset includes 5 maps (B, γ, φ, vLOS, and Ic) at 2048 × 2048 px resolution, at a maximum of 6 bits per pixel, along with a Low Polarization Mask image at 2048 × 2048 resolution, at 1 bit per pixel.
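As a cross-check of these figures (our arithmetic, not from the abstract), the raw volume per dataset is

    16 \times 24 \times 2048^2 \times 12\ \text{bit} \approx 1.93 \times 10^{10}\ \text{bit} = 19.33\ \text{Gbit},

while the reduced products bound the output at

    5 \times 2048^2 \times 6\ \text{bit} + 2048^2 \times 1\ \text{bit} \approx 1.3 \times 10^{8}\ \text{bit} \approx 130\ \text{Mbit},

an upper bound consistent with the quoted ~100 Mbit once per-pixel depths stay below their maxima.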
To achieve this functionality, the FPGA-based MPU is responsible for high-performance data processing and reconfigurability, efficiently executing key tasks such as data accumulation, pre-processing, processing, compression, and storage in external memory. The GR712 processor, acting as the SyC, oversees system control and handles communication with the spacecraft.
In summary, we present how the next generation of Xilinx devices for deep space can provide great computing capabilities, simplifying the PMI DPU design without compromising its performance.
Single Event Upsets (SEUs) affecting the configuration memory (CRAM) of programmable logic are a main source of faults that eventually lead to system failure due to hardware design corruption. The lack of information on how resources are programmed in CRAM significantly limits the ability to develop accurate reliability-evaluation methodologies and tailored mitigation strategies, which could benefit from knowledge of how SEUs can modify a circuit. Additionally, the lack of efficient integration mechanisms with first-party tools inhibits the automation of radiation-testing experiments and robustness-evaluation campaigns.
PyXEL is a Python-based toolkit that supports robustness-assessment experiments through automation and CRAM analysis. PyXEL allows the evaluation of netlists and implemented designs through non-invasive fault emulation, acting on the actual device and enabling a fast and accurate analysis based on used resources and emulated faults. PyXEL offers comprehensive reliability-analysis techniques, including fine-grained fault injection, specific fault emulation (e.g., open routing, LUT corruption), and fault localization during radiation testing. PyXEL offers techniques and insights not currently available from vendor or third-party tools, assisting the designer in solving the mapping between CRAM and hardware and in detecting the parts of the design most sensitive to SEUs while evaluating the overall design robustness. Additionally, PyXEL provides a methodology for visualizing, decoding, and analyzing the configuration data of programmable hardware devices, and it facilitates the development of mitigation solutions based on customized implementations.
The comprehensive analysis flow supported by PyXEL includes fault-model extraction during radiation testing, assessment of system and module robustness through fault-injection campaigns, and support for the implementation of hardening techniques. PyXEL supports the latest AMD devices, such as the 7 Series, UltraScale, and UltraScale+.
A common situation in hardware design for complex space systems is the need for simplification, driven by requirements related to weight, volume, power consumption, and cost. This often leads to resource limitations that pose significant challenges for designers.
Modern FPGAs can provide resources that help address these challenges through less traditional approaches. It is crucial for designers to have a comprehensive understanding of the toolbox that FPGA technology offers to explore alternative solutions when conventional methods are not applicable.
In this case study, a large number of ADCs needed to be properly connected to a central processing FPGA implemented on a PolarFire device. Due to hardware design constraints, the number of signals from the ADCs to the FPGA exceeded the available backplane connections. Additionally, challenges arose in meeting the mutual timing constraints of the routable interface signals.
The problem was resolved by combining system-level control with PolarFire features, leading to substantial external hardware simplification. The resulting solution meets all system requirements and has proven to be more robust than the originally intended architecture, both in minimizing points of failure and in its resilience to long-term variations, such as aging and signal drift.
In an era where technology underpins nearly every aspect of our lives, dependable hardware and software have never been more critical to ensuring safety and security. The rapid pace of technological advancement often outstrips the ability to implement rigorous testing and validation processes, making it difficult to ensure that systems are both reliable and secure. At the Dependable Computing Systems group of the University of Twente, we employ a test-in-the-loop methodology for our dependability design and research. The loop is depicted in Figure 1 and starts with a design, followed by bench tests, beam tests, and finally validation. We focus on the (co)design of dependable hardware and software frameworks; examples include probabilistic instruction validators, a dependable execution environment, a robust random-forest algorithm, and runtime monitoring. These software and hardware designs are first validated at the bench: besides established simulation techniques, emulation-based fault injection is used extensively, as well as side-channel analysis with custom tools. However, in the context of reliability evaluation, real beam experiments yield real results that allow us to establish a strong correlation with fault-emulation techniques; therefore, beam experiments are explicitly included in the methodology. Experiments are performed on flash-based FPGAs and ASICs with neutrons and protons. The final step is validating the designs against the bench and beam experimental results; key metrics are cross-sections and architectural and program vulnerability vectors. This validation serves as input for enhancement and refinement of the designs. The use of FPGAs is indispensable in this loop, as both bench and beam tests are primarily performed with these devices. In summary, our focus on both theoretical and practical validation methods underscores the importance of dependability in technology and aims to ensure that innovations are both effective and secure.
AMD Versal adaptive SoCs offer a combination of scalar and vector processing resources along with traditional FPGA logic gates, memory and DSP resources, providing highly capable platforms for integration of dense signal processing and AI inferencing in space-flight applications. In this presentation we review the latest qualification and radiation data for the AMD XQR Versal adaptive SoC devices, and look at how their reconfigurable heterogeneous computing resources enable demanding applications such as digital beamforming, STAP radar processing, and spacecraft telemetry anomaly detection to be performed on orbit.
In this presentation, the use of open-standards-based modules to accelerate the development cycle of space applications using AMD Versal devices will be discussed. The presentation starts with a success story: a currently in-orbit hyperspectral sensor, on a now-extended mission on the ISS (International Space Station), based on SpaceVPX and AMD Zynq7 technology. Following this, future steps will be explored, with a description of the latest developments based around AMD Versal hardware. Several applications have been prototyped, including laser spectroscopy, AI-based anomaly detection, and a 1-million-point FFT implementation; details of these three applications will be presented. Deployment options, and the rapid development cycle that using standard modules allows, will be explored through two hardware platforms now available off the shelf from Alpha Data. These also follow the SpaceVPX standard and are suitable for developing and prototyping Versal solutions for orbit, using designs that take components with an appropriate radiation tolerance. To conclude, the application requirements of next-generation hyperspectral sensor processing will be explored alongside the suitability of these platforms to rapidly deliver the increased performance required.
We will present the latest space-grade power architecture supporting an AMD Versal ACAP FPGA, featuring a PMBus-based power-management infrastructure. Radiation performance data will be presented for all utilized PMICs as well as for the PMBus controller, targeting New Space applications for low-Earth-orbit deployment. Our modular architecture approach consists of a main FPGA board and an auxiliary power-architecture board, allowing the independent upgrade of the power components for different mission requirements. The PMBus-controlled power architecture is also fully protected against power-rail loss by redundant rails that automatically take over the load in case a rail fails. This architecture has already proven to be very effective in mission-critical terrestrial applications such as the banking-server industry. The architecture features single-phase as well as multi-phase power-management ICs, 40 A power driver stages, a dual-core M4 microcontroller serving as the PMBus controller, and multiple XOR controllers to manage the power redundancy. Communication between all components of the architecture is ensured through PMBus and I2C bus protocols. Redundant rails are likewise switched by silicon-based MOSFETs from Infineon. The base power design for the AMD Versal ACAP FPGA targets a core current capability of 200 A on VCC_int. PMBus allows the continuous sensing and control of the PMIC components, and real-time performance data such as output voltage, current level, and failure modes are tracked during operation. Single-event effect data will be presented for the individual power components, with a focus on functional interrupts and single-event transients occurring in the switch nodes. Finally, we will present geosynchronous and space-station orbit upset rates.
ProtoSDK is a low-code software development kit (SDK) aimed at simplifying the deployment of deep learning (DL) models on FPGA-based devices for space-related applications. By removing the necessity for specialized programming knowledge, ProtoSDK enhances accessibility and facilitates seamless workflows from selecting a model to deploying it on an FPGA.
The SDK enables an end user to select from predefined model architectures, configure relevant hyperparameters, and start the training process; the level of quantization is also selected. After training, the quantized model is benchmarked against the unquantized baseline to determine the effectiveness of quantization. The model is then passed to the streamlining and synthesis engine, which carries out the necessary transformations to generate and synthesize the IP cores to be integrated into an FPGA design. Depending on the target FPGA platform, either a complete bitstream file can be generated, or standalone IP cores that require manual stitching into the existing FPGA design. Such deployments are further validated and benchmarked on hardware devices to ensure reliability, performance, and power usage in resource-constrained environments such as onboard data processing on satellite platforms, where power consumption and memory are limited.
ProtoSDK enables rapid prototyping and faster time to market with various state-of-the-art machine learning models, accelerated with FPGAs. This enables better data processing capabilities and lowers downlink costs.
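A purely hypothetical sketch of the low-code flow described above is shown below; ProtoSDK's actual API is not published here, so every module, function, and parameter name is invented solely to illustrate the select -> quantize -> train -> benchmark -> synthesize sequence:

    # Hypothetical low-code flow (all names invented for illustration).
    from protosdk import ModelZoo, Trainer, Synthesizer  # hypothetical modules

    model = ModelZoo.get("resnet8", num_classes=4)       # predefined architecture
    trainer = Trainer(model, dataset="eo_patches/",      # hypothetical dataset path
                      epochs=30, quant_bits=4)           # chosen quantization level
    quantized = trainer.fit()

    # Benchmark the quantized model against the unquantized baseline.
    report = trainer.benchmark(baseline=ModelZoo.get("resnet8", num_classes=4))
    print(report.accuracy_drop)

    # Streamline and emit either a full bitstream or standalone IP cores.
    Synthesizer(quantized, target="zynq-ultrascale+").build(output="bitstream")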
The space ecosystem is rapidly evolving, driven by the New Space paradigm, which emphasizes the use of commercial off-the-shelf (COTS) components and more powerful, reconfigurable payloads. This shift enables missions to dynamically adapt and enhance their capabilities in orbit. However, reconfigurable architectures based on SRAM-based FPGAs introduce security challenges, particularly regarding secure updates and protection against attacks. To address this, GMV is developing an embedded FPGA IP to serve as a root of trust (RoT) for reconfigurable payload controllers, which is first being integrated on Alén Space's TREVO SDR, based on a Zynq UltraScale+ with SRAM-based FPGA.
This implementation integrates post-quantum cryptography (PQC) with a focus on Kyber-512, ensuring secure key exchange resistant to quantum attacks. Additionally, a True Random Number Generator (TRNG) leverages FPGA jitter as a physical source of randomness to enhance cryptographic robustness. The RoT is embedded in an immutable FPGA partition, while the remaining FPGA fabric remains reconfigurable for mission-specific processing. Initially, a hybrid HW/SW co-development approach was adopted, where PQC operations were partially implemented in software and accelerated in the FPGA. Subsequently, a full FPGA-only solution was developed to enhance security by isolating the cryptographic functions entirely within hardware.
The system further supports a Trusted Execution Environment (TEE), enabling secure enclave-based execution of sensitive operations while maintaining flexibility for payload reconfiguration. This architecture ensures a secure and scalable foundation for in-orbit reconfigurability, addressing the evolving needs of modern space missions while maintaining robust cryptographic security.
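For context, the Kyber-512 key-encapsulation flow at the protocol level can be exercised on the ground with the open-source liboqs-python bindings (our sketch of the generic KEM exchange, not GMV's FPGA implementation; the algorithm identifier string is an assumption tied to the installed liboqs build):

    # Kyber-512 KEM exchange: one side publishes a key, the other encapsulates.
    import oqs

    with oqs.KeyEncapsulation("Kyber512") as satellite, \
         oqs.KeyEncapsulation("Kyber512") as ground:
        public_key = satellite.generate_keypair()                 # satellite publishes pk
        ciphertext, ss_ground = ground.encap_secret(public_key)   # ground encapsulates
        ss_satellite = satellite.decap_secret(ciphertext)         # satellite decapsulates
        assert ss_ground == ss_satellite                          # shared session secret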
In recent years, the role of Artificial Intelligence (AI) has become increasingly prominent in the space industry. Its ability to perform better than traditional methods, in particular for image processing, has further accelerated its development within the sector. In this context, the architecture of a spacecraft’s processing system is crucial in determining both the efficiency of inference algorithms and the platform’s adaptability across different missions.
For high-criticality missions, however, advanced Neural Network (NN) inference solutions used on the ground are often unsuitable for space due to limited radiation hardening and power budget constraints. Moreover, the development of custom solutions for this category of mission involves significant costs in terms of design and fabrication. Among emerging alternatives, Coarse-Grained Reconfigurable Array (CGRA) architectures have shown promise for NN inference on Earth and are now being explored for space applications.
This session introduces the CGR-AI Engine, a highly parameterizable CGRA-based platform designed to accelerate Digital Signal Processing (DSP) algorithms and NNs. The platform consists of a CGRA processing core, Data Mover Engines (DMEs), memory blocks, and a RISC-V Central Processing Unit (CPU) for efficient data management via the programmable DMEs. Data and RISC-V firmware can be loaded onto the platform’s local memories by an external CPU host.
Our work includes a Design Space Exploration (DSE) activity to evaluate different implementations of the CGR-AI Engine using the Radiation-Hardened-By-Design (RHBD) DARE65T standard cell library platform. We demonstrate how the CGR-AI Engine provides a flexible solution for executing Convolutional Neural Network (CNN) layers, tailoring the architecture to meet the stringent requirements of space applications.
To further validate our approach, we developed an FPGA-based SoC prototype on the Xilinx ZCU104 device, evaluating its performance, resource utilization, and power consumption. This prototype serves as a functional platform to demonstrate the CGR-AI Engine’s efficiency for on-board processing in space missions.
The AMD Versal radiation-tolerant FPGA family represents a cutting-edge platform for high-performance applications. Versal offers unparalleled capabilities on space-qualified devices, featuring integrated GTY transceivers that support lane speeds of up to 25 Gbit/s. These attributes make them ideal for implementing advanced spacecraft communication protocols such as SpaceFibre (SpFi).
SpaceFibre (ECSS-E-ST-50-11C) is a spacecraft on-board data-handling network technology, building on its predecessor, SpaceWire, to deliver significantly higher data rates. SpFi integrates critical reliability features, including Quality of Service (QoS) and Fault Detection, Isolation, and Recovery (FDIR). Designed for scalability and interoperability, SpFi supports multi-lane configurations, enabling seamless communication over both copper and fibre-optic cables. Its low-latency and high-throughput capabilities ensure robust performance, meeting the demands of modern spacecraft operations. STAR-Dundee SpFi IP cores have achieved TRL-9, having been deployed in at least six operational missions since 2021, with adoption planned for over 60 additional spacecraft.
Recent heavy-ion radiation testing conducted at GANIL (France) validated the resilience of the Versal FPGA platform and the SpFi IP cores. The campaign employed high LET levels, exceeding 40 MeV·cm²/mg, to evaluate susceptibility to Single Event Effects (SEEs).
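The cross-sections quoted below follow the standard definition used in such campaigns: the number of observed events divided by the particle fluence delivered to the device,

    \sigma_{\text{SEE}} = \frac{N_{\text{events}}}{\Phi}\ [\text{cm}^2], \qquad \Phi:\ \text{fluence in particles/cm}^2.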
This work presents the cross-section values measured for the Versal transceivers, including a detailed analysis of the effects on internal transceiver components, such as the TXPLL and data path, as well as SpFi fabric logic. SpFi demonstrated robust error recovery against SEEs affecting the transceiver, with transient errors self-recovering in under 4 µs and persistent errors automatically recovering in 2 ms. No data errors were observed due to SEEs in the transceiver. For SEEs affecting the FPGA fabric, when no Distributed Triple Modular Redundancy (DTMR) was used, the inbuilt XilSEM scrubbing mechanism ensured automatic recovery within tens of milliseconds, although with potential data loss. Applying DTMR to the SpFi IP achieved exceptional reliability, effectively eliminating data errors due to SEEs in the fabric.
Additionally, the campaign successfully demonstrated a 100 Gbit/s SpaceFibre link operating under radiation. This was achieved using a quad-lane configuration, with each lane operating at 25 Gbit/s. This represents a significant milestone, as it will be the first published result for the radiation testing of a 100 Gbit/s communication link.
The results underscore the potential of AMD Versal FPGAs and SpFi IP cores as a highly reliable solution for spacecraft data-handling systems, paving the way for next-generation high-throughput, radiation-tolerant communication architectures.
The increasing complexity of deep learning models has created a demand for high-performance computing platforms that can efficiently execute inference tasks. The flexibility of FPGAs makes them an appealing choice for accelerating such tasks. Recently, there has also been growing interest in RISC-V-based solutions combined with dedicated AI accelerators to enhance computational capabilities. While these platforms successfully address performance requirements, their implementation in reconfigurable logic for safety-critical applications—such as space missions—introduces reliability challenges. Specifically, Single Event Upsets (SEUs) in the Configuration RAM (CRAM) can alter circuit behavior, potentially leading to system failure. Traditional redundancy-based fault-tolerance techniques are impractical for Deep Neural Network (DNN) accelerators due to limited hardware resources and the inherently high parallelism of their datapaths. Therefore, fast detection of radiation-induced faults is critical to prevent mission-compromising consequences, along with an efficient recovery mechanism to minimize system downtime.
To address these challenges, we propose an FPGA-based heterogeneous computing platform that integrates the NEORV32 RISC-V processor with a systolic-array-based DNN accelerator, implemented on an AMD KCU105 device. Our system features a built-in self-test and self-recovery mechanism that leverages algorithm-based fault tolerance to detect errors in the accelerator’s datapath during neural network execution. By extending the accelerator’s ISA, inference can run in either standard or testing mode, enabling fault detection on the order of a few clock cycles rather than seconds. When a fault is detected, we exploit the FPGA's partial reconfiguration capability to trigger dynamic partial reconfiguration (DPR) and reload only the affected bitstream section—preserving ongoing computations while restoring the faulty accelerator.
On the AMD KCU105 device, the proposed platform reduces system downtime by up to 900× compared to full-device reconfiguration, ensuring rapid fault recovery and enhancing system availability. Additionally, our approach limits the worst-case inference execution overhead to 30%, a significant improvement over traditional methods that can incur up to 96% overhead.
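The core idea of algorithm-based fault tolerance for matrix hardware can be shown in a few lines. The sketch below is a simplified illustration (the accelerator's actual checksum scheme and ISA extension are not detailed in this abstract): it verifies a matrix product against row and column checksums that can be recomputed from the inputs, so a single corrupted output element is detected without duplicating the datapath.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-6):
    """Compute C = A @ B and check it with ABFT checksums:
    the column sums of C must equal (column sums of A) @ B, and
    the row sums of C must equal A @ (row sums of B)."""
    C = A @ B
    col_ok = np.allclose(C.sum(axis=0), A.sum(axis=0) @ B, atol=tol)
    row_ok = np.allclose(C.sum(axis=1), A @ B.sum(axis=1), atol=tol)
    return C, col_ok and row_ok

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
C, ok = abft_matmul(A, B)
assert ok                                  # fault-free run passes

C[2, 3] += 1.0                             # emulate an SEU in one MAC result
col_ok = np.allclose(C.sum(axis=0), A.sum(axis=0) @ B)
row_ok = np.allclose(C.sum(axis=1), A @ B.sum(axis=1))
assert not (col_ok and row_ok)             # checksum mismatch flags the fault
```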
Hardware acceleration for an edge-AI application utilizing a convolutional neural network (CNN) typically involves distributing intensive computational tasks, such as matrix convolutions or multiplications, across multiple cores running in parallel. This can be achieved using a static GPU-like (Graphics Processing Unit) architecture or a configurable array of cores like the one available in the AMD Versal AI Core device [1, 2], a monolithic chip that embeds a processing system, programmable logic, and an array of 400 AI Engines (AIEs): specialized computation tiles well suited to artificial intelligence (AI) oriented applications. The Versal thus offers interesting potential for optimized AI acceleration [3, 4], with the flexibility of a configurable device for radiation hardening of space applications.
In this work, we present the testbench that we have developed in the context of Single Event Effects (SEE) assessment of the Versal AIEs under laser testing [5]. It is based on ResNet50 [6], a CNN designed to efficiently train very deep models using residual connections, offering strong performance in image classification tasks. AMD has used ResNet50 on the Versal AI Engines for SEE test purposes [7]. The development and design approach were based on the PetaLinux OS and the AMD Deep Learning Processing Unit (DPU) IP.
In our testbench, one of the A72 APU cores of the processing system (PS) runs a bare-metal ResNet50 application based on a light C++ framework for CNN modeling, with part of its calculation tasks delegated to AIEs to perform accelerated on-board inference. The main test loop of this application consists of feeding in the input data, executing lightweight ResNet50 operations, triggering the AIE graph execution, and comparing output data with expected (golden) results. The acceleration graph uses 352 AIEs to accelerate the residual layer calculations, including convolutions and post-convolution additions, by performing these operations in parallel. Some of the AIEs execute dispatch kernels to manage the flow of input and output data.
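In outline, the main test loop behaves like the Python paraphrase below (the real application is bare-metal C++ on the A72; `preprocess` and `run_on_aie` are placeholder hooks, not the framework's actual API):

```python
def run_test_loop(images, golden_outputs, preprocess, run_on_aie):
    """One pass of the SEE test: feed inputs, run accelerated inference,
    and count mismatches against pre-computed golden results."""
    errors = 0
    for img, golden in zip(images, golden_outputs):
        x = preprocess(img)        # lightweight ResNet50 stages on the PS
        y = run_on_aie(x)          # trigger the 352-tile AIE graph
        if y != golden:            # any mismatch is a candidate SEE signature
            errors += 1
    return errors

# Trivial self-check with identity stand-ins for the hooks:
imgs = [1, 2, 3]
assert run_test_loop(imgs, imgs, lambda v: v, lambda v: v) == 0
```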
Further details about the development, the design flow and the graph implementation will be presented, and some specific challenges will be discussed and illustrated by laser-testing results on individual AIEs.
References:
[1] P. Maillard et al., "Neutron and 64 MeV Proton Characterization of Xilinx 7nm Versal Multicore Scalar Processing System (PS)," IEEE REDW, pp. 18-22, USA, 2022.
[2] A. Dufour et al., "Heavy-ion and proton Single Event Effect (SEE) characterization of 7nm FinFET AMD Versal," RADECS, Toulouse, France, 2023.
[3] A. Arora et al., "MaxEVA: Maximizing the Efficiency of Matrix Multiplication on Versal AI Engine," ICFPT, pp. 96-105, Japan, 2023.
[4] A. Leftheriotis et al., "Evaluating Versal ACAP and conventional FPGA platforms for AI inference," MOCAST, Greece, 2023.
[5] S. Achaq et al., "Bottom-up Analysis of the Impact of Single Event Effects in a CNN Hardware Accelerator using Laser Testing," RADECS, Canary Islands, Spain, 2024.
[6] K. He et al., "Deep Residual Learning for Image Recognition," IEEE CVPR, USA, 2016.
[7] P. Maillard et al., "Protons Evaluation of 7nm Versal AI Engines (AIE) based Radiation Tolerant Platform for Deep Learning Applications," NSREC, Ottawa, Canada, 2024.
We will present the latest full-stack solutions for sensor processing and machine learning based on native hardware and software purpose-built for Lattice FPGAs. Computers are not built to sense directly: intermediate interfacing, preprocessing, and contextual AI enable easy adoption of sensors by large-scale compute. While the industry has benefitted from built-in FPGA features for machine learning, we present a hardware-in-the-loop algorithm simulation and development environment with a full compiler and inference engine for a variety of vision sensors.
Frontgrade (formerly CAES, Cobham, Aeroflex, UTMC) pioneers the future and underpins many of the world’s most critical missions. Our rad-hard and rad-tolerant microelectronics empower the world’s leading spacecraft: from high-throughput commercial communications satellites and Earth observation satellites to crewed spaceflight, deep space exploration, and the latest new space constellations.
Too often, system designers are faced with a choice between antiquated FPGA technology and devices that are much larger and more power-hungry than required. Frontgrade Technologies partnered with Lattice Semiconductor to bring their small-footprint, low-power Nexus platform to the space market. The Nexus platform is based on 28nm FDSOI technology, delivering strong performance with radiation tolerance.
Frontgrade will present details of the CertusPro-NX-RT devices, summarize radiation testing of the CertusPro-NX-RT for TID, proton, and heavy ion, and highlight additional upscreening underway on the CertusPro-NX-RT.
To meet the high market demand from low Earth orbit missions for reliable plastic-encapsulated microelectronic components, Frontgrade is at the technology forefront, delivering Space PEM qualified devices. Frontgrade will detail the Space PEM screening and qualification flows that will be offered and compare them to the NASA and ECSS flows.
Hyperspectral sensors are playing an increasingly relevant role in space mission payloads. They acquire vast amounts of data, which are useful in multiple applications but at the same time are difficult to manage because of the limited capacity of both communication channels and on-board storage. Data compression thus becomes mandatory to reduce raw data volume, and with the latest generations of higher-resolution sensors, a shift from lossless to lossy compression methods may be needed to achieve the required compression ratios.
The Consultative Committee for Space Data Systems (CCSDS) has published several compression standards conceived for space missions. Among them, CCSDS 123 is aimed at hyperspectral data compression using a predictive approach. Issue 1 focused on lossless compression, while the more recent Issue 2 extends the preprocessing stage to also provide near-lossless compression capabilities and, in addition, proposes an alternative entropy coding method, more complex and specifically devised for low-entropy data.
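To make the predict-and-quantize idea concrete, here is a deliberately simplified toy coder in Python: a previous-sample predictor with uniform residual quantization (not the actual CCSDS 123 predictor, which is adaptive and three-dimensional). Because the encoder predicts from the reconstructed rather than the original samples, the absolute reconstruction error stays bounded by the parameter m without accumulating.

```python
def nl_encode(samples, m):
    """Toy near-lossless coder: quantize prediction residuals so that
    |sample - reconstruction| <= m at the decoder."""
    step = 2 * m + 1
    prev, indices = 0, []
    for s in samples:
        r = s - prev                          # prediction residual
        q = (abs(r) + m) // step              # quantizer index magnitude
        q = q if r >= 0 else -q
        indices.append(q)                     # entropy-coded in a real coder
        prev += q * step                      # decoder-visible reconstruction
    return indices

def nl_decode(indices, m):
    step, prev, out = 2 * m + 1, 0, []
    for q in indices:
        prev += q * step
        out.append(prev)
    return out

data = [100, 102, 107, 105, 220, 221]
rec = nl_decode(nl_encode(data, m=2), m=2)
assert all(abs(a - b) <= 2 for a, b in zip(data, rec))   # bounded error
```

Setting m = 0 degenerates to lossless operation, which is the Issue 1 behaviour.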
This work presents a new IP core fully compliant with the CCSDS-123.0-B-2 standard for compression of hyperspectral data in both lossless and near-lossless regimes. The IP core is widely configurable and supports almost every standard feature, including all proposed entropy coding methods (sample-adaptive, block-adaptive, and hybrid) as well as the most common sample arrangement formats: BIP, BIL, and BSQ.
Initially, a design space exploration phase was conducted, which helped us identify the best strategies to mitigate the data dependencies present in the preprocessor datapath. The IP core was then developed as synthesizable, technology-independent VHDL code. It builds on a previous development, the SHyLoC CCSDS 123 IP core, a hardware implementation of the CCSDS-123.0-B-1 standard. However, a substantial redesign effort was required to support near-lossless compression and all the new features in a single, flexible architecture covering all foreseen options, along with an adaptable control scheme whose throughput is optimized for the dominant data dependencies of each configuration. This allows us to support most of the standard's features while providing a throughput of 1 sample per clock cycle for a subset of the configuration space. The IP core also implements the new hybrid entropy coder to provide optimal compression ratios in the near-lossless regime. The core reaches clock frequencies of 125 MHz on Kintex UltraScale FPGA technology at a moderate usage of logic resources (around 5% of LUTs and 13% of BRAMs). Implementation results for Microchip and NanoXplore FPGA devices will also be provided.
The IP core has been developed in the scope of a project funded by ESA with the aim of producing a technology-agnostic, reusable core. IUMA leads the project, with the participation of DSCAL, UAB, and TASIS, who contributed the hybrid encoder design, verification, and validation, respectively. This new core will join the ESA portfolio of IPs, enabling its use as a building block to reduce the development costs of future space missions.
AGGA (Advanced GPS/Galileo ASIC) denotes a family of ASICs for processing satellite navigation signals. The development started with AGGA-2 before the year 2000 and continued with AGGA-3 in 2004 and AGGA-4 in 2010. In 2022, ESA initiated an activity to convert the AGGA-4 ASIC RTL sources into an IP core for a range of FPGAs used in ESA space missions; this work was implemented by daiteq.
The talk will describe the features of the AGGA-4 IP core and the major design choices made during the design process, notably those that significantly improve the performance of DMA transfers of GNSS observables compared to the AGGA-4 ASIC, and the adoption of the technology mapping library from the LEON2-FT IP core. Special attention will be paid to the validation of the AGGA-4 IP core and the testing infrastructure needed to execute, in hardware, the behavioural tests that are part of the VHDL design database. The talk will conclude with the implementation characteristics of the AGGA-4 IP core on current AMD, Microchip, and NanoXplore FPGAs.
In addition, the talk will describe newly implemented features that enable cycle-by-cycle debugging of GNSS software: the AGGA-4 Debug Support Unit and an FT601-based USB 3.0 Digital Front End with a data bandwidth of up to 3 Gbps.
Requirements Tracking (aka Specification Coverage) is getting more and more attention and is critical for space projects. Unfortunately, tracking verification against requirements is often handled manually, which is very time-consuming, error-prone, and boring.
UVVM’s Specification Coverage was developed in cooperation with ESA to provide a truly efficient solution to Requirements Tracking and the Requirements Traceability Matrix (RTM), and has resulted in a very user-friendly solution that significantly improves FPGA quality and development efficiency.
The tool automatically generates the reports you need for both mission-critical and safety-critical projects, and in fact for any project where quality is important.
UVVM is free and Open Source and also provides lots of other time saving and quality improving features, like Log and Alert handling, Constrained random, Functional coverage, BFMs and VVCs for lots of interfaces, Scoreboards, Watchdogs, etc...
This presentation gives a brief overview of Specification Coverage before going into more details of proper Requirements Tracking. It also shows what is provided with UVVM and how this could be applied to your testbench.
With the ever-increasing complexity of digital designs targeting both ASIC and FPGA technologies, the verification gap is not-so-slowly reaching the space sector. Although the space sector has been traditionally driven by rad-hard designs and devices that traded design capabilities and performance for radiation tolerance, recent advances in both microelectronics technology and usage of commercial off-the-shelf (COTS) components in the space sector have greatly increased the average complexity of digital designs in space applications.
In this age of even-more-complex digital designs, Formal Verification poses itself as a promising complement to simulation-based verification, with the potential to both uncover hard-to-find bugs, by performing exhaustive breadth-first searches, and to greatly increase the confidence in designs, by mathematically proving they work as expected.
The semiconductor sector is not blind to these facts: industry trends show an increase in the use of formal methods throughout the microelectronics industry… but not as much among ‘VHDL-centric teams’, because the majority of formal verification is typically done using SystemVerilog Assertions (SVA).
Therefore, the European space sector, which has traditionally relied predominantly on VHDL, is at risk of being left behind. If nothing is done, it will be a late adopter of formal verification techniques, resulting in high entry barriers, low efficiency, and much duplication of effort as teams struggle to implement these methods.
But simply recommending the usage of formal methods is not enough. Formal verification is difficult, both conceptually and in tool usage. Thus, there is a need to lower the adoption barriers for VHDL-centric teams within the European space sector.
With that objective in mind, the presentation will explain the advances on:
1. The definition of a Formal Verification Methodology for submodules and IP cores for the European space sector,
2. The development of a software framework that acts as an easy-to-use abstraction layer to interact with the formal tools,
3. A repository of examples, in increasing order of complexity, selected so the different concepts in the methodology can be learned incrementally, in a structured way.
This work is funded by the European Space Agency, through the activity "Lowering the adoption barriers for Formal Verification of ASIC and FPGA designs in the Space sector", ESA Contract No. 4000144681/24/NL/GLC/ov. The methodology, framework, and repository of examples will be published under a free and/or open-source (FOSS) license.
Abstract
Developing and deploying a verification methodology can be costly and time consuming. Going without one will be even more costly due to bugs escaping into production hardware systems.
Open Source VHDL Verification Methodology (OSVVM) provides the VHDL community with an already developed, open-source solution. OSVVM implements all of the capabilities of a modern verification methodology: transaction-based testing, a verification framework, verification components, self-checking tests, message handling, error tracking, requirements tracking, constrained random testing, scoreboards, functional coverage, co-simulation with software, test automation, and a comprehensive set of test reports.
This presentation examines how these capabilities will benefit your projects.
SystemVerilog+UVM also provides a similar set of capabilities. Unfortunately, SV+UVM ended up absurdly complex to use – instead of using a module (entity/architecture in VHDL) with its built-in concurrency, SV+UVM uses OO, sequential code, and fork and join (to get concurrency). As a result, SV has failed to unify the design and verification communities.
VHDL+OSVVM on the other hand uses entity/architectures to create verification components and libraries of subprograms (procedures and functions) to extend VHDL into a complete verification language. In doing this, OSVVM creates verification capabilities that rival SystemVerilog+UVM while at the same time it uses VHDL language elements that are familiar to VHDL design engineers.
As a result, with VHDL+OSVVM and a good verification lead, any VHDL engineer can do verification as well as RTL design.
Benefits
OSVVM + VHDL Provide:
- A structured, transaction based testbench environment in which any VHDL engineer can write VHDL testbenches and test cases for both simple unit/RTL level tests and complex, randomized full chip or system level tests.
- Buzzword features including Constrained Random, Functional Coverage, Scoreboards, FIFOs, Memory Models, error logging and reporting, and message filtering that are simple to use and work like built-in language features.
- Powerful verification data structures that provide unmatched test reporting with HTML for humans and JUnit XML for CI tools.
Author
Jim Lewis is an innovator and leader in the VHDL community. He has 30-plus years of design and teaching experience and is well known within the VHDL community. He is the Chair of the IEEE 1076 VHDL Standards Working Group. He is a co-founder of the Open Source VHDL Verification Methodology (OSVVM) and the chief architect of the packages and methodology. He is an expert VHDL trainer for SynthWorks Design Inc. In his design practice, he has created designs for print servers, networking, fighter jets, video phones, and spacecraft.
Whether teaching, developing OSVVM, consulting on VHDL design and verification projects, or working on the IEEE VHDL standard, Mr Lewis brings a deep understanding of VHDL to architect solutions that solve difficult problems in simple ways.
Space applications are increasingly incorporating analysis and decision-making capabilities to derive insights from data flows. This requires designers to adopt advanced technology that accelerates design, verification, and debugging of system designs.
• Hardware: Architectural investigation, IP integration, advanced verification, fast debug, and efficiently using the integrated hard blocks (SERDES, PCIe®, and DDR)
• Software: Early pre-hardware software design, operating system build infrastructure, software algorithm acceleration, find and fix software bugs early
• Integration: Co-simulation of hardware and software, system-level debug, and fast iteration using fully scripted flows
PolarFire® Studio is a comprehensive suite of development tools targeting Microchip’s PolarFire 2 FPGAs, supporting RTL and C/C++ design flows to accelerate the design, verification, and debug of FPGA systems. It combines FPGA Designer, System Designer, Microchip IP, and reference designs to provide engineers with a system-level design tool for next-generation space applications.
PolarFire® FPGA Designer is a comprehensive and modern development tool designed for Microchip FPGAs. It features an intuitive and user-friendly graphical interface that supports the traditional RTL to bitstream workflow, while also offering additional capabilities for rapid design, verification, and debugging. This includes a fully scripted flow, fast design constraint creation and integration, advanced static timing and power analysis with visualization, industry-leading QuestaSim Core OEM edition simulator and the ability to find and fix bugs faster with enhanced system debug.
The integrated IP vault gives designers direct access to intellectual property (IP), providing an air-gapped design option where necessary, so applications can be completed efficiently. It speeds up the setup and configuration of integrated hard blocks, custom logic implementation, and IP portability across designs.
PolarFire System Designer integrates system-level analysis and design-tool optimizations for developing embedded software applications running on Microchip FPGAs. Designers can maintain a unified code base for application, vector processing, and high-level synthesis using abstracted C/C++ workflows, supporting modular design across various project types and catering to multiple design personas. Developers can take advantage of support for RISC-V® processor compilation, debug, and execution of embedded software; high-level synthesis to automatically compile C/C++ functions to RTL for fast verification and analysis; and embedded software/hardware accelerators to create a RISC-V-based SoC.
This presentation will discuss methods for developers to efficiently model complex systems from power planning through full design with high-level abstractions to low-level control while offering low power, high reliability and quantum security.
The RT PolarFire® family of SoC FPGAs represents a new generation of radiation-tolerant embedded systems. Built on Microchip's flight-proven RT PolarFire FPGA fabric, these devices offer outstanding power efficiency, high reliability, best-in-class security and enhanced radiation performance, making them ideal for deployment in demanding environments with broad offerings for Low Earth Orbit (LEO), deep space or anything in between.
This abstract summarizes the key features, capabilities, and radiation resilience characteristics of these SoC FPGAs. The RT PolarFire SoC FPGA leverages the RISC-V® architecture, not only supporting the flexibility of Linux®-based systems but also ensuring the determinism required for real-time control systems. Its multi-core processor, coherent with the memory subsystem, provides central satellite processing capabilities akin to the single-board computers widely used in the space industry for command and data handling, platform avionics, and payload control. This versatility enables the development of highly integrated designs that are customizable and adaptable over time while optimizing size, weight, and power considerations.
Designed to withstand the harsh radiation conditions of space, the RT PolarFire SoC FPGA eliminates the need for external scrubbers by achieving zero configuration memory upsets, reducing both system complexity and cost. By consuming up to 50% less power compared to competing solutions, it simplifies satellite design, easing thermal dissipation challenges and power management requirements.
Finally, coupled with Microchip’s Mi-V ecosystem, which supports a wide range of operating systems such as Linux, VxWorks®, and Zephyr®, designers can accelerate development timelines for mission-critical applications. Mi-V is a comprehensive suite of tools and design resources, developed with numerous third parties, to support RISC-V designs. The Mi-V ecosystem aims to increase adoption of the RISC-V instruction set architecture (ISA) and support Microchip’s SoC FPGA portfolio.
Backed by over six decades of expertise in powering spaceflight missions and a path to achieving the Qualified Manufacturers List (QML) qualification, the RT PolarFire SoC FPGA delivers a reliable, high-performance, and radiation-tolerant platform for advancing the frontiers of space exploration.
The increasing complexity of space missions demands sophisticated hardware solutions, with Field Programmable Gate Arrays (FPGAs) serving as a cornerstone. However, the intricate nature of FPGA development necessitates structured engineering approaches to guarantee reliability and performance in the harsh space environment, and systems engineering principles are paramount to achieving this.
As FPGAs become indispensable components of space systems, systems engineering becomes a more critical factor in the success of missions. This presentation will explore the key systems engineering principles that should be considered to design and implement radiation-hardened FPGA solutions, fostering a synergistic approach between FPGA technology and systems engineering for successful space applications.
The increasing complexity of space-grade FPGAs and the growing demand for reliable and efficient radiation mitigation strategies pose significant challenges for development teams. Traditional approaches to selective radiation mitigation, such as expert-driven analysis and verification, are no longer scalable or repeatable for the new breed of devices being sent into space. The challenges associated with mitigation architecture and verification are multifaceted, including the large fault state space, the need for manual fault injection, and limited debug visibility. These challenges highlight the need for a more systematic and rigorous approach to selective radiation mitigation. To address them, a comprehensive methodology is presented that leverages formal verification techniques to ensure the correctness and reliability of radiation mitigation strategies. The methodology combines formal and simulation-based verification into a systematic, rigorous, scalable, and repeatable approach to selective radiation mitigation in space-grade FPGAs, ensuring the reliability and correctness of mitigation strategies in harsh space environments.
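The flavour of exhaustive fault-space analysis can be conveyed with a toy example. The Python sketch below (illustrative only; a real flow would discharge such proof obligations with a model checker over the actual mitigation netlist) enumerates every input and every single-bit fault on one lane of a triplicated module and confirms that the majority voter masks all of them:

```python
from itertools import product

def module(x):
    """Stand-in combinational logic under TMR (4-bit Gray encoder)."""
    return (x ^ (x >> 1)) & 0xF

def voter(a, b, c):
    """Bitwise 2-out-of-3 majority vote."""
    return (a & b) | (a & c) | (b & c)

WIDTH = 4
for x in range(2 ** WIDTH):                  # every input vector
    golden = module(x)
    for lane, bit in product(range(3), range(WIDTH)):
        outs = [golden] * 3
        outs[lane] ^= 1 << bit               # inject one SEU-like bit flip
        assert voter(*outs) == golden, "single fault escaped the voter"
print("exhaustively checked: no single-lane fault propagates")
```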
Higher-level methodologies, such as model-based design, enable the faster development of complex hardware-software FPGA designs while maintaining a reasonable quality of results compared to lower-level approaches.
In this presentation, we introduce the SpaceStudio virtual platform, which includes a specialized space toolkit to complement its ecosystem. This toolkit integrates the RTEMS real-time operating system (RTOS), Siemens Catapult HLS, and Precision Hi-Rel (High Reliability), a specialized variant of Siemens' Precision tool designed for critical applications requiring high reliability.
We propose a methodology for accelerating the development of fault-tolerant systems on Zynq Ultrascale+ MPSoC targets by leveraging high-level synthesis and fault-tolerant hardware synthesis tools.
The design is modeled as a network of modules, which can be mapped to software tasks or hardware accelerators on the FPGA. The module instances exchange data and control information through abstract communication channels, such as queues and registers. This approach enables the automated generation of the required structures based on the module's target.
This modular architecture facilitates a faster architecture exploration phase, as modules can be easily moved between hardware and software, and benchmarked both in simulation and on a more timing-accurate prototype using hardware execution time probes.
The methodology also incorporates a virtual platform that provides hardware-software co-simulation using SystemC and QEMU for Cortex-A53 and Cortex-R5 processors, significantly accelerating the functional verification of designs. By combining efficient software simulation and transaction-level modeling of hardware, this platform enables the simulation of complex designs that would otherwise require hours to simulate at the register-transfer level.
A key benefit for embedded software developers is the ability to develop and validate RTEMS drivers early in the project without requiring physical access to the FPGA board.
On the implementation side, Siemens' Catapult HLS enables the rapid hardware implementation of accelerators for computationally intensive algorithm modules, significantly reducing the effort needed to translate algorithms into register-transfer level (RTL) languages. Precision Hi-Rel efficiently synthesizes hardware code generated by Catapult, while providing built-in redundancy mechanisms, such as triple modular redundancy (TMR), to enhance fault tolerance. These fault-tolerance mechanisms are handled transparently, allowing systems to achieve higher reliability without additional development time.
The methodology will be demonstrated with an implementation of a corner-detection algorithm, a widely used component in Vision-Based Navigation to improve spacecraft position estimation and enable processes like landing and space rendezvous. This use case involves detecting key features in images, a computationally intensive task critical for navigation and mapping in space missions. The corner-detection algorithm benefits from the proposed fault-tolerance mechanisms, ensuring reliable performance despite hardware faults or unexpected conditions. The targeted platform is the ZCU102 board from the Zynq Ultrascale+ family, equipped with four ARM CPUs and FPGA fabric.
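As an illustration of the computation involved (the abstract does not name the exact detector, so the classic Harris response is used here as a representative stand-in), a compact NumPy/SciPy corner detector is sketched below; it is exactly this kind of gradient-and-window arithmetic that high-level synthesis turns into a hardware accelerator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_corners(img, k=0.05, sigma=1.5, count=8):
    """Harris response R = det(M) - k * trace(M)^2, with M the
    Gaussian-weighted structure tensor of the image gradients."""
    ix = sobel(img, axis=1, output=float)
    iy = sobel(img, axis=0, output=float)
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    r = (ixx * iyy - ixy ** 2) - k * (ixx + iyy) ** 2
    flat = np.argsort(r, axis=None)[-count:]     # strongest responses
    return np.column_stack(np.unravel_index(flat, r.shape))

# A bright square on a dark background: its four corners dominate.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
print(harris_corners(img))
```

A production detector would add non-maximum suppression and fixed-point arithmetic before passing the code through HLS.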
This work is one of the results of two NAVISP projects: Agnostic Hardware/Software Codesign Framework for GNSS Software Receiver (2019–2021) and HW/SW Codesign Environment (2022–2024).
The design and verification of FPGA-based systems for space applications rely on complex toolchains that translate high-level logic into hardware implementations. Typically, only a single synthesis, placement, and routing tool is available for a given FPGA, making it difficult to assess how design choices and toolchain behavior interact. The availability of multiple toolchains allows cross-checking of design functionality, and an open-source tool can provide more avenues to inspect and control the implementation process.
To enable this, we have developed an alternative toolchain for the NanoXplore NG-Ultra, the only large, radiation-hardened FPGA made in Europe. As an activity under ESA's OSIP Ideas platform and with help from NanoXplore, we have extended nextpnr to support placement and routing for NG-Ultra. While not a goal of the project, we have also contributed an alternative synthesis flow based on Yosys, which allowed us to target individual aspects of the nextpnr flow when synthesizing test cases during development. The result is a prototype of an almost fully open-source flow, with only the bitstream generation and board programming steps still requiring vendor tools.
This presentation will discuss the technical challenges encountered in developing an alternate open source toolchain while trying not to replicate the vendor's approach. The FPGA’s large size required scaling improvements to the generic architecture-independent parts of nextpnr, while its nontraditional architecture—particularly its routing network—demanded close attention to placement strategies to ensure successful routing. This is a result of both complex routing constraints requiring careful implementation, and an overall limited amount of routing resources relative to logic elements in the NG-Ultra architecture.
Additionally, we will present our validation approach to confirm the correctness of our implementation, and benchmarking results comparing nextpnr against the vendor tool Impulse in terms of routability, resource utilization, and timing. We will explore examples where having an alternative toolchain may give us more confidence in our own design, but also cases where it gives us more flexibility to utilize resources in a different way in order to get specific behavior or test certain capabilities of the tools or the FPGA itself.
Yosys is an open-source framework for register-transfer level (RTL) synthesis, primarily used for digital logic designs written in hardware description languages (HDLs) such as Verilog. It is an essential part of open-source FPGA development workflows, enabling the transformation of high-level hardware designs into low-level representations like gate-level netlists.
Nextpnr is an open-source FPGA place-and-route tool designed to support multiple architectures. It is a key part of the open-source FPGA development ecosystem and it is a tool for converting synthesized netlists into physical configurations for FPGAs.
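For readers unfamiliar with the open flow, its overall shape is a two-step pipeline, sketched below as a small Python driver. The mature iCE40 target is used purely as an illustration (file and part names are hypothetical), since the prototype NG-Ultra backend described in this talk plugs into the same synthesis-then-place-and-route structure with its own nextpnr architecture.

```python
import subprocess

# Step 1: Yosys synthesizes RTL to a JSON netlist.
subprocess.run(
    ["yosys", "-p", "read_verilog top.v; synth_ice40 -top top -json top.json"],
    check=True)

# Step 2: nextpnr places and routes the netlist for the target device.
subprocess.run(
    ["nextpnr-ice40", "--hx8k", "--json", "top.json",
     "--pcf", "top.pcf", "--asc", "top.asc"],
    check=True)
```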
TASTE is an open-source framework for Model-Based development of accelerators used in space applications. It provides a seamless workflow that integrates design, development, and deployment while bridging multiple existing technologies instead of introducing a new modeling language.
This presentation explores the integration of ONNX-based machine learning (ML) models into the TASTE toolchain as part of an ongoing OSIP initiative. The objective is to extend TASTE’s capabilities to support ML-driven space applications, including onboard data processing and autonomous decision-making. The approach enables the seamless incorporation of ML models alongside traditional modeling tools such as OpenGeode and MATLAB Simulink.
A key focus is the integration of the open-source PandA Bambu High-Level Synthesis (HLS) tool with the ONNX-MLIR framework to generate efficient FPGA-accelerated ML code. This combination provides an automated path from high-level ML model descriptions to optimized hardware implementations, addressing the need for high-performance and energy-efficient computing in space systems.
The presentation will cover the methodology, key technical challenges, and implementation details, along with experimental results on selected ML models. Attendees will gain insights into the benefits of this approach and its potential impact on future space missions.
The main objective of this activity was to perform a technology assessment of an Achronix FPGA-based solution for an Ethernet layer 2 switch, targeting data rates of up to 100 Gbps per channel. The assessment consisted of implementation, simulation, and breadboard validation using the commercially available BittWare VectorPath S7t-VG6 accelerator card with the Achronix Speedster AC7t1500 FPGA, together with two QSFP loopback devices connected to provide loopback functionality on the Ethernet channels. Additionally, an attempt was made to transfer the achieved solution to Xilinx Versal technology in simulation.
The outcome of this activity is foreseen for use in the HydRON programme. The main focus of this project was to investigate the end-to-end system architecture of a high-throughput system (minimum 100 Gbps) that results from combining high-data-rate optical feeder links, WDM optical ISLs, and on-board switching/routing capabilities, needed to implement an optical transport network in space.
Rad-hard and radiation-tolerant FPGA-based solutions currently used in the space environment do not meet the requirements of the HydRON programme, so current state-of-the-art COTS FPGA devices have become an attractive solution for telecommunication payload processors for an optical transport network in space, especially with additional features like the Network-on-Chip, which allows high-data-rate connections inside the FPGA without consuming the fabric's logic resources. The FPGA technologies assessed in this project are based on a 7 nm lithography process, which makes them usable for application cases requiring on-board switching functions with data rates up to 100 Gbps per port, thus meeting the requirements of HydRON.
The Lunar Gateway is central to the Artemis missions, supporting the return to the Moon for scientific discovery and charting a path for the first human missions to Mars.
This space station will be a multi-purpose outpost supporting lunar surface missions, science in lunar orbit and human exploration further into the cosmos.
Thales Alenia Space has signed a contract with the European Space Agency to develop ESPRIT (European System Providing Refueling, Infrastructure and Telecommunications) for the upcoming lunar space station.
The HLCS (Halo Lunar Communication System) is a fundamental element of ESPRIT, providing S-band and Ka-band uplink and downlink with search and tracking functions.
In the field of communications and tracking, we will focus on the K-Band Transceiver (KBT), a nodal element of the HLCS subsystem developed by Thales Alenia Space.
The KBT equipment uses a novel System-on-Chip based on the RT PolarFire FPGA from Microchip, implementing the processing features related to lunar communications, e.g.:
• SpaceWire interface towards the ESPRIT on-board computer (COM-HUB)
• LDPC coding and decoding functions
• SRRC-OQPSK demodulation up to 50 Msps
• SRRC-OQPSK modulation up to 20 Msps
• FFT on the receiver side and local sweeping on the transmitter side for autonomous establishment of the lunar link
The KBT SoC embeds the LEON2-FT processor, which is in charge of equipment-level management and low-rate signal processing tasks. It also includes the “daiFPU” Floating Point Unit (from daiteq s.r.o.), used to implement the fine power estimation algorithms needed to support antenna pointing towards the lunar asset.
The presentation will provide a thorough overview of the equipment's architectural design, placing special emphasis on the digital core based on the SoC outlined above.
In recent years, the aerospace industry has increasingly focused on reducing satellite costs and, notably, minimizing Time-To-Market (TTM). IP cores offer an ideal solution by providing pre-validated, reusable modules that simplify the design and development process. By leveraging these ready-made components, engineers can concentrate on system integration and customization, accelerating the creation of reliable, efficient satellite systems while reducing design risks and iterations.
IngeniArs S.r.l., an Italian SME based in Pisa, specializes in delivering cutting-edge products and design services for the aerospace sector. Their extensive portfolio of Intellectual Property (IP) cores, designed for implementation on Field-Programmable Gate Arrays (FPGA) or Application-Specific Integrated Circuits (ASIC), focuses on satellite communications, artificial intelligence, and on-board data handling communication links and networks.
For satellite communications, IngeniArs offers IP cores for both transmitters and receivers that comply with the CCSDS 131.2-B standard, tailored for high-rate Radio Frequency (RF) telemetry applications, primarily in Earth Observation. These cores support all 27 Modulation and Coding (ModCod) formats specified by the standard, enabling space-to-ground communication at rates of up to 1 Gbaud.
IngeniArs has recently expanded its portfolio with two new IP cores for optical Payload Data Transmission (PDT). These cores are fully compliant with the CCSDS 142.0-B standard and support both the High Photon Efficiency (HPE) and Optical On-Off Keying (O3K) protocols. They are suitable for space-to-ground communication as well as Inter-Satellite Links (ISL). The HPE core, utilizing an SCPPM encoder, supports transmission rates up to 8 Gbps, while the O3K core, capable of reaching 10 Gbps, offers encoding options with either Low Density Parity Check (LDPC) or Reed-Solomon (RS) codes.
The primary solution in IngeniArs' portfolio for implementing Artificial Intelligence (AI) and Computer Vision (CV) algorithms is the GPU@SAT Soft Core. This core can be implemented on space-qualified FPGAs/ASICs, enabling on-board data processing and handling. It is highly scalable and flexible, offering a configurable number of Computational Unit (CU) cores to meet varying complexity and performance requirements. The GPU@SAT core can be easily configured and used through a standard AXI Bus interface.
Finally, IngeniArs provides several high-speed on-board data handling and communication solutions, particularly those related to ECSS standards such as SpaceWire and SpaceFibre, as well as WizardLink. The SpaceWire CODEC IP core supports full-duplex communication up to 400 Mbps, while the SpaceWire Router IP core manages traffic between up to 31 different nodes, handling both SpaceWire and host ports. The SpaceFibre CODEC and Router enable much higher transmission speeds, reaching up to 6.25 Gbps per lane, with an overall data rate of 25 Gbps when using 4 lanes. These cores offer features like virtual channels, Quality of Service, 8b/10b coding, and compatibility with all SerDes. Lastly, the WizardLink TLK Equivalent IP core is designed to replace the Texas Instruments TLK2711 chip in existing designs, specifically for FPGAs equipped with SerDes devices.
Establishing digital trust on a computing platform benefits from a hardware root-of-trust (HW-RoT) situated on the computing platform itself. Examples of HW-RoT solutions include TPM, DICE, OpenTitan, and Caliptra. These offer a wide range of security and cryptography services for a host system; however, their verification and validation may prove challenging, especially when they are applied in high-assurance security solutions. In this presentation, we establish a baseline for the essential HW-RoT functions from the user perspective and review established solutions against these base requirements. We then suggest a foundation and architecture, applicable to both FPGA and ASIC solutions, for building a hardware root-of-trust for computing platforms.
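As one concrete example of such a primitive, a DICE-style layered identity derives a Compound Device Identifier (CDI) by keying a MAC with the device's unique secret over a measurement of the code that runs next, so any firmware change yields a different identity. A minimal Python sketch of the idea (illustrative constants, not any product's actual derivation):

```python
import hashlib, hmac

def derive_cdi(uds: bytes, firmware: bytes) -> bytes:
    """DICE-style layering: CDI = HMAC(UDS, H(firmware))."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(uds, measurement, hashlib.sha256).digest()

uds = bytes(32)                    # stand-in for the fused Unique Device Secret
good = derive_cdi(uds, b"boot-stage-1 v1.0")
evil = derive_cdi(uds, b"boot-stage-1 v1.0 + implant")
assert good != evil                # tampered firmware changes the identity
```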
The evolving landscape of space system applications demands flexibility, high performance, and robustness, particularly in high-rate data acquisition, processing, and transfer. Traditional FPGA families struggle to meet these demanding requirements due to limited internal resources and the fixed nature of one-time-programmable FPGAs.
Microchip's RTG4 FPGAs offer a solution to these challenges with reprogramming capabilities, radiation-hardened technology, and low-power digital microelectronics designed specifically for space applications. These features make RTG4 FPGAs ideal for mission-critical applications in demanding environments.
This abstract presents three Airbus Crisa products using RTG4 devices, focusing on their innovative features, benefits, and challenges encountered during development.
High-performance Imaging Analog Front Ends: Airbus Crisa successfully developed an RTG4 FPGA-based solution using nearly 90% of the internal memory for enhanced data processing bandwidth. This advancement significantly improved data processing capabilities in a single FPGA, demonstrating the technology's potential for high-performance space applications.
Science Data Processing in Instrument Control Unit (ICU) applications: The equipment receives science data from three Front End Electronics (FEEs) at 525 Mbps each. The RTG4 FPGA design includes packetization and transfer of data to the Mass Memory Unit (MMU) using a 2 Gbps WizardLink, presenting challenges regarding the jitter and skew requirements for reliable communication between FPGAs in the FEE application. The equipment is connected to a microprocessor via a 100 Mbps SpaceWire (SpW) link for Command & Control and System Housekeeping telemetry data reception.
Payload Controller Module (PCM): The Airbus Crisa PCM, an Advanced Data Handling Architecture (ADHA)-compliant module, acts as a system slot for payload applications and offers flexibility through a reprogrammable RTG4 FPGA and the multiple interface standards provided by the GR740 SoC processor. The RTG4 FPGA design implements a 6-port SpaceWire (SpW) router plus one additional SpW port with RMAP used for command and control (all SpW ports working at 200 Mbps), SpaceFibre optical links at 2.5 Gbps, and various controllers, including FLASH, SDRAM, and microprocessor boot management from FLASH. It can also store incoming data in FLASH automatically, without software intervention.
These Airbus Crisa designs using RTG4 FPGAs cover a wide range of applications and address significant challenges, such as:
· High clock frequencies
· High use of internal memory blocks
· Handling multiple independent clocks using internal clocking resources
· Adapting functional blocks due to high complexity in reset architecture or to meet jitter and skew requirements.
· Careful pinout selection to allow the use of LVDS, SERDES, different I/O voltages, etc.
The final presentation will go deeper into these challenges and the solutions adopted in the mentioned Airbus Crisa products.
The Comet Interceptor mission, part of ESA’s F-class portfolio, aims to visit a long-period comet, originating from the Oort Cloud and entering the Solar System for the first time. The mission consists of three spacecraft—main spacecraft A and two ‘sub-spacecraft,’ B1 and B2. Spacecraft B2 is equipped with the Optical Periscopic Imager for Comets (OPIC), developed by the University of Tartu, to capture images of the comet nucleus and its environment during the flyby.
Given the mission’s brief proximity to the target and the risk of spacecraft damage from dust impacts, OPIC must operate autonomously to prioritize and transmit images of the comet nucleus during a critical moment of the flyby. Central to this functionality is the IMPRIO IP core, developed by Bitlake Technologies, which autonomously identifies the comet nucleus and extracts regions of interest (ROIs) for prioritization.
The IMPRIO IP core is required to process 2048 x 2048, 12-bit images at a minimum throughput of 6 frames per second (fps), addressing challenges such as the comet nucleus's unknown shape and size and its unpredictable visual features. Its integration into the ProASIC3L FPGA—already constrained by the OPIC camera head’s image readout functionality—demanded a resource-efficient design without hardware-level DSP support.
To meet these constraints, a multiscale Laplacian of Gaussian blob detection algorithm was selected for its noise resilience, robustness to damaged pixels, and suitability for optimized FPGA implementation. The multiscale approach further enhances target centroid detection, adapting to variations in the nucleus size during the flyby.
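A software model of the detector fits in a few lines of Python (a sketch of the approach only; the FPGA implementation is a heavily optimized pipeline without DSP blocks): compute the scale-normalized LoG response at several sigmas and take the strongest extremum as the nucleus candidate.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def detect_blob(image, sigmas=(2, 4, 8, 16)):
    """Return ((row, col), sigma) of the strongest scale-normalized LoG
    response; a bright blob on a dark background gives a strong negative
    LoG, so we search for the minimum of sigma**2 * LoG."""
    best = None
    for sigma in sigmas:
        resp = (sigma ** 2) * gaussian_laplace(image.astype(float), sigma)
        idx = np.unravel_index(np.argmin(resp), resp.shape)
        if best is None or resp[idx] < best[0]:
            best = (resp[idx], idx, sigma)
    return best[1], best[2]

# Synthetic nucleus: a Gaussian spot at (120, 90) with ~6 px radius.
yy, xx = np.mgrid[0:256, 0:256]
img = np.exp(-((yy - 120) ** 2 + (xx - 90) ** 2) / (2 * 6.0 ** 2))
print(detect_blob(img))   # centroid near (120, 90), best sigma around 4-8
```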
The resulting IMPRIO IP core is integrated into the OPIC camera head (3D Plus 3DCM734-1-SS). It achieves ~7.1 fps, exceeding the performance requirements while efficiently utilizing the FPGA’s available logic resources and maintaining the timing closure required for OPIC camera operation.
In this work, we will detail the design methodology, FPGA-specific optimizations, and performance validation of the IMPRIO IP core. Additionally, we will share insights and lessons learned from implementing an image processing application on a resource-constrained platform.
This presentation details our development of an actuator controller for new space launcher applications using the Microchip PolarFire FPGA. We showcase how adopting open-source hardware specifications and tooling—specifically the Wishbone bus infrastructure and the AGWB interconnect generator—enabled us to limit vendor dependency while maintaining acceptable quality standards. A key technical achievement was our development of a cached controller for the PolarFire's internal NVM, creating a reliable non-volatile memory interface crucial for our application. Our bus-based architecture, combined with the PolarFire's Identify ILA, provided good debugging observability during development. We demonstrate how this approach creates a pathway to extend our design to a complete system-on-chip using the RISC-V-based Mi-V soft core. This case study offers interesting insights for FPGA developers seeking to balance the new space industry's demands for faster, more economical development cycles with acceptable reliability in radiation environments.
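As a rough software model of that cached-controller idea (parameters and replacement policy are illustrative assumptions, not the actual RTL), a direct-mapped read cache in front of a slow, word-addressed NVM can be sketched as follows:

```python
class CachedNVM:
    """Direct-mapped read cache over a word-addressed backing store."""
    def __init__(self, nvm, lines=16, words_per_line=8):
        self.nvm, self.lines, self.wpl = nvm, lines, words_per_line
        self.tags = [None] * lines
        self.data = [[0] * words_per_line for _ in range(lines)]

    def read(self, addr):
        line = (addr // self.wpl) % self.lines
        tag = addr // (self.wpl * self.lines)
        if self.tags[line] != tag:           # miss: fetch the whole line
            base = (addr // self.wpl) * self.wpl
            self.data[line] = [self.nvm[base + i] for i in range(self.wpl)]
            self.tags[line] = tag
        return self.data[line][addr % self.wpl]   # hit path: no NVM access

nvm = list(range(1024))                      # stand-in NVM contents
cache = CachedNVM(nvm)
assert cache.read(42) == 42                  # miss fills the line
assert cache.read(43) == 43                  # subsequent hit is served locally
```

In the FPGA design, the hit path turns a multi-cycle NVM access into a single fabric-RAM read, which is what makes such an interface practical behind a Wishbone master.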