All FPGAs share several design methodologies, yet each technology faces specific challenges. Anti-fuse FPGAs are currently heavily used in most electronic equipment for space, yet there are other technologies whose use in space is growing: Flash-based and SRAM-based. The use of COTS FPGAs is also increasing, especially for space missions with shorter lifetimes and fewer quality constraints.
The aim of the workshop is to share experiences and wishes among FPGA designers, FPGA vendors and research teams developing methodologies to address radiation mitigation techniques and reconfigurable systems.
The topics related to FPGAs for space cover (but are not limited to):
- FPGA for Artificial Intelligence / Machine Learning
- general design, verification and test issues and good practices
- performance achievements and potential problems
- power consumption achievements and potential issues
- design tools performance, good practices and potential limitations
- radiation mitigation techniques, tools and potential limitations
- trends of FPGA usage in space applications
- partial/total in-flight reconfigurable systems
- lessons learned: ensuring successful and safe use of FPGA in space applications
- choosing the best FPGA type for our space application
- export license limitations / changes / ITAR / EAR
- package and assembly challenges
- companion non-volatile memory (when required) experience
- high level synthesis (HLS) and model based design (MBD) approaches used for flight designs
The main FPGA vendors will present updates and will be available for questions. The detailed agenda will be published closer to the event. Presentations from the major design groups are expected.
Submission of abstracts is now open via this website.
The deadline for submitting an abstract has been extended to February 10, 2023.
A Demo Session and Exhibit will be organised on the first day of the Workshop during the Welcome Reception at the Wintergarden South.
Attendance to the workshop is free of charge.
Registration is required via this website, and an Indico account is required for it (company email address only, not a personal one).
To create an Indico account go to the top right corner and choose “Log in”, then “create one here” and follow the given instructions.
After an account has been created you can register using this Indico event site.
The materials presented at the workshop are intended to be published on this website after the event. All material presented at the workshop must, before submission, be cleared of any restrictions preventing it from being published on this website.
Sponsored by
From the first generation of antifuse FPGAs more than 20 years ago to the latest generation of Xilinx Versal, FPGA technologies have always been in the DNA of Airbus digital equipment. Beyond being just a "user" of these FPGA technologies, Airbus is also a major contributor to the NG-Ultra product development, offering Europe a key rad-hard FPGA with an embedded System-on-Chip.
FPGA size, features and complexity have increased considerably during these 20 years, leading to the need to coordinate full FPGA design teams, and to carefully prepare the introduction of any new FPGA technology, from its selection and evaluation to its qualification.
Based on Airbus' strong heritage in this domain over these 20 years, an overview of the topics around FPGA development that Airbus considers key will be presented. This will be an opportunity to present our feedback on FPGA technologies as well as on the FPGA development process.
SpaceWire (ECSS-E-ST-50-12C) is a data-handling network for use on-board spacecraft, which interconnects instruments, mass-memory, processors, downlink telemetry, and other on-board sub-systems. SpaceWire is simple to implement and has some specific characteristics that help it support data-handling applications in space. SpaceFibre (ECSS-E-ST-50-11C) is an evolution of SpaceWire, backwards compatible with SpaceWire at the packet level. SpaceFibre is a very high-performance, high-reliability and high-availability network technology specifically designed to meet the needs of modern space applications where very high throughput is required. It provides point-to-point and networked interconnections at Gigabit rates (more than 6.25 Gbit/s per lane for current FPGAs, with multi-lane links reaching up to 16 times the speed of a single lane), with Quality of Service (QoS) and Fault Detection, Isolation and Recovery (FDIR). The NORBY and OPS-SAT technology demonstrators have already flown SpaceFibre, with more missions in both Europe and the USA currently designing or planning to use SpaceFibre.
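As a rough illustration of the lane-rate arithmetic quoted above (a sketch using only the figures in this abstract; the numbers are raw signalling rates, and usable data throughput is lower due to encoding and protocol overhead):

# Back-of-the-envelope SpaceFibre link rates from the figures quoted above.
LANE_RATE_GBPS = 6.25          # per-lane rate achievable on current FPGAs
for lanes in (1, 2, 4, 16):    # multi-lane links scale up to 16 lanes
    print(f"{lanes:2d}-lane link: {lanes * LANE_RATE_GBPS:6.2f} Gbit/s raw")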
STAR-Dundee has developed a complete family of SpaceWire and SpaceFibre IP cores which are fully compliant with these ECSS standards. This family is composed of SpaceFibre Single-Lane and Multi-Lane Interface and Routing Switch cores, SpaceWire Interface and Routing Switch cores, and RMAP Target and Initiator cores.
A new generation of radiation-tolerant FPGAs is emerging to cope with the ever-growing processing power required by newer missions. Microchip has released the PolarFire RTPF500, Xilinx the Versal XQRVC1902, and NanoXplore the BRAVE NG-Ultra. SpaceFibre operation requires serial transceivers, which are already built into modern FPGAs. The SpaceFibre IPs have been adapted to take advantage of the specific transceivers and memory blocks offered by these new FPGAs.
In this work, we analyse in detail the performance of STAR-Dundee SpaceWire and SpaceFibre IP cores on this new generation of FPGAs and consider several performance metrics, e.g. maximum speed, resource usage, etc. We also compare the performance of the IPs with current state-of-the-art space-grade FPGAs, i.e. Microchip RTG4 and Xilinx Kintex UltraScale XQRKU060. This analysis can also be used as a representative benchmark to compare the performances of the different FPGAs available for space.
Finally, the STAR-Tiger SpaceFibre routing switch is presented. The STAR-Tiger is the primary element of the payload data-handling network for the Hi-SIDE project, a European Union project carried out by several leading aerospace organisations from across Europe, aiming at developing satellite data-chain technologies for future Earth observation and telecommunication systems. The STAR-Tiger is used for transferring data at high data rates among the different elements of the network. It provides 10 SpaceFibre Multi-Lane ports (4-lane and 2-lane) with an aggregated user data throughput of 230 Gbit/s. It has been implemented using a Xilinx KU060, and the total power consumption of the unit is below 15 W at 20 °C.
Artificial intelligence (Deep Learning) is everywhere. The space industry is no exception. Automated recognition of lunar craters for moon landings and identification of space junk using imaging could play important roles in securing space safety and advancing space exploration. Deep Learning is the most successful solution for image-based object classification, and for most practical applications it requires performant platforms like FPGAs and SoCs.
Designing Deep Learning networks for embedded devices such as FPGAs and SoCs is challenging because of resource constraints, the complexity of programming in Verilog or VHDL, and the hardware expertise needed for prototyping on an FPGA or SoC.
In this talk I will explain how to prototype and deploy Deep Learning-based vision applications on FPGAs and SoCs using MATLAB. Starting with a pretrained model, I will demonstrate a MATLAB-based workflow that deploys the trained network for image recognition to the Xilinx UltraScale+ MPSoC platform for inference using APIs from MATLAB.
Deep Learning practitioners can quickly explore different networks and evaluate their performance on FPGAs or SoCs directly from MATLAB. This workflow also enables hardware engineers to optimize and generate portable Verilog and VHDL code that can be integrated with the rest of their applications.
GMV proposes a model-based approach for deep learning (DL) acceleration on FPGAs, taking on-board space debris detection as the target application. GMV has developed the G-Theia1 smart sensor, integrating a camera and high-performance processing logic into an embedded payload system. G-Theia1 was first proposed as a cost-effective space-based surveillance system in the project H038.3 SBSS-GNSS. Extending the work done in SBSS-GNSS, GMV shows a proposed model-based flow where deep learning models are defined and trained using PyTorch, later exported to ONNX, and finally converted directly into register transfer level (RTL) code. It leverages NNgen, a fully open-source tool, to implement different deep learning accelerators for image classification and object detection in a dedicated flow demonstrated on the FPGA fabric of the Zynq UltraScale+ MPSoC System-on-Chip. To allow for flexible deployment and facilitate testing, dynamic partial reconfiguration is used to switch between different accelerators at runtime.
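As a minimal sketch of the first step of such a flow (the network, input shape and file names below are illustrative placeholders; the subsequent ONNX-to-RTL conversion is performed by NNgen and is not shown):

# Sketch: define/train a model in PyTorch, then export it to ONNX.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in for a trained detection/classification model."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyClassifier().eval()
dummy = torch.randn(1, 3, 224, 224)          # example input that defines the graph
torch.onnx.export(model, dummy, "debris_net.onnx",
                  input_names=["image"], output_names=["logits"],
                  opset_version=11)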
The service delivered by spacecraft can be improved and maintained over time if updates can be applied to the system, increasing the spacecraft's lifetime. The In-Flight Maintenance for System-On-Chip based computer (IFMSOC) ESA project aims to propose safe and secure procedures for modifying the functions performed by a spacecraft after its deployment, within a reprogrammable HW/SW architecture. These procedures cover the in-flight management of binaries associated with system updates, the consistency checking of HW and SW applications in the context of heterogeneous reconfigurable SoCs, the application of updates, and error handling. The generic architecture supports both HW and SW updates, allowing its use in hybrid devices which combine embedded processors with HW-based engines such as FPGAs. Of particular interest in the proposed maintenance system for SoCs is the management of configuration and HW/SW dependencies, a key factor in avoiding problems when updated patches have functions split or triggered between the FPGA side and SW processing functions.
Traditionally, computer vision algorithms are used in space to solve detection and tracking problems during flight, but recently deep learning (DL) approaches have seen widespread adoption in non-space applications for their higher accuracy. This trend was enabled by the development and adaptation of hardware architectures to accelerate machine learning workloads, such as DSPs, VPUs, TPUs, GPUs and FPGAs. Thanks to the possibility of rad-tol/rad-hard HW reconfiguration and parallelism, as well as their higher performance per watt with respect to GPUs and other devices, FPGAs have been used to accelerate computations in space. Given the complexity of current DL models and of FPGA development, the abstraction provided by model-based engineering methods allows for greater productivity in the implementation of DL accelerators. The implemented models are ResNet, DenseNet and SqueezeNet, with slight modifications to perform object localization.
CNES status on NanoXplore developments. The following activities will be described during this presentation:
- On the IP side, DDR2 and ESIstream projects will be described.
- The R5 reference design will be presented.
- The R5 software environment (XNG and GNU).
- The presentation will finish with the Comodo Large and Ultra boards.
OSVVM is an advanced verification methodology that defines a VHDL verification framework, verification utility library, verification component library, and scripting API that simplify your FPGA or ASIC verification project from start to finish. Using these libraries, you can create a simple, readable, and powerful testbench that is suitable for either a simple FPGA block or a complex ASIC.
OSVVM is developed by the same VHDL experts who have helped develop VHDL standards. We have used our expert VHDL skills to create advanced verification capabilities that provide:
• A structured transaction-based verification framework using verification components.
• A common, shared transaction API for address bus (AXI4, Axi4Lite, Avalon, …) and streaming (AXI Stream, UART) verification components.
• Improved readability and reviewability by the whole team including software and system engineers.
• Improved reuse and reduced project schedules.
• Buzzword features including Constrained Random, Functional Coverage, Scoreboards, FIFOs, Memory Models, error logging and reporting, and message filtering that are simple to use and work like built-in language features.
• A common scripting API to run all simulators. OSVVM scripting supports GHDL, NVC, Aldec Riviera-PRO and ActiveHDL, Siemens Questa and ModelSim, Synopsys VCS, and Cadence Xcelium.
• Unmatched test reporting with HTML based test suite reports, test case reports, and logs that facilitate debug and test artifact collection.
• Support for continuous integration (CI/CD) with JUnit XML test suite reporting.
• A rival to the verification capabilities of SystemVerilog + UVM.
Looking to improve your VHDL verification methodology? OSVVM provides a complete solution for VHDL ASIC or FPGA verification. There is no new language to learn. It is simple, powerful, and concise. Each piece can be used separately. Hence, you can learn and adopt pieces as you need them.
Maybe your EDA vendor has suggested that you should be using SystemVerilog for verification. According to the 2022 Wilson Verification Survey [1], for both FPGA design and verification, VHDL is used more often than Verilog or SystemVerilog. Likewise, in the survey you will find that OSVVM is the #1 VHDL verification methodology.
[1] https://blogs.sw.siemens.com/verificationhorizons/2022/11/21/part-6-the-2022-wilson-research-group-functional-verification-study/
System-level design commonly employs building blocks, also denoted as soft IP cores, to compose complex developments. This is also a trend in the space industry, to save costs and development time.
Every IP must pass through a verification and validation process before being integrated in a larger design to ensure proper system behaviour. In order to ensure the functional correctness of the IP core and its compliance with the requirements, a verification campaign based on a set of defined test cases is performed, using functional simulations and purpose-built testbenches. However, as hardware designs grow in complexity, it is more and more difficult to reach all possible corner cases with a test campaign designed to fulfil the defined requirements. Therefore, even if full code coverage is achieved during the verification process, there is always a chance that the IP will exhibit unexpected behaviour in certain situations. This motivates the search for alternative verification approaches. Among them, formal methods are the most effective, but their solutions do not scale to IP-core level. In this scenario, hardware fuzzing has recently appeared as an interesting solution to the IP verification problem.
Fuzzing is a testing technique where inputs are generated randomly and used to identify defects in software. It is commonly utilized in cybersecurity to find vulnerabilities and has also been applied to software testing. The fuzzing process involves two pieces: a fuzzer that generates inputs and manages the execution of the software under test, and a fuzzing harness that connects the fuzzer to the software. The fuzzer is designed to be coverage-guided, meaning it uses information gained from previous inputs to direct future input generation.
The fuzzing architecture for hardware testing is more complex than for software and consists of three components: the IP Core Fuzzer, the Orchestrator and the Pool of Agents. The IP Core Fuzzer encapsulates the fuzzer, the fuzzing harness and a reference software model of the IP core under verification; it generates both the input test vectors and their expected outputs. These data are sent to the Orchestrator, which manages the execution of the Agents, which in turn simulate the IP core to be validated. Multiple Agents can be executed in parallel to increase the number of vectors evaluated per unit time, hence the term Pool of Agents. The Orchestrator selects at every moment which Agent to launch, providing an input test vector and the necessary configuration, retrieves the simulation output and compares it against the expected one. This makes it possible to identify bugs in the IP core and reproduce them later.
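A minimal sketch of this orchestration loop, with all interfaces invented for illustration (the stubs below stand in for the real reference model and for Agents that would invoke an HDL simulator):

# Sketch of a fuzzer/orchestrator/pool-of-agents loop (hypothetical stubs).
import random
from concurrent.futures import ProcessPoolExecutor

def reference_model(vector):
    """Golden software model: computes the expected output (stand-in)."""
    return sum(vector) & 0xFF

def simulate_agent(vector):
    """Agent stub: would run the IP core simulation on the input vector."""
    return sum(vector) & 0xFF

def generate_vector(corpus):
    """Coverage-guided generation, crudely approximated by mutation."""
    mutated = list(random.choice(corpus))
    mutated[random.randrange(len(mutated))] = random.randrange(256)
    return mutated

def orchestrate(num_batches=50, num_agents=20):
    corpus = [[0] * 8]
    mismatches = []
    with ProcessPoolExecutor(max_workers=num_agents) as pool:
        for _ in range(num_batches):
            batch = [generate_vector(corpus) for _ in range(num_agents)]
            futures = [pool.submit(simulate_agent, v) for v in batch]
            for vec, fut in zip(batch, futures):
                if fut.result() != reference_model(vec):
                    mismatches.append(vec)   # bug candidate: keep for replay
            corpus.extend(batch)             # naive corpus growth
    return mismatches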
In this work, we apply the fuzzing methodology to the verification of a universal data compressor compliant with the CCSDS-121.0-B-3 standard. The CCSDS-121 IP verification process includes several verification campaigns (with pre-generated test cases), reaching 100% code coverage, followed by a hardware fuzzing verification. The fuzzer architecture has been configured with a pool of 20 agents, which reaches a throughput of 1.5 tests per second. After testing more than 1,000,000 test vectors, 18 previously unnoticed bugs have been detected and fixed in this IP core. The bugs detected relate to corner cases with uncommon configurations, which were hard to reach in the standard verification campaign. This work demonstrates the strengths of the fuzzing methodology in complementing traditional verification campaigns for hardware designs, and the benefits of this approach for the verification of new IP cores.
Frontgrade (formerly CAES, Cobham, Aeroflex, UTMC) pioneers the future and underpins many of the world's most critical missions. Our rad-hard/rad-tolerant microelectronics empower the world's leading spacecraft: from high-throughput commercial communications satellites and Earth observation satellites to manned space, deep space exploration, and the latest new-space constellations.
Too often, system designers are faced with a choice between antiquated FPGA technology and devices that are much larger and more power-hungry than required. Frontgrade Technologies partnered with Lattice Semiconductor to bring their small-footprint, low-power Nexus platform to the space market. The Nexus platform is based on 28 nm FD-SOI technology, delivering strong performance with radiation tolerance.
Frontgrade will present details of the Certus-NX-RT and CertusPro-NX-RT devices, radiation testing summaries for the Certus-NX-RT for TID, proton and heavy ion, and testing plans for the CertusPro-NX-RT. Frontgrade will also highlight work on a 64 Mb RHBD SONOS SPI Flash device that could power in flight reconfigurable solutions with the FPGA devices.
To meet the high market demand for low Earth orbit missions with reliable plastic encapsulated microelectronic components, Frontgrade is at the technology forefront delivering Space PEM qualified devices. Frontgrade will detail our Space PEM screening and qualification flow and compare it to the NASA and ECSS flows.
“Over 80% of FPGA released into the field have a nontrivial error” – Wilson Group Survey
All RTL developers are familiar with coding standards, which outline the rules the RTL should be developed in accordance with. Enforcing these rules is normally performed by peer review, which makes the enforcement variable.
A better approach to enforcing code quality is the use of a static analysis tool with a defined ruleset enabled. Static analysis of code enables a much wider variety of analyses to be performed on the RTL, which increases the quality of the code that enters simulation and synthesis.
Static RTL analysis, however, is not limited to the enforcement of RTL coding rules. It can also be used by the engineering team to enforce:
1. RTL coding rules – Clocking, Reset, Register Sizing, Unused Code.
2. FSM analysis – Deadlock, Terminal States, Unused State addressing.
3. Clock Domain Analysis – Detection of Clock Domains and creation of Constraints
4. Clock Domain Crossing – Analysis of the RTL for clock domain crossing issues
5. Path Analysis – Longest path between registers
This analysis enables better quality of code for simulation: for example, FSM terminal states can be quickly identified in static analysis, while in simulation it may take many hours for the design to progress to that branch of the FSM.
Over the last three years Adiuvo has partnered with ESA to use the Blue Pearl Visual Verification Suite to analyse the current ESA FPGA IP Library, against a defined set of RTL coding rules.
This session will start by explaining the agreed ruleset: how it was defined, which standards were considered, and why each of the final rules is important.
The session will then identify common issues found in the ESA IP Library; despite the cores being developed by several different institutions, common issues recur.
To conclude the session, we will present the development flow created to correct issues identified within the RTL without impacting performance.
Static analysis of RTL code can help developers identify problems which might be missed in simulation because the "right" question is not asked. As demonstrated in this project, static analysis enables the identification of issues earlier in the development cycle, when they are easier to correct. Static analysis also creates evidence of peer reviews and analysis which can further support stage reviews such as PDR and CDR.
Nowadays, SRAM-based FPGAs present an increasing need for high-density configuration memories; for space applications, those configuration memories must additionally be hardened against the effects of radiation. Today, there is a lack of rad-tolerant, high-density configuration memories on the market. Moreover, some high-performance SRAM-based FPGAs used for space applications might also need to be monitored and scrubbed to guarantee system functionality during the mission.
In this talk, 3D PLUS presents two products to address these requirements: a 1 Gb Magnetic non-volatile Random Access Memory (NVM) with serial interface (A-Mnemosyne), and a 64 Gb space-grade COnfiguration Memory BOot manager, COMBO (B-Combo), designed to boot and scrub SRAM-based FPGAs requiring large-density configuration memories.
SpaceWire (SpW) is one of the most widely used communication standards in space applications for on-board data handling. Ensuring that a SpW device is bug-free and highly reliable is crucial to minimising the risk of compromising the space mission.
In this paper, a SystemVerilog Verification Intellectual Property (VIP) supporting the full testing of any implementation of a SpW Codec is presented. The VIP is fully compliant with Universal Verification Methodology (UVM), which represents the current state-of-the-art for functional verification.
The presented SpW CODEC VIP is based on the concept of a Twin Model, i.e., a software emulator of an ideal SpW Codec able to communicate directly with the Device Under Test (DUT), which can be any SpW IP core with a Data-Strobe Interface. This approach leads to a significant simplification of the testing, because it automatically exchanges with the DUT all the low-level information needed for establishing and maintaining the communication link, without requiring user effort to continuously adapt input stimuli to the simulation scenario.
In parallel to the main communication link between the DUT and the Twin Model, this VIP includes a Twin Link connecting two Twin Models. The Twin Link emulates the ideal link behavior and is used in simulation scenarios involving communication errors and disconnections. In this way, the Verification Environment automatically compares the functioning of the two links, reporting any malfunction of the main one that depends on the DUT operations. Without the Twin Link, the user would have to manually check the correct DUT behavior in case of disconnections.
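Conceptually, the twin-model check reduces to driving both models with identical stimuli and flagging divergences. A minimal sketch (hypothetical interfaces, shown in Python rather than the VIP's SystemVerilog/UVM):

# Sketch of the Twin Model idea: same stimulus to DUT and golden model.
def check_against_twin(dut, twin, stimuli):
    """Drive both models with identical stimuli and report divergences."""
    failures = []
    for stim in stimuli:
        dut_out = dut.step(stim)      # DUT response (e.g. via a simulator API)
        twin_out = twin.step(stim)    # ideal SpW codec behaviour
        if dut_out != twin_out:
            failures.append((stim, dut_out, twin_out))
    return failures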
The result is a highly reliable and configurable VIP that allows for automated testing of all functionalities of any SpW CODEC implementation. In addition, thanks to the full UVM compliance, it has significant advantages in terms of reusability and maintainability.
The features of this VIP have been confirmed by a complete test campaign of more than 150 test cases on two different and unrelated IP cores: the IngeniArs S.r.l. IP core and the one belonging to the European Space Agency IP core portfolio. All functional requirements have been covered by at least one test (100% functional coverage), and both systems were found compliant with the first release of the standard. In addition, the IngeniArs S.r.l. IP core was proven compliant with revision 1 as well.
The high user-friendliness of the presented VIP allows the user to define and run new test cases very easily, without knowing the internal architecture of the Verification Environment. Therefore, it is easy for VIP users to implement their test plans and achieve goals such as full functional and code coverage.
The final result is a significant reduction in verification time and effort of SpW Codec IP Cores and SpW-based systems.
Keywords— Universal Verification Methodology (UVM), SpaceWire (SpW) Codec, Verification IP, Twin Model, Twin Link, Verification Environment, functional verification, reusability, SystemVerilog.
With more than 65 years of space flight heritage, we have not seen a more dynamic time for the space industry than the last few years. This presentation will provide an overview of Microchip Technology's key enabling technologies, addressing the increasing demands for enhanced signal processing in next-generation space systems without sacrificing reliability or performance. Microchip's Radiation-Tolerant (RT) FPGA families will be discussed in detail.
The increasing popularity of RISC-V, an open-source Instruction Set Architecture (ISA), has enabled customization and accessibility for hardware designs in various industries as part of the bigger heterogeneous computing and hardware acceleration trend. Microchip continues to champion the RISC-V movement by offering a comprehensive suite of software toolchains and IP cores which can be used for RT FPGA designs and space applications. An introduction to Microchip's RISC-V solutions will also be presented.
A modular design approach is a fundamental concept for developing high-complexity FPGA designs. Both design and verification effort can be reduced by partitioning system design requirements into functionally isolated modules, using standard interconnects between these modules to tie the system together. IP core reuse of proven modules and modules with flight heritage can reduce both mission development risk and mission operation risk.
Existing FPGA interconnect standards such as APB, AXI3, AXI4 and AXI-Lite have been designed primarily for SoC implementation, utilising wide buses and a large number of control signals to provide both high performance and flexibility. This is well suited to SoC implementations typically containing a processor system or complex DMA engines. However, both the FPGA resource usage overhead and the increased verification effort of the interconnect infrastructure reduce their suitability for designs that do not require these features, such as the CO2M MAP instrument, which is entirely FPGA based.
In this presentation we present TAS-UK's lightweight IP InterConnect (IPIC) concept and illustrate how it has been used to develop a complex flight project in a very short timeframe. The CO2M MAP instrument contains many IP cores typical of an FPGA used for control of a complex instrument: EEPROM, stepper motor control, DDR2, heater control, thermistor acquisition, TMTC over MIL-STD-1553 and science data over SpaceWire. This is in addition to control of the CIS120 sensor control IP that is the heart of the MAP instrument concept. All of these modules have been developed with simple IPIC interfaces that are then connected via the IPIC interconnect infrastructure.
Further to this, the IPIC concept provides easy memory-mapped access to all registers in the design. This is very useful for patch/dump access during both AIT and Flight, but also allows a flexible TMTC handling process to be developed for MAP. The MAP TC handling and TM acquisition uses a pre-loaded table of IPIC read/write commands across the MAP memory address space to action the TC or collect the TM set, allowing both TC execution and the TM collection set to be updated in flight if required.
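A minimal sketch of such table-driven TC/TM handling (the addresses, register meanings and bus-access interface below are invented for illustration):

# Sketch: a pre-loaded table of memory-mapped read/write operations that
# implements one telecommand; reads are collected into the telemetry set.
READ, WRITE = 0, 1

# Hypothetical command table: (operation, address, data-or-None)
TC_ENABLE_HEATER = [
    (WRITE, 0x0000_1004, 0x0000_0001),   # heater control register: enable
    (READ,  0x0000_1008, None),          # read back heater status
]

def execute_table(bus, table):
    """Walk a command table, issuing IPIC bus accesses in order."""
    telemetry = []
    for op, addr, data in table:
        if op == WRITE:
            bus.write(addr, data)
        else:
            telemetry.append(bus.read(addr))  # collected into the TM set
    return telemetry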
The IPIC has allowed the MAP design team not only to develop and test these modules in isolation, but also to build scripts for automatic generation of the IPIC connection infrastructure VHDL and automatic ICD documentation generation, ensuring that the design and the documentation are true representations of each other with minimal FPGA designer effort. The auto-generation tools can also generate VHDL packages with register address and bit-field constants used for system-level simulation, allowing registers to be added or address-map changes to be made without requiring test benches to be rewritten. Further to this, a standardised interface allows unit testing of each module to be carried out easily, using a generic IPIC unit test harness in place of the IPIC interconnect and the VUnit framework to manage test execution and tracking.
This abstract focuses on the Sodern navigation cameras for the Jupiter Icy Moons Explorer (JUICE) and the Mars Sample Return Earth Return Orbiter (MSR-ERO). The cameras are custom made, with high-TRL FPGAs inside, to withstand the constraints of the radiation environment while providing high observation capabilities. The first one, NAVCAM (Navigation Camera), is a key instrument for spacecraft navigation: it assesses the position of the moons and stars in its field of view in an inertial reference frame. The second one, NAC (Narrow Angle Camera), is dedicated to the rendezvous with, and capture of, a football-sized object called the Orbiting Sample (OS) in Mars orbit. NAVCAM is based on Sodern's HYDRA Star Tracker experience with the HAS2 sensor. Digital control is performed with an RTAX2000S FPGA driving the sensor, MRAM memory with correction parameters, and 1 Gb SDRAM for image summation. NAC is based on Sodern's HORUS Star Tracker experience with the Faintstar sensor. Digital control is performed with a ProASIC3L FPGA driving the sensor and 1 Gb SDRAM memory for image storage.
Permanent Magnet Synchronous Motor (PMSM) control is a field where real-time processing capabilities play a substantial role in the system's performance. The usual tradeoff for the processing element amounts to choosing between a DSP and an FPGA, with the latter seen as more complex to develop and maintain.
This tradeoff usually does not hold out against the specific constraints in the space industry: radiative environments causing SEEs (Single Event Effects), fault tolerance and reliability constraints. One favored answer was to rely on a thoroughly screened space-grade FPGA to bear the brunt of the reliability goals. However, the strong push to reduce recurring costs provides an incentive to explore architecture-based fault-tolerance solutions, where cheaper components verify each other dynamically.
In this talk we present a distributed motor control architecture based on GUARDS (“A Generic Upgradable Architecture for Real-Time Dependable Systems”) allowing:
- A 40 kHz motor control loop robust to any single (permanent or intermittent) fault
- Undisturbed operation in a SEE environment
- Distributed FDIR and telemetry
We will then explain why using an industrial-grade FPGA is a particularly good fit for this type of architecture, and how we make it work at Watt & Well: rapid prototyping on reprogrammable FPGAs allows quick algorithm de-risking; the inherent parallel architecture of FPGAs allows both flexibility in protocol design and makes meeting real-time deadlines easier; finally, fine-grained verification capabilities make reaching a very high design assurance level a systematic process.
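As a generic illustration of architecture-based fault tolerance through inter-channel voting (a minimal sketch of the general idea, not the specific GUARDS design):

# Sketch: redundant channels compute the same control output; a majority
# vote masks any single faulty channel.
from collections import Counter

def vote(channel_outputs):
    """Return the majority value among redundant channel outputs."""
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count * 2 <= len(channel_outputs):
        raise RuntimeError("no majority: unrecoverable disagreement")
    return value

print(vote([42, 42, 17]))   # the faulty third channel is out-voted -> 42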
We present an early evaluation of a flight-proven altimeter design implementation on NG-ULTRA FPGAs using an HLS-to-Bitstream design flow relying on the Siemens-EDA Catapult (HLS) and Precision (synthesis) tools and the NanoXmap design suite.
Our analysis focuses on three main aspects:
1) Feedback on the design flow developed by Siemens-EDA
2) A comparison of the performance of Precision synthesis with that of NanoXmap synthesis
3) An estimate of the performance that can be achieved
In recent years, we have witnessed an increasing use of COTS FPGA devices in satellite systems; these devices extend the capabilities of traditional on-board computers and enable the development of emerging space applications. One such device is the AMD-Xilinx Zynq-7000 APSoC, a System on Chip (SoC) integrating a dual-core Arm Cortex-A9 processing system (PS) and modern SRAM FPGA programmable logic (PL). Given that COTS FPGAs are susceptible to Single Event Effects (SEEs) caused by ionizing radiation, the SEE characterization of the Zynq-7000 devices is essential for their further deployment in space.
To characterize a chip against SEEs for use in space, radiation test campaigns with heavy ions and protons must be performed. The experimental data should cover a range of heavy-ion LETs and proton energies to guarantee accurate prediction of the SEE rates. Given that the Zynq-7000 integrates several SRAM arrays, such as the Configuration RAM and the Block RAMs (BRAMs) of the PL, and the On-Chip Memory (OCM) and the processor caches of the PS, all these memories should be individually tested. Several experiments have been presented in the literature investigating the impact of radiation on the Zynq-7000 FPGA device, including our previous heavy-ion tests at CERN (2018) and GSI (2019). However, none of these works has covered the full ranges of heavy-ion and proton energies for all types of SRAM arrays in the chip.
Towards this end, we performed two new radiation campaigns with a high-energy heavy ion beam in GSI and a proton beam in PSI to achieve the complete SEE characterization of the Zynq-7000 APSoC and examine the radiation effects in all its SRAM arrays. Based on the experimental results, we can accurately model the radiation environment for various orbits/missions using the OMERE software and estimate the orbital error rates for the Zynq-7000 APSoC.
Specifically, the goals of the two radiation experiments were: 1) to complete the SEU characterization report for the PL part obtained from our previous heavy-ion experiments; 2) to characterize the PL part for SEEs caused by protons, testing the chip for the full range of proton energies, from 30 MeV up to 250 MeV; 3) to characterize the PS part for SEEs caused by heavy ions and protons; 4) to study the impact of radiation effects at the application level by running various processor benchmarks for different memory organizations; and 5) to examine the effectiveness of the on-chip ECC controller in correcting upsets in the DDR memory, by irradiating the DDR chips of the board.
In this workshop, we will present the results of our radiation experiments and the estimation of various orbital error rates for the Zynq-7000 APSoC.
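For readers unfamiliar with such estimates: to first order, a static upset rate is the product of the per-bit cross-section, the particle flux, and the memory size. A minimal sketch with purely illustrative numbers (OMERE performs a far more detailed, orbit-specific computation):

# Rough static SEU-rate estimate: upsets/day = sigma_bit * flux * n_bits.
# All numbers are illustrative placeholders, not measured values.
sigma_bit = 1e-14    # per-bit SEU cross-section [cm^2/bit] (illustrative)
flux = 1e5           # orbit-averaged particle flux [particles/cm^2/day]
n_bits = 32e6        # illustrative configuration-memory size in bits

upsets_per_day = sigma_bit * flux * n_bits
print(f"Estimated {upsets_per_day:.2f} upsets/day")   # ~0.03 for these values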
Flash-based mass memory systems are very commonly used in PDHUs and OBCs. We will present a flexible FPGA architecture which implements the most timing-critical data acquisition, buffering, storage and downlink functions, with the FPGA serving as the central controller. The architecture has been implemented in a compact and high-performance PDHU, which was presented at DASIA 2019: "A Compact High-Performance Payload Data Handling Unit for Earth Observation and Science Satellites". It has also been implemented in many industry projects, e.g., the JUICE mission (on-board computer mass memory board), the Biomass PDHU, the FLEX PDHU, the KACST Satellite Computer Board with mass memory, the Mass Memory Module for KARI Kompsat-7, S4Pro (H2020 demo project), the Mass Memory Formatting Unit in Copernicus Land Surface Temperature Monitoring (LSTM) and Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL), the Copernicus Imaging Microwave Radiometry (CIMR) PDHU, as well as the Artemis Gateway Mass Memory Storage (MMS).
Detailed configurations (e.g., interfaces, capacities, data rates, etc.) will be presented in the workshop. The FPGA architecture has demonstrated its flexibility and scalability not only in previous industrial projects, but also in recent ongoing proposals, studies and pre-development activities at DSI. We will present the future trends of our FPGA- and flash-based mass memory products.
The power of AI is increasing with ongoing research and investment. Convolutional Neural Networks excel at object recognition, object detection and image segmentation; transformers lead the way in sequence analysis, including translation, chatbot and search engine tasks. Loosely based on the micro-level architecture of the brain, these networks can – like the brain – be trained for specific, usually very bespoke, applications. The process of utilising this trained network in an application is called inference.
Taking the relatively simple case of object detection, training requires a dataset of many thousands or millions of labelled images; the more, the better. Training is a cycle of inference, comparison of the network output with the expected label, and backpropagation of the error using calculus. There are many pretrained networks, such as the ResNet family and YOLO, available from online sources, that can be fine-tuned through a process of transfer learning for similar applications.
Moving beyond the training phase, each application has its own list of inference and associated software requirements. Explicit conditions on power consumption, working environment, security and performance will have an impact on processor choice. Additionally, image resolution and network structure parameters have an effect on latency, throughput and accuracy of the system. The decisions and compromises made in the requirements stage are affected by both hardware and software capabilities.
The Intel OpenVINO (Open Visual Inferencing and Neural Network Optimization) software platform is fast becoming a noted tool for efficient inference across Intel devices. The built-in Inference Engine allows easy integration with different hardware, of which FPGAs are a prime example. Until recently the platform has centred around vision applications, supporting a wide range of CNN architectures for classification, segmentation or object detection; however, the product is expanding to cover translation and other natural language processing.
OpenVINO consists of a Python-based Model Optimiser, which acts as a built-in converter and platform-agnostic optimisation tool, and the Inference Engine. The Model Optimiser can take a network from any training framework (TensorFlow, PyTorch, MXNet, Caffe, etc.) and, after removing the training hooks, condenses it to improve inference times.
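A minimal sketch of running an optimised model through the Inference Engine from Python (the API shape follows recent OpenVINO releases; file names, input shape and the device name are placeholders):

# Sketch: load an optimised IR and run one inference.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # IR produced by the Model Optimiser
compiled = core.compile_model(model, "CPU")   # swap "CPU" for an FPGA device/plugin

image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
request = compiled.create_infer_request()
request.infer({0: image})
logits = request.get_output_tensor(0).data
print(logits.shape)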
Integrated plugins provide an easy route to inference on FPGAs, whether on accelerator cards or embedded platforms. The main advantage of embedded devices is that they are the low-power, bespoke solution for many applications. Intel's Programmable Solutions Group is able to provide IP ranging from image preprocessing to security functions such as weight encryption, as well as quantized and binary options for exceptionally fast inference.
Technolution Advance and the Royal Netherlands Aerospace Centre (NLR) have started the development of a new payload Control & Data Processing Unit (CDPU) targeted towards smallsats. The CDPU will act as a hub for optical instruments, forming a logical bridge between the instruments and the satellite platform. It will offer standardized hardware interfaces for integrating optical instruments, as well as sophisticated facilities for processing the science data before it is sent down to Earth. We defined a modular architecture to allow flexibility in interfaces and functions, with advanced and reliable in-orbit FPGA reprogrammability.
The control and data processing unit (CDPU) will be a space avionics unit to control compact optical instruments and to process the instrument science data. Under control, we currently include functions such as the provision of switched and conditioned power, fine thermal management, driving calibration provisions, and operational tasks such as the management and execution of measurement schedules. The minimum data processing currently required is the packetization of raw data into a proper format for the downlink of the satellite. But the CDPU will be able to offer more: as instruments become more sophisticated, and since there is an increasing demand for in-orbit data processing functions such as compression, data filtering and encryption, the CDPU will contain powerful data processing capabilities, including AI methods.
The processing module of the modular CDPU architecture is based on the Microchip PolarFire FPGA. For interfacing towards the instrument and to the platform, customizable instrument-specific and platform-specific modules have been designed. We have implemented our AXI4-based FreNox RISC-V system-on-chip into the PolarFire FPGA device, including SpW interfaces for interfacing with the instruments and the platform. The FreNox RISC-V SoC also accommodates function accelerators, such as compression and security functions.
The target market of the CDPU will be the MicroSat and MiniSat market (order 20 to 1000 kg), responding to the trend of CubeSat sizes increasing towards the MicroSat segment and a demand for smallsat avionics with increased reliability. At the same time, state-of-the-art compact optical instruments often need MicroSat-size platforms to be accommodated. Another use case of the CDPU is to accommodate compact instruments on bigger missions, where the CDPU can function as an interfacing unit with the host satellite, taking into account ADHA (Advanced Data Handling Architecture) compatibility.
As a founding member of the RISC-V Foundation, Technolution has designed and implemented the FreNox RISC-V processor family. The FreNox RISC-V technology has been integrated in a wide variety of qualified security products for line encryption and network domain separation at multiple confidentiality levels. We have also ported the full AXI4-based FreNox RISC-V system-on-chip onto the NanoXplore NG-Medium device and created a fully interactive demonstration: through the SpW interface we upload software to the RISC-V SoC, allowing the user to play and non-invasively debug the "Space Invaders" game on the NG-Medium board.
Field Programmable Gate Arrays (FPGAs) are ideal single-chip solutions for many applications. Many of these applications find their way into space. By their very nature, they are re-configurable, enabling the designer to create a standard chassis that can be deployed for various missions.
Space-based applications pose a unique set of challenges not typically found in terrestrial implementations. Exposure to high levels of radiation and energetic particles is the most obvious. These particles change the properties of the gates contained in the integrated circuit (IC) and cause them to behave in unexpected and unpredictable ways. These changes can range from the flipping of a random bit in memory to latch-up of critical circuitry, resulting in permanent and catastrophic damage to the system.
A common way to address these issues is to use radiation-hardened components in the system design. This applies to the FPGA and all other components, such as discrete logic, timing, power management, data acquisition, and communication. This is the most pragmatic, but also the most expensive, way to mitigate the effects of ionizing radiation on the system. Components that are radiation hardened by design are typically two to three orders of magnitude more expensive than their industrial, commercial off-the-shelf (COTS) counterparts. In many space applications, this additional cost is prohibitive.
What if there was a way to increase the system's reliability significantly without increasing the cost proportionally?
Our approach uses a single radiation-hardened supervisor that monitors the FPGA for latch-up conditions. We report the results of validation work on VORAGO Technologies' VA41630 in a 4-channel latch-up monitor configuration to mitigate latch-up on multiple devices in harsh environments. The results suggest that this device may replace custom discrete solutions and reduce the required engineering and system characterization resources, speeding up the design cycle.
This hybrid approach brings the best of both worlds: a design rugged enough for many space-based missions, yet cost-effective enough to enable scalable deployment.
We propose a fault-tolerant mechanism for linear DSP applications that relies on: (1) parity-based real-number error correction codes to store redundant information efficiently, (2) gradient-descent correction loops for error estimation, and (3) fine-grained check-pointing and roll-back for consistent redundant information encoding. The error correction is performed by a novel gradient-descent symbol update method designed for real-number parity-based codes, which performs error extraction from the syndrome vector.
We use the linear transform property of the application to compute the correction offset that is applied to the output data. The proposed solution is validated on a 512-point memory-based FFT architecture, augmented by means of an LDPC code, for which an efficient way to embed the processing associated with the error correction code (encoding, syndrome check, decoding) is devised.
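As a hedged sketch of the algebra implied here (our notation, not necessarily the authors'): for a parity-check matrix H, a valid codeword c satisfies Hc = 0, so a corrupted result produces a syndrome that depends only on the error, and a gradient-descent update refines an error estimate from that syndrome:

s = H \hat{c} = H (c + e) = H e
\hat{e}_{k+1} = \hat{e}_k + 2 \mu H^{\top} \left( s - H \hat{e}_k \right)

For a linear transform y = Ax, an estimated input error e then maps to the output offset Ae, which can be subtracted from the computed output.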
Field Programmable Gate Array (FPGA) devices are at the core of many applications in the space and avionics fields, where reliability is an important concern. In this presentation, we present a set of EDA tools for the evaluation and mitigation of radiation-induced Single Event Effects (SEEs) on FPGAs. The tools are compatible with COTS and rad-hard FPGAs with SRAM- and Flash-based configuration memories. Experimental results will be presented demonstrating the usability, efficiency and advantages of the developed tools.
The adoption of modern commercial-grade SRAM-based FPGAs, unencumbered by export restrictions, makes it possible to take advantage of their growing computing power and reduced cost, volume, mass and power consumption for space applications ranging from control tasks to signal processing, from software-defined radio to machine learning, in both "traditional space" and "new space".
The use of SRAM allows new execution models, for instance FPGA task switching during the mission using partial reconfiguration, and new applications, for instance future-proof upgradeable modules that follow technical advances in algorithms and changes in the environment. However, even in the more relaxed scenario of new space, the lack of proper qualification, reliability evaluation and SEU mitigation may jeopardize the mission objectives.
Due to the high cost and long time required for testing FPGA systems in their operational environment in space, one can use fault injection, which has the advantage of being usable from the earliest stages of engineering, reducing the cost of corrections to the design. Fault injection is a powerful tool in reliability and fault tolerance analysis, supporting engineering activities of rapid prototyping, reliability-aware design space exploration and selective hardening.
Emulation-based fault injection uses the real FPGA hardware, reusing existing circuitry that supports device test and configuration, to emulate the radiation effects of the space environment. In our implementation we use Xilinx's Internal Configuration Access Port (ICAP) to manipulate the FPGA configuration memory (CRAM), the contents of the memory blocks (BRAM) and the flip-flops of the configurable logic blocks (CLB). The emulation-based approach does not require complex facilities, has a low cost, allows the design to be tested near its nominal speed, and allows the test to be focused on selected design modules.
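Schematically, a fault-injection campaign of this kind is a simple loop. A sketch with a hypothetical `icap` interface (a real implementation drives the configuration port and classifies outcomes in more detail):

# Sketch of an emulation-based fault-injection campaign loop.
import random

def injection_campaign(icap, run_test, n_faults=10000, cram_bits=32_000_000):
    outcomes = {"masked": 0, "failure": 0}
    for _ in range(n_faults):
        bit = random.randrange(cram_bits)     # pick a random CRAM bit
        icap.flip_bit(bit)                    # inject: toggle one config bit
        ok = run_test()                       # run the workload, check outputs
        outcomes["masked" if ok else "failure"] += 1
        icap.flip_bit(bit)                    # repair: restore the original value
    return outcomes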
To achieve its goals in supporting the engineering, fault injection must be cheap and fast, to allow the evaluation of several alternative design solutions, and must be consistent with radiation effects, to drive the engineering in the right direction. Also, as fault tolerance and mitigation techniques are introduced into the FPGA design, fault injection becomes increasingly complex, requiring more knowledge about the device and its behavior under radiation.
We present results from laser cartography, heavy-ion micro-beam scanning, and static tests on Xilinx 7 Series FPGAs under heavy ions, fast neutrons and thermal neutrons. Laser cartography and micro-beam scanning were used to understand the organization of the FPGA memory, while static tests were used to derive statistical profiles of the occurrence of single- and multiple-bit SEUs (SBUs, MBUs) in memory. This information was used to enhance the fault injector and devise new fault injection methodologies, including improvement of the consistency between fault injection and radiation tests, interoperation with memory scrubbing and other fault tolerance and mitigation techniques, increasing the fault injection speed to reduce fault injection campaign time, and extending the fault injector to the Xilinx UltraScale+ device family. These results are supported by case-study applications including modules generated by high-level synthesis, softcore microprocessors and convolutional neural networks using different fault mitigation techniques.
European efforts to boost competitiveness in the space services sector promote the research and development of advanced software and hardware solutions. The EU-funded HERMES project contributes to the effort by qualifying radiation-hardened, high-performance programmable microprocessors and developing a software ecosystem that facilitates the deployment of complex applications on such platforms. The main objectives of the project include reaching a technology readiness level of 6 (i.e., validated and demonstrated in relevant environment) for the rad-hard NG-ULTRA FPGA with its ceramic hermetic package CGA 1752, developed within projects of the European Space Agency, French National Centre for Space Studies and the European Union. An equally important share of the project is dedicated to the development and validation of tools that support multicore software programming and FPGA acceleration.
The HERMES project selected the Bambu high-level synthesis tool, which translates C/C++ code into Verilog/VHDL, for its development ecosystem. In HERMES, Bambu has been and will be extended to support new FPGA targets, architectural models, model-based design, and input applications. The increased performance offered by FPGAs is thus made available also to software developers who do not have hardware design expertise. During the workshop presentation, we will share the latest results and developments of the Bambu high-level synthesis tool in the context of the EU H2020-funded HERMES project.
NX is introducing a new generation of rad-hard SoC FPGAs to address platform and payload applications from new space to class 1 missions. NX will give an overview of the new generation of NanoXplore rad-hard SoC FPGAs, NG-ULTRA and ULTRA 300, and their ecosystem, including an update on component development and qualification status, the new IMPULSE software tools, and third-party tools.
The EU-funded "qualification of High pErformance pRogrammable Microprocessor and dEvelopment of Software ecosystem (HERMES)" project was launched in March 2021 to increase the rad-hard NG-ULTRA SoC TRL to 6 and to develop and validate a software ecosystem that will allow users to take full advantage of the capabilities of this platform.
FentISS collaborates in this project jointly with Airbus Defence and Space France, Thales Alenia Space France, NanoXplore, Politecnico di Milano and STMicroelectronics, by adapting our XtratuM-NG hypervisor, the RTEMS6 ESA QDP and an MMU-less Linux version to this hardware platform, and by delta-qualifying the XtratuM hypervisor for ECSS category B.
In addition, the CNES "XtratuM NG porting and qualification on NG-LARGE" study was launched in June 2022, in which the role of FentISS is to adapt our XtratuM-NG hypervisor and our ARINC 653-like LithOS run-time to the NG-LARGE SoC, and to delta-qualify the XtratuM-NG hypervisor for ECSS category B.
This presentation describes the progress of our work on both the NG-LARGE SoC and the NG-ULTRA SoC boards.
GRLIB is a VHDL IP library, developed and maintained by Frontgrade Gaisler (previously Cobham Gaisler) that provides reusable VHDL IP cores for the development of system-on-chip designs. The library includes a variety of IP cores such as processors, memory controllers, bus infrastructure, and peripherals that can be used to build digital systems ranging from simple controllers to complex system-on-chip designs with fault tolerance features. GRLIB is designed to be flexible and easy to use and is widely used in the space industry.
The library is vendor independent, with built-in support for different EDA tools and target technologies, and includes template designs for popular development boards from major vendors. The availability of template designs and IP cores with extensive heritage reduces the time and effort required for design and verification, and it also helps to ensure that the systems are reliable and of high quality.
The presentation will give an overview of GRLIB and provide a description of the latest additions to the library: improved support for Lattice and NanoXplore FPGA target technologies, the RISC-V NOEL-V processor, a High-Speed Serial Link Controller supporting WizardLink and SpaceFibre, improvements to the GRSCRUB FPGA supervisor, a NAND Flash memory controller, and a DDR2/DDR3 memory controller with strong EDAC capabilities.
We present different use cases of FPGAs in science projects at IRAP (Institut de Recherche en Astrophysique et Planétologie):
For each case, the scientific context and the development constraints will be described, and we will explain the reasons that led to the technical choices. We will give feedback on the development process, describing methodology and tools.
This presentation will give an overview of FPGA usage in the instruments themselves, but also during development.
As on-board payload complexity and generated data volume increase, the demand for high-throughput data transfer becomes crucial. Higher transfer speeds are required between payload modules (to process data) but also between payload and platform (to store data before sending it back to Earth). One of the standard interfaces to achieve this task is SpaceFibre: a multi-Gbps on-board network technology dedicated to spaceflight applications, which uses electrical or fibre-optic cables to provide data rates up to 6.25 Gbps (and potentially beyond). In addition, this technology is the successor of SpaceWire, not only improving the data rate but also reducing the cable mass and improving QoS and FDIR capabilities, while remaining compatible with SpaceWire at the packet level.
This work presents the results of the SFIC project under the Polish Industry Incentive Scheme (PLIIS). The main goal of the project was to demonstrate SpFi as a high-throughput data transfer protocol used to transmit hyperspectral images to compression cores implementing two lossless data compression standards developed by the CCSDS: CCSDS-121 and CCSDS-123. The CCSDS-121 standard compresses raw uni-dimensional data, while CCSDS-123 has been specifically designed for multispectral and hyperspectral images. The demonstrator offers the capability to use either the standalone CCSDS-123, or the combination of CCSDS-123 as a pre-processor with CCSDS-121 (which together form SHyLoC), to compress hyperspectral images from two different datasets. Data is fed via SpaceFibre operating at 2.5 Gbps. The achievement of the above objectives was demonstrated by VHDL development, integration and hardware tests on an RTG4 FPGA Development Kit.
The increasing requirements in cost and performance for the NewSpace approach combined with the radiation tolerance capability demanded in space systems have created the need to develop high-performance, low-cost, fault-tolerant systems for space missions. Unfortunately, although radiation-hardened devices meet the radiation tolerance requirements, these devices may not always meet performance and cost requirements. In contrast, commercial off-the-shelf devices may meet performance and cost requirements, but they are radiation susceptible, so they do not meet safety requirements. To achieve a platform that meets these performance, cost, and safety requirements, this work proposes a novel reconfigurable fault-tolerant and open-source processing system for space applications. On the hardware side, the system combines a RISC-V processor suitable for space environments (NOEL-V) with ARTICo³, a reconfigurable multi-accelerator architecture that can dynamically trade-off between computing performance, energy efficiency, and fault-tolerance. On the software side, the host processor runs under RTEMS real-time operating system. To enhance fault tolerance, the system is reinforced by a hierarchy of heterogeneous configuration memory scrubbers.
We present a System-on-a-Chip (SoC) architecture, based on a Field Programmable Gate Array (FPGA), suitable for satellite quantum communication, which exploits a COTS board: the ZedBoard by AvNet. Besides providing very high flexibility thanks to its Zynq-7000 SoC, which includes both an FPGA and a CPU, this architecture makes it possible to implement a “1-random-1-qubit” encoding, in which a unique random number, generated with no expansion by a Quantum Random Number Generator (QRNG), is used to encode a single qubit in a discrete-variable Quantum Key Distribution (QKD) scheme exploiting the polarization degree of freedom of single photons.
The architecture has two main layers: one is implemented at CPU level and is responsible for high-level functions (data transfer, parameter setting); the other is implemented on the FPGA to handle the high-speed and deterministic functions (generation of the output signals driving the electro-optical setup). This offers very high flexibility, as the FPGA design is strictly reserved for the specific functions that require precise timing and high speed.
On the FPGA, 3 signals (5 ns width) are generated with a repetition rate of 50 MHz (due to analog bandwidth limits). We also exploit both CPUs to realize a continuous stream from an external source (QRNG/PC) through a 1 Gbps TCP connection, which allows continuous qubit transmission without interruptions. Given a 4-bit encoding for each qubit (2 bits for polarization, 2 for intensity), the architecture is implemented to sustain more than 200 Mbit/s. The data transfer is organized in two processes: from the QRNG/PC to CPU0, where the data is stored in the on-board RAM; and from CPU1 to the FPGA, where the data is moved from RAM to Block-RAM. The whole procedure is synchronized through interrupts (FPGA<->CPU1; CPU1<->CPU0; CPU0<->QRNG/PC). Over recent years, this system (or variations of it) has been successfully used in several (satellite) QKD/QRNG experiments realized by the QuantumFuture research group. It has also been implemented in the commercial QKD systems provided by ThinkQuantum, a spin-off company of the University of Padova. Results were recently published in a peer-reviewed article (A. Stanco et al., DOI: 10.1109/TQE.2022.3143997).
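As a rough sketch of the FPGA-side handling described above, one 4-bit qubit word could be popped from Block-RAM each 50 MHz cycle and decoded into the outputs driving the electro-optical setup. All names and the exact decoding are assumptions made for illustration, and the 5 ns pulse shaping is assumed to happen in the downstream analog chain.

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative sketch only: consume one 4-bit qubit word per 50 MHz
-- clock cycle (2 bits polarization, 2 bits intensity) and drive the
-- electro-optical outputs. Names and decoding are assumptions.
entity qubit_encoder is
  port (
    clk50      : in  std_logic;            -- 50 MHz qubit clock
    word_valid : in  std_logic;            -- BRAM stream has data
    qubit_word : in  std_logic_vector(3 downto 0);
    rd_en      : out std_logic;            -- pop next word from BRAM
    pol_sel    : out std_logic_vector(1 downto 0);  -- polarization state
    int_sel    : out std_logic_vector(1 downto 0);  -- intensity level
    pulse_trig : out std_logic             -- shaped to 5 ns downstream
  );
end entity;

architecture rtl of qubit_encoder is
begin
  process (clk50)
  begin
    if rising_edge(clk50) then
      pulse_trig <= '0';
      rd_en      <= '0';
      if word_valid = '1' then
        pol_sel    <= qubit_word(3 downto 2);
        int_sel    <= qubit_word(1 downto 0);
        pulse_trig <= '1';  -- external analog chain shapes the 5 ns pulse
        rd_en      <= '1';
      end if;
    end if;
  end process;
end architecture;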
The system was tested with a QRNG device, able to provide >200 Mbit/s, for 55 hours, showing no interruptions and correctly delivering the data for qubit transmission. Most current systems exploit a low-rate QRNG (~Mbit/s) and expansion algorithms to reach the required bitrate, but with a major drawback in security, as the transmitted qubit sequence is not fully random due to the expansion algorithms. Our system thus offers a higher level of security for QKD thanks to the true randomness of the qubit sequence. Given the current state of the European and Italian satellite QKD missions, this represents a relevant result, since such missions are considering payloads with both a QKD transmitter and a QRNG. Furthermore, as CubeSat technology becomes more prominent and COTS components find their place in space missions, a COTS-based system for a QKD-QRNG apparatus can be considered a valid baseline for satellite quantum communication.
Requirements for space sensor data to be sampled at rates in the hundreds of giga-samples per second, combined with constraints on the growth of downlink bandwidth, force a dramatic increase in the need to process vast quantities of data aboard satellites. At the same time, emerging space programs face tight cost constraints and compressed development timelines. The AMD 7nm XQR Versal™ adaptive system-on-chip devices provide a flexible system architecture featuring massive amounts of scalar processing, vector processing, logic elements and memory, enabling seamless and reliable processing throughput and connectivity. XQR Versal devices have completed Class B qualification, adapted from MIL-PRF-38535 with modifications made by AMD for organic packages. They provide ample bandwidth for signal and data processing as well as machine learning (ML) inferencing, and allow for virtually unlimited on-orbit reconfigurability. This presentation provides an overview of the features of the XQR Versal devices, with an update on radiation data for the 7nm Versal in proton and heavy-ion environments. We will cover the latest qualification results and review the materials and construction of the ruggedized organic package used in the XQR Versal ACAP devices.
An increasing number of on-board processing applications require intelligent in-orbit processing to extract value-added insights rather than clogging precious RF downlinks with raw data for post-processing on the ground. Some applications require autonomous, real-time decision making: e.g. a space-debris retrieval spacecraft outside of its ground-station coverage would not be able to receive a late command to initiate a collision-avoidance manoeuvre, and space-domain awareness from multiple sensors followed by object detection and classification may require an immediate friend-or-foe decision. High-definition SAR imagery is generating increasingly huge amounts of Earth-observation data, and in-orbit AI inference and the implementation of neural networks allow for feature identification, scene segmentation and characterisation.
Space-grade FPGAs, ACAPs, MCUs with vector-processing engines and rad-tolerant AI accelerators optimised for linear algebra and neural networks, each offer certain advantages for intelligent on-board processing. Some applications require small, low-power, Edge-based solutions while others can accommodate 140 W semiconductors.
AMD/Xilinx’s Versal ACAP (Adaptive Compute Acceleration Platform) contains an array of AI engines comprising VLIW SIMD high-performance cores containing vector processors for both fixed and floating-point operations, a scalar processor, dedicated program and data memory, dedicated AXI channels and support for DMA and locks.
The AI tiles provide up to 6-way instruction parallelism, including two/three scalar operations, two vector reads and one vector write, and one fixed- or floating-point vector operation every clock cycle. Data-level parallelism is achieved via vector-level operations, where multiple sets of data can be operated on each clock cycle. Compared to the latest FPGAs and microprocessors, AI engines improve the performance of machine-learning algorithms by 20X and 100X respectively, while consuming only 50% of the power.
Spacechips is developing a space reference design baselining the XCVC1902-1MSEVSVA2197 ACAP which can be used for prototyping and de-risking future mission concepts, as well as a qualified version suitable for space flight.
This work compares the implementation of smart payloads baselining ultra-deep-submicron COTS parts, space-qualified FPGAs, ACAPs, MCUs with vector-processing engines and rad-tolerant AI accelerators. Each of these offers an intelligent processing solution depending on the space application, the required performance and operational mission constraints.
The work also describes the specification and capability of Spacechips’ Versal Space Reference Design, discusses our design-in experiences of the XCVC1902-1MSEVSVA2197 ACAP and the challenges of powering and thermally managing a large, 140 W semiconductor.
In recent years, research in the space community has shown a growing interest in Artificial Intelligence (AI), mostly driven by systems miniaturization and commercial competition. Among the available devices for accelerating AI on board satellites, Field Programmable Gate Arrays (FPGAs) constitute a valuable solution thanks to their energy efficiency and low non-recurring costs. To facilitate and accelerate the experimentation and development of AI on FPGAs, several automation toolflows have been released.
This session presents a novel technology-independent toolflow for automating DNN deployment on FPGAs in space applications. Given an input DNN, the framework first applies compression techniques to shrink model complexity and ease hardware implementation. The acceleration stage of the proposed system features a fully handcrafted Hardware Description Language (HDL)-based architecture that poses no limit on device portability thanks to the absence of third-party IPs, high scalability, and fine-grained control over resource mapping. An automation process directly generates the HDL sources of the accelerator, customized for the target DNN-FPGA pair, thus making the presented solution an end-to-end, ready-to-use toolflow. Users retain a high degree of control over the final design, as they can specify constraints on accuracy, inference time, or resource usage percentages.
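As a flavour of the kind of vendor-independent HDL such a flow emits (a minimal, hypothetical fragment; the generated accelerator is far more elaborate), the basic building block is typically a fixed-point multiply-accumulate processing element:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Minimal sketch of a fixed-point multiply-accumulate processing
-- element, the basic building block of DNN accelerators. Generic
-- widths; names are illustrative, not the toolflow's actual output.
entity mac_pe is
  generic (
    DW : positive := 8;   -- data/weight width
    AW : positive := 24   -- accumulator width
  );
  port (
    clk     : in  std_logic;
    clr     : in  std_logic;  -- start of a new dot product
    en      : in  std_logic;
    a       : in  signed(DW-1 downto 0);  -- activation
    w       : in  signed(DW-1 downto 0);  -- weight
    acc_out : out signed(AW-1 downto 0)
  );
end entity;

architecture rtl of mac_pe is
  signal acc : signed(AW-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if clr = '1' then
        acc <= (others => '0');
      elsif en = '1' then
        acc <= acc + resize(a * w, AW);  -- one MAC per clock cycle
      end if;
    end if;
  end process;
  acc_out <= acc;
end architecture;

Because a block like this uses only plain arithmetic and no vendor primitives, it maps equally to Xilinx, Microsemi and NanoXplore devices, which is the portability property the toolflow relies on.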
The presentation illustrates the design choices behind the system dataflow and also provides insight into user control. We present and discuss implementation results of DNN models on both radiation-tolerant and radiation-hardened devices from different vendors (Xilinx, Microsemi).
Thanks to its high device portability, the proposed toolflow is a valuable candidate for deploying DNNs on devices not yet supported by any other framework. The availability of a DNN-to-FPGA toolflow that fully supports state-of-the-art space-qualified FPGAs, including NanoXplore technology, will greatly promote the deployment of AI in space missions.
We present the ongoing development of a Payload Data Processor for small satellites based on a Xilinx Kintex UltraScale FPGA. To increase the flexibility and reliability of the system, we use a Vorago Technologies ARM Cortex-M4 as a companion microcontroller. The controller configures the FPGA, can be used as an external scrubber, and controls the payload’s data acquisition electronics. A clear hardware separation between tasks related to payload telemetry and control and those related to data processing not only increases the reliability of the system but also makes it possible to switch off the FPGA to save power without completely relinquishing control of the payload. This latter advantage is especially important for power-constrained small-satellite missions. A first in-orbit validation of the Payload Data Processor is planned for 2024.
A Field Programmable Gate Array (FPGA) is becoming an essential component of a satellite, since it can support various digital functionalities. However, like any other integrated circuit, an FPGA is not immune to Single Event Upsets (SEUs) caused by charged radiation particles. The most suitable architectures for testing the FPGA fabric in a harsh radiation environment are simple chain architectures (one chain link consists of one LUT and one D flip-flop) with different SEE error mitigation techniques. The proposed research uses such architectures on an experimental device: a Commercial Off-The-Shelf (COTS) SRAM-based FPGA. The research aims to evaluate the COTS FPGA both in the harsh environment of outer space and in a controlled laboratory radiation environment. The first part of the COTS FPGA evaluation will take place in outer space as part of the TRISAT-R satellite mission at an altitude of 6000 km. The most suitable facility for evaluating the COTS FPGA in a laboratory environment is CERN’s CHARM facility with its mixed-field radiation.
The results gained from the TRISAT-R satellite mission in MEO and from the laboratory campaigns will help us correlate and evaluate SRAM FPGA behaviour in harsh radiation environments. The Device Under Test (DUT) is a COTS SRAM-based Xilinx Artix-7 FPGA. The target device tests three different FPGA images stored in NAND flash memory; the images differ in the applied Triple Modular Redundancy (TMR) mitigation technique.
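For illustration, one link of such a chain with TMR applied could look like the sketch below. The entity and port names, the trivial LUT function, and the voter granularity are assumptions; the actual TRISAT-R test images may differ, and a full implementation would also triplicate the chain routing and inputs.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch of a single TMR-protected chain link: one combinational
-- function (mapped to a LUT) feeding one flip-flop, triplicated
-- with a 2-of-3 majority voter. Illustrative only.
entity tmr_chain_link is
  port (
    clk : in  std_logic;
    d   : in  std_logic;
    q   : out std_logic
  );
end entity;

architecture rtl of tmr_chain_link is
  signal ff : std_logic_vector(2 downto 0) := (others => '0');
  -- keep the three replicas separate through synthesis
  -- (attribute name varies per tool; shown here for Xilinx)
  attribute dont_touch : string;
  attribute dont_touch of ff : signal is "true";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      for i in 0 to 2 loop
        ff(i) <= not d;  -- trivial LUT function of the chain input
      end loop;
    end if;
  end process;
  -- 2-of-3 majority vote masks an upset in any single flip-flop
  q <= (ff(0) and ff(1)) or (ff(0) and ff(2)) or (ff(1) and ff(2));
end architecture;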
During the tests we will measure the number of errors on the chains’ data paths, the number of successful and failed reconfigurations of the FPGA, latch-up occurrences in the Artix-7 FPGA, and power consumption.
To date we have concluded tests at the CHARM facility, where we showed that a COTS FPGA can be successfully reconfigured in a harsh radiation environment. Furthermore, the longer the experiment was exposed to radiation particles, the more power was required to configure the FPGA. Regarding errors on the chains’ data paths, the data path without TMR had a higher number of errors compared to the other two data paths, which had a similar number of errors. The second part of the research takes place in Earth orbit as part of the TRISAT-R mission. This part is still in progress, as we are waiting for a time slot that will allow us to perform the measurements described above. Our plan is to conduct measurements until the end of February.
With increasing demands on the computing power of future missions, it is essential that available resources are used efficiently, which can be achieved through greater flexibility, i.e., total or partial in-orbit reconfiguration of FPGAs. This work proposes a generic solution to safely reconfigure FPGAs from different manufacturers and technologies (Flash, SRAM) in space, including a generic interface towards the S/C.
The R3FPGA system consists of a reconfigurable target FPGA that represents the actual application of the mission, and a reconfiguration engine that is responsible for managing the reception, storage and deployment of the configuration bitstreams and for controlling the proper state of the target FPGA. Special attention is given to overall reliability and robustness against radiation-induced SEEs and SEFIs during the critical reconfiguration process, as well as to the interaction with the scrubbing process, which is essential for SRAM-based FPGAs. Additionally, a generic interface based on standardized TM/TC commands (PUS) is developed and implemented in the bootloader.
The reconfiguration engine is designed as a flexible, radiation-hardened solution capable of connecting to multiple FPGA devices. The proposed GR716B-based solution comprises several memory devices and offers scrubbing as well as reconfiguration via miscellaneous interfaces. The goal is to support full and partial reconfiguration of the Xilinx UltraScale (SRAM), NanoXplore NG-Medium (SRAM), and Microchip PolarFire (Flash) families. Nevertheless, this work is carried out with an eye on future needs; the developed reconfiguration and scrubbing engine can in principle be adapted to support other FPGAs such as the NG-Ultra, as well as devices with embedded processors and AI elements such as the XQR Versal.
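To illustrate the control flow involved, the following is a deliberately simplified, hypothetical sketch of a scrub-and-reconfigure supervisor. The actual GR716B-based engine manages bitstream storage, multiple configuration interfaces and PUS commanding, none of which is shown here; all names and the single-error-signal interface are assumptions.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical, simplified supervisor FSM: periodically scrub the
-- target FPGA's configuration memory and fall back to a full
-- reconfiguration when scrubbing reports an uncorrectable error.
entity reconfig_supervisor is
  generic (SCRUB_PERIOD : positive := 50_000_000);  -- cycles between scrubs
  port (
    clk            : in  std_logic;
    rst            : in  std_logic;
    scrub_done     : in  std_logic;
    uncorrectable  : in  std_logic;  -- scrubber found a non-fixable error
    reconfig_done  : in  std_logic;
    start_scrub    : out std_logic;
    start_reconfig : out std_logic
  );
end entity;

architecture rtl of reconfig_supervisor is
  type state_t is (IDLE, SCRUB, RECONFIG);
  signal state : state_t := IDLE;
  signal timer : unsigned(31 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      start_scrub    <= '0';
      start_reconfig <= '0';
      if rst = '1' then
        state <= IDLE;
        timer <= (others => '0');
      else
        case state is
          when IDLE =>
            timer <= timer + 1;
            if timer = SCRUB_PERIOD - 1 then
              timer       <= (others => '0');
              start_scrub <= '1';
              state       <= SCRUB;
            end if;
          when SCRUB =>
            if uncorrectable = '1' then
              start_reconfig <= '1';  -- escalate to full reconfiguration
              state          <= RECONFIG;
            elsif scrub_done = '1' then
              state <= IDLE;
            end if;
          when RECONFIG =>
            if reconfig_done = '1' then
              state <= IDLE;
            end if;
        end case;
      end if;
    end if;
  end process;
end architecture;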
The feasibility of the solution will be assessed by the implementation of multiple use cases in dedicated HW demonstrators. A scientific use case of an optical or hyperspectral instrument is selected for the NanoXplore and Xilinx SRAM-based FPGAs. Specifically, for the Xilinx UltraScale (SRAM), a custom development board with the (XQR)KU060 FPGA is manufactured that can be reconfigured via the SelectMAP interface and can also be used to perform fault injection into the FPGA and thus exercise FDIR aspects. For the Microchip PolarFire (Flash), a radar use case is foreseen, in which a modern reflector Synthetic Aperture Radar (SAR) with wide swath and scan-on-receive (SCORE) operation is implemented on an evaluation board hosting an MPF300 FPGA that supports the JTAG interface for reconfiguration.
The results of this study will support the use of FPGA reconfiguration under space conditions. For this reason, a test platform consisting of a reconfiguration engine and a target FPGA is exposed to radiation (heavy ion and proton). The upcoming test platform includes the reconfiguration engine and a science use case demonstrator. The overall goal of the radiation campaign is to subject the UltraScale FPGA to heavy-ion and proton tests with a focus on reconfiguration.
The work is carried out under ESA contract No. 4000134941/21/NL/CRS.
With the Fraunhofer On-Board Processor (FOBP), we have created a fully in-orbit reconfigurable FPGA experimentation platform that will launch into geostationary orbit in 2023 as part of the Heinrich Hertz satellite mission.
It provides digital signal processing for satellite communication with up to 450 MHz of bandwidth and is flexibly programmable from the ground.
Reconfiguring the two Virtex-5QV FPGAs changes the payload behaviour, but clearance by the satellite operator is not necessary. Instead, the involved parties demand exhaustive qualification before the communication experiment is conducted. In addition to testing the FPGA design in the target environment, the on-board software and its compatibility with the software-defined ground station have to be verified.
Continuous Integration (CI) provides the means to automate the necessary steps on ground. It not only reduces the time required to set up the measurement devices but also, by ensuring reproducibility of the test results, the risk of mistakes made by human operators.
CI refers to the practice of regularly integrating code changes into a central repository, where automated tests are run to ensure that the code is stable and functioning properly. The practice originated in software development but can also be applied to automated tests on the target hardware, for qualifying the target FPGA design and, finally, for in-space deployment. This includes the integration of several devices, such as a power-supply tester and a thermal vacuum chamber.
As the first step, the FPGA design is built from VHDL; our fully automated build system drives the Xilinx build tools. Each FPGA design is fully compatible with all FOBP models (on-ground reference and flight hardware), which ensures that test results can be applied to the model in space. Afterwards, the on-board software is built and tested for operation on an IP-based processing unit. The FPGA design and software are then integrated into a single image.
The FOBP is able to receive design updates over the air through a custom in-band telemetry and telecommand link and reboots with upgraded firmware and software.
Additionally, a human-readable report is generated, the runtime logbook for the space components is supplemented, and the resulting image is stored in the database of verified builds. This procedure is also applied for testing and upgrading ground-station components such as the software-defined radio modem.
With this approach to continuous integration in space, we have implemented proven methods for developing on-ground software and verifying space-grade FPGA designs including on-board software. This enables us to quickly and safely qualify novel experiments or even smaller updates for our Satellite Communication Laboratory in Space.
The talk will present the use of an eFPGA for supporting application-specific, user-defined instructions in LEON2-FT. The work extends the SIMD-Within-a-Register (SWAR) concept developed by daiteq for LEON2-FT and NOEL-V, which aims to improve performance for applications such as GNSS or image processing, where several data values can be stored next to each other in one processor register and processed by one assembly instruction in one clock cycle. In the presented work, the SWAR concept is extended with eFPGA-specific aspects such as interfacing and fabric reconfiguration. The overall goal is to allow end users to define and use their own application-specific instructions in future ASIC versions of the LEON2-FT and NOEL-V processors, by describing the new instructions in HDL and generating a configuration bitstream that can be downloaded to the eFPGA during processor operation. We will show an implementation of custom instructions designed for GNSS processing and CCSDS-121 image compression in the Menta eFPGA technology, and conclude with more general observations on efficient forms of custom instructions.
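To make the SWAR idea concrete, the sketch below shows a purely illustrative datapath that performs four independent 8-bit additions on lanes packed into 32-bit operands in a single cycle. Entity and port names are assumptions for illustration and do not reflect daiteq's actual LEON2-FT/NOEL-V integration.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative SWAR datapath: four independent 8-bit additions on
-- lanes packed into 32-bit register operands, one result per cycle.
-- Each lane wraps around independently; no carry crosses a lane
-- boundary. Names are assumptions, not the actual integration.
entity swar_add8x4 is
  port (
    rs1, rs2 : in  std_logic_vector(31 downto 0);
    rd       : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of swar_add8x4 is
begin
  gen_lanes : for i in 0 to 3 generate
    rd(8*i+7 downto 8*i) <= std_logic_vector(
      unsigned(rs1(8*i+7 downto 8*i)) + unsigned(rs2(8*i+7 downto 8*i)));
  end generate;
end architecture;

In a real integration, a datapath like this would sit behind the custom-instruction decode stage; in the eFPGA variant, its logic would be loaded into the fabric as a configuration bitstream at run time.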
Verification is critical for acceptable FPGA quality, but unfortunately achieving good FPGA quality is often very time-consuming and difficult. However, with a good testbench architecture the workload can be reduced significantly while at the same time really improving the quality.
UVVM provides the best VHDL testbench architecture possible and also allows a unique reuse structure. Entry-level UVVM is dead simple, even for beginners, and for more advanced verification the standardised verification components, the high-level SW-like commands and all the other features allow even really complex verification scenarios to be handled in a structured and understandable way. UVVM is open source and provides a great testbench kick-start with open-source BFMs and verification components for UART, SPI, AXI, AXI-Lite, AXI-Stream, Avalon-MM, Avalon-Stream, I2C, GPIO, SBI, GMII, RGMII, Ethernet, Wishbone, a clock generator, and an error injector. Other equally important functionalities in UVVM are advanced randomisation, functional coverage, watchdogs, and specification coverage. The latter allows very efficient requirement tracking and provides a Requirement Traceability Matrix, typically mandatory for space applications, functional safety, etc.
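As a flavour of the entry-level usage, here is a minimal sketch assuming the UVVM Utility Library is compiled into the uvvm_util library and a VHDL-2008 simulator is used; the ready signal stands in for a DUT status flag.

library ieee;
use ieee.std_logic_1164.all;

library uvvm_util;
context uvvm_util.uvvm_util_context;

-- Minimal UVVM-style sequencer sketch (illustrative; a real
-- testbench would instantiate the DUT, BFMs and verification
-- components).
entity tb_minimal is
end entity;

architecture sim of tb_minimal is
  signal clk   : std_logic := '0';
  signal ready : std_logic := '0';
begin
  clk   <= not clk after 5 ns;  -- 100 MHz test clock
  ready <= '1' after 30 ns;     -- stand-in for a DUT status flag

  sequencer : process
  begin
    log(ID_LOG_HDR, "Starting minimal demo sequence");
    await_value(ready, '1', 0 ns, 100 ns, ERROR, "Wait for DUT ready");
    check_value(ready, '1', ERROR, "DUT must still be ready");
    report_alert_counters(FINAL);  -- summarise the simulation result
    std.env.stop;
  end process;
end architecture;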
Major parts of the UVVM extensions over the last 5 years have been made in tight cooperation with ESA, during two UVVM-dedicated ESA projects. This has assured very good support for mission-critical FPGA development, but also for safer and faster FPGA development in general.
This presentation will give you a fast introduction to UVVM, show both simple and advanced features, and explain how they will help you make a better testbench, and develop it much faster.
FPGAs are getting more complex with each product generation.
Nowadays, systems in most cases require many complex FPGAs, which are interconnected and exchange data and control in a robust and reliable way.
Not only must the functionality of the system be proved, but also its robustness and its functional behaviour under stress conditions, through intensive tests verifying bandwidth tolerance, in order to provide the requested service to the highest standards.
To ensure the functionality of the design and its robustness, and to provide a way to quickly reproduce test cases, verification of the FPGA and of the system where it is instantiated is critical.
In modern systems there is no single, optimal verification level that can prove the correct implementation of the FPGA and the system. Different verification levels must be used in a divide-and-conquer approach to ensure the quality of verification, so that errors do not show up later at the customer site during the final integration phase.
Complex FPGAs can be simulated only to a certain functional level in order to keep simulation time and testbench complexity as low as possible.
Simulation of complex FPGAs also requires the generation of testbenches that can be as complex as the design itself, with sufficient automation of stimulus generation and result checking so that errors are caught automatically as soon as they are produced, minimizing the duration of VHDL fix-and-simulate cycles and saving development time.
The effort and complexity of simulating systems with many FPGAs grow exponentially in cost and time, even when using modern verification methodologies such as OSVVM or UVM.
In order to verify complex systems with a large number of interconnected FPGAs while reducing cost, testbench complexity and verification time, and while increasing functional coverage and maximizing results, it is necessary to look for other approaches.
One of these approaches is HW emulation on large, fast FPGAs. Traditionally, HW emulation has been used to prototype big ASICs on a single FPGA, or on many, to demonstrate functionality before tape-out.
One of the environments used at AIRBUS CRISA during the last two years has been ProFPGA, a powerful HW emulation system, to verify complex FPGA systems involving many FPGAs, as a way to perform full system integration and verification: anticipating integration errors, increasing functional coverage, speeding up verification, and ensuring the functionality, robustness and quality of the FPGA and the system.
We will show how we used a powerful emulation system like ProFPGA to emulate and interconnect a Power System Unit composed of 32 interconnected FPGAs, to overcome the problems of traditional verification by simulation, and how we were able to find and resolve tough issues that would otherwise have been discovered very late in the integration process.
Yosys is a well-known open source framework for design synthesis and verification. This talk presents an overview of the Yosys-based tools, with a focus on issues relevant to applications of FPGAs in space. Thanks to its extensibility and open interfaces, Yosys is an excellent basis for building both generic and highly customized EDA tools. Examples of such tools by YosysHQ include formal verification of safety properties with SBY, mutation coverage and fault injection with MCY, the new formal sequential equivalence checking tool EQY, and upcoming deep formal cover trace generator SCY. The same interfaces enable third party developers to create their own applications such as the Linty tools (https://www.linty-services.com/) detecting bugs and enforcing coding standards.
SBY (SymbiYosys) is an open source tool for formal verification of SVA properties, such as safety properties, cover properties, and liveness properties, and for the generation of formal witness traces.
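As a minimal illustration for VHDL users (assuming an open source front end such as GHDL with the ghdl-yosys-plugin, where properties can be written as PSL comments instead of SVA), a simple safety property over a saturating counter might look as follows. The design and names are hypothetical.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Minimal formal example: a saturating counter that must never
-- exceed its limit. The PSL property below can be checked with an
-- open source flow (GHDL + ghdl-yosys-plugin + SBY); this is an
-- illustrative sketch of the methodology, not a vendor flow.
entity sat_counter is
  port (
    clk : in  std_logic;
    rst : in  std_logic;
    en  : in  std_logic;
    cnt : out unsigned(3 downto 0)
  );
end entity;

architecture rtl of sat_counter is
  signal c : unsigned(3 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        c <= (others => '0');
      elsif en = '1' and c /= 10 then
        c <= c + 1;  -- saturate at 10
      end if;
    end if;
  end process;
  cnt <= c;

  -- psl default clock is rising_edge(clk);
  -- psl assert always (c <= 10);  -- safety: counter never exceeds its limit
end architecture;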
MCY (Mutation Cover with Yosys) is an open source framework for mutation coverage with a formal verification twist. A main issue with conventional coverage is false negatives: code that appears covered because it is executed, but whose consequences are never propagated to output ports, or are not checked by the testbench. Mutation coverage solves this issue by introducing a change to the design and making sure that the testbench fails on the modified design. However, this adds a new issue: false positives, i.e. mutations that are not considered covered simply because they do not actually change the behavior of the design under test. MCY employs formal methods to filter out such false positives, thus providing a significantly more useful coverage metric than most other coverage tools.
Instead of testing a testbench, the same functionality can also be used to test a design's fault tolerance: By setting up the formal test to flag any unexpected change in outputs, it will exhaustively check whether a single-bit mutation can cause an error.
EQY (EQuivalence checking with Yosys) is a brand-new open source framework for formal equivalence checking, with a focus on verification of post-synthesis netlists generated by other tools. It employs a two-step process: first, candidates for equivalent points in the gold and gate netlists are identified; then those equivalent points are used to split the large equivalence-checking problem into many smaller ones, each of which can be solved in a relatively short time.
SCY (Sequence of Covers with Yosys) is an upcoming open source framework for producing deep formal cover traces for large designs. With SCY, a user can specify a sequence of cover statements that is then solved eagerly by the framework, one at a time, using the final state of one cover trace as the initial state for the next cover property. With that, it becomes possible to generate deep formal traces showing complex bus interactions on an entire SoC. With SCY we also add support for data-flow properties to our formal tools, as those properties are especially useful for encoding properties dealing with complex system buses.
Linty continuously scans your code (VHDL and Verilog/SystemVerilog) to keep it maintainable and reliable (bug free).
Linty is likely to become your favorite companion. We can guarantee that:
You're welcome to attend this demo that will highlight Linty's best attributes:
ARIETIS is a standalone 3-axis electronic gyroscope currently under development by INNALABS (Ireland) and EREMS (France).
One of EREMS’ responsibilities on this product is the development of the FPGA design, based on the European NanoXplore technology: the NX1H35AS (NG-Medium).
The ambitious measurement precision of this gyroscope is at the root of several technical challenges for the FPGA engineers:
- Complex regulation loops
- Large amount of resources used (about 60% of the NG-Medium)
- Very high working frequency (80 MHz target)
- New FPGA technology and development tools
The project started with the implementation of the design on an 'easy' FPGA target for the first models: the IGLOO2 manufactured by Microchip.
This helped the teams validate the algorithmic and functional parts of the design without being slowed down by any potential problems coming from the development tools.
The year 2022, however, was a big turning point for ARIETIS, as the first timing-closure campaign on the NG-Medium target was completed with very satisfying results.
We invested a lot of effort in NanoXplore's nxmap development tool, trying different options, defining a large number of timing and placement constraints, and also tweaking our design for better implementations.
This presentation aims at sharing part of EREMS's work and results with the space FPGA community, as we consider ARIETIS a good example of what can already be achieved with NanoXplore technology.
Future telecommunications satellites are envisioned to integrate seamlessly into the mobile communication networks of the next generation. First signs of this integration are already visible today in the ongoing normative activities on non-terrestrial networks in the current 5G New Radio standard. A general goal in these next-generation communication networks is to employ radio terminals that are more intelligent, automatically adapting their transmission to the current state of the spectral environment in order to make better use of the scarce frequency resources. For realizing such highly adaptive radios, research mainly focuses on the use of algorithms from the Artificial Intelligence (AI) domain, in particular neural networks (NNs). As a result of the success these methods have shown in the area of communications and beyond, there is a major desire in the satellite industry to deploy NNs and similar algorithms directly on board the satellites. The main challenges associated with that desire are the limited power budget and computing resources of satellites.
With the Versal Adaptive Compute Acceleration Platform (ACAP) in the XQR variant, AMD Xilinx now offers a chip in a space-grade package that is particularly targeted at machine learning applications in space, while promising more compute power than traditional FPGA-based systems-on-chip (SoCs). Combining an FPGA fabric with a new class of compute engines intended for parallel computing, including AI inference acceleration, this device opens up new opportunities in terms of on-board processing performance, but also comes with novel challenges in terms of system integration and application development for satellite manufacturers. An initial evaluation of the design flow and the capabilities of the platform is thus at the centre of this presentation.
To start, we will provide background information on contemporary techniques for deploying neural networks on FPGAs. Afterwards, we will focus on two modes of utilization of the Versal platform. First, we elaborate on the usage of Xilinx’s Deep Learning Processing Unit (DPU), a generic hardware accelerator built for deploying a variety of deep-learning operations and applications. This is Xilinx’s straightforward, out-of-the-box way of programming the Versal, performing computation on both the programmable logic and the AI Engines. However, due to the lack of customizability, we then introduce a custom co-processor design approach as an alternative to the DPU, which focuses on utilizing the Versal’s AI Engines in a more controllable manner, in this case especially for convolutional neural network operations.
Lastly, the presentation concludes with a brief dive into the three computing-parallelism schemes of the AI Engines.