All FPGAs share several design methodologies, yet each faces specific challenges. Anti-fuse FPGAs are currently heavily used in most electronic equipment for space, yet there are other technologies whose use in space is growing: Flash-based and SRAM-based. The use of COTS FPGAs is also increasing, especially for space missions with shorter lifetimes and less stringent quality constraints.
The aim of the workshop is to share experiences and wishes among FPGA designers, FPGA vendors and research teams developing methodologies to address radiation mitigation techniques and reconfigurable systems.
The topics related to FPGAs for space include (but are not limited to):
- FPGA for Artificial Intelligence/ Machine Learning
- general design, verification and test issues and good practices
- performance achievements and potential problems
- power consumption achievements and potential issues
- design tools performance, good practices and potential limitations
- radiation mitigation techniques, tools and potential limitations
- trends of FPGA usage in space applications
- reconfigurable systems
- lessons learned: ensuring successful and safe use of FPGA in space applications
- choosing the best FPGA type for our space application
- export license limitations / changes / ITAR / EAR
- package and assembly challenges
- companion non-volatile memory (when required) experience
The main FPGA vendors will present updates and will be available for questions. The detailed agenda will be published closer to the event. Presentations from at least the major design groups (Large System Integrators) are expected.
Submission of abstracts is now open via this website.
The deadline for abstract submission is 20th of January 2020 - extended to the 3rd of February.
A Demo Session and Exhibit will be organised on the first day of the Workshop during the Welcome Reception at the Wintergarden South.
Attendance to the workshop is free of charge.
Registration is required via this website, and an Indico account is required for it (company email address only, not a personal one).
To create an Indico account go to the top right corner and choose “Log in”, then “create one here” and follow the given instructions.
After an account has been created you can register using this Indico event site.
The materials presented at the workshop are intended to be published on this website after the event. All material presented at the workshop must, before submission, be cleared of any restrictions preventing it from being published on this website.
For years, Airbus Defence and Space has capitalized on the advantages of Field Programmable Gate Array technologies and has accumulated considerable heritage and experience with them.
The presentation will provide an overview of the application domains and the FPGA devices that address those application fields. Particular consideration will be paid to the experiences associated with the new European FPGAs of NanoXplore and more recent, flexible reprogrammable devices. Application domains and benefits will be discussed. As all recent technologies grow significantly in complexity, resources and functionality, new scopes and opportunities will be discussed.
Regardless of the technology, all recent devices have a significant rise in complexity in common. This also changes the application of FPGAs, from a dedicated-function solution to a complex system on chip (SoC), which influences the total development approach, increases the verification effort and requires the use of new, more advanced verification tools and methodologies.
From the user's point of view, the equipment complexity is inside the FPGA, which integrates a large part of the digital hardware, data handling software and application software. This trend is changing the way we approach designs our market considered traditional. A fast-loop co-engineering phase is necessary during the major part of the design phase to resolve the equipment complexity. The fast exchanges during co-design allow trade-offs to be made on various subjects such as power, timing, data accuracy, functional behaviour and resource utilization. From the equipment designer's point of view, a potential drawback is that the design is more exposed, even if it carries less risk. The presentation will discuss these new challenges in development philosophy and the experience with corresponding tools and methods.
Modular, reconfigurable spacecraft offer a new approach to extending mission capability and maximising the lifetime of a spacecraft. Future uses of space robotics such as in-orbit construction and servicing allow faulty or obsolete parts of a modular spacecraft to be replaced by servicer spacecraft that dock with their targets and perform upgrades and maintenance. Such manoeuvres will require a high degree of autonomy from both platforms and thus will need to leverage high-performance onboard computing, both for the robotic control and manipulation performed by the servicer spacecraft and for managing the reconfigured platform.
Thales Alenia Space in the UK (TAS UK) and The University of York (UoY) are involved in projects towards this goal and are collaborating to research autonomous network reconfiguration and fault tolerance of the onboard network based on existing space technology (SpaceWire, SpaceFibre). Both organisations have identified FPGA based MPSoCs as a solution for providing the high-performance computing that autonomous robotic systems require, using the FPGA fabric for mission-phase related hardware accelerators (e.g. vision soft co-processors) that can be swapped as the construction or maintenance task demands.
In this presentation we will describe the modular spacecraft avionics unit that TAS UK is developing for the H2020 MOSAR project. This is based on the Xilinx Ultrascale+ MPSoC and uses the "big-little" architecture to provide a split between the spacecraft module's mission functionality (executing on the "big" quad-core A53) and the support functions to provide: the communication network, module-to-module docking management and the module power management functions of the spacecraft (implemented on the "little" dual-core R5 cores).
Details of our development of an AXI4-compatible SpaceWire and RMAP IP core will also be included. RMAP forms an important part of the MOSAR fault management strategy, and this core allows processor-transparent RMAP access to the full MPSoC address range, with automatic DMA descriptors for all other SpaceWire traffic. The AXI4 interface allows it to be dropped into any Ultrascale+, Zynq 7000 or NG-Ultra based design, and several configuration options allow features such as the SpW front-end type (oversampling/clock recovery) and the output data path width (32-bit/16-bit) to be selected.
We will also present details of research by the University of York on using RMAP in an MPSoC environment. Access to the full address space of an MPSoC via RMAP brings security and fault management concerns to complex SoCs, and hardware-security-based approaches (e.g. ARM's TrustZone) could be used in future MPSoC architectures to protect against corrupt RMAP packets, failure modes of RMAP initiators, or malicious/compromised spacecraft modules. To tackle autonomy challenges, UoY is currently developing a reasoner-based, reconfigurable modular robotic platform that can cope with the uncertain environments that arise in space applications, using FPGA-based MPSoC and soft-processor technologies.
MOSAR has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant agreement No. 821996.
Part of this work is funded by EPSRC and Innovate UK under grant KTP12066.
Cobham Gaisler has developed two new processor models. LEON5 is a continuation of the LEON line of SPARC processors and NOEL-V is the first implementation from Cobham Gaisler of the open RISC-V instruction set architecture.
As with earlier generations of processor IP core implementations from Cobham Gaisler, the new processor models target both FPGA and ASIC target technologies. Since the implementations are more complex compared to earlier generations of processors, focus has been on high-capacity FPGAs such as the Xilinx Kintex Ultrascale.
The presentation will show characteristics and performance results for NOEL-V and LEON5 and discuss the current FPGA implementation results, radiation mitigation strategies and a roadmap for the mitigation strategies that are planned to be evaluated and implemented.
STAR-Dundee has extensive experience with demanding applications on radiation tolerant FPGA devices utilising SpaceWire and SpaceFibre. The SpaceFibre Interface Single-Lane and Multi-Lane IP cores have been implemented very efficiently in radiation-tolerant FPGAs utilising the high-speed SerDes integrated in capable devices including the Virtex 5QV, Microchip RTG4 and Xilinx Ultrascale devices.
To provide higher performance, multiple lanes can be used in parallel: for example, a net data rate (excluding the 8B10B encoding overhead) of 10 Gbit/s can be achieved in an RTG4 FPGA using four lanes. Higher data rates will be supported in the next generation of FPGA devices, including the Microchip PolarFire, the Xilinx KU060 and the NanoXplore NG-Large FPGAs. Work on the Microchip RT-PolarFire designs and on integration with the NG-Large SerDes is in progress.
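The arithmetic behind the quoted multi-lane figure can be sketched as follows; the 3.125 Gbaud per-lane rate is an illustrative assumption consistent with the 10 Gbit/s four-lane figure above, not a vendor specification:

```python
# Hedged sketch: net payload rate of a multi-lane SpaceFibre link after
# removing the 8B10B encoding overhead (8 data bits per 10 line bits).
def net_rate_gbps(lane_rate_gbaud, lanes, encoding_efficiency=8 / 10):
    """Net data rate in Gbit/s for a multi-lane 8B10B-encoded link."""
    return lane_rate_gbaud * encoding_efficiency * lanes

# 4 lanes at an assumed 3.125 Gbaud each -> 10 Gbit/s net, matching
# the RTG4 figure quoted in the abstract.
print(net_rate_gbps(3.125, 4))
```

The same helper shows why the raw line rate (3.125 × 4 = 12.5 Gbaud aggregate) is 25% higher than the usable payload rate.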
The SpaceFibre network is able to provide the high-reliability, high-availability, high-performance network technology which is essential for on-board data-handling elements.
The applications and integration with next generation devices will be described.
Open Source VHDL Verification Methodology (OSVVM) is an advanced verification methodology and library that simplifies the creation of structured, transaction-based tests and test environments that are powerful enough for ASIC verification, yet simple enough for small FPGA verification.
OSVVM is implemented as two separate open source libraries: OSVVM Utility Library and OSVVM Verification IP Library. Currently these are hosted on GitHub. With the IEEE 1076-2019 standardization effort, the 1076 packages are now IEEE Open Source. Following the path of IEEE 1076, OSVVM has been accepted as an IEEE Open Source project and will be migrating the primary Git repository to the IEEE hosted site sometime in Q1 2020.
OSVVM was named the #1 VHDL verification library by the 2018 Wilson Research Group ASIC and FPGA Functional Verification Study. In Europe, VHDL is used in 78% of all FPGA designs and OSVVM is used by 30% of FPGA verification teams; SystemVerilog and UVM are used by only 20% of FPGA verification teams.
The OSVVM Utility Library uses a set of packages to create features that rival language-based implementations (such as SystemVerilog and UVM) in conciseness, simplicity and capability. This presentation provides an overview of OSVVM's capabilities, including:
• Transaction-Based Modeling
• Constrained Random test generation
• Functional Coverage with an API for UCIS coverage database integration
• Intelligent Coverage Random test generation
• Utilities for testbench process synchronization
• Utilities for clock and reset generation
• Transcript files
• Self-Checking – Alerts and Affirmations
• Message filtering – Logs
• Scoreboards and FIFOs (data structures for verification)
• Memory models
The OSVVM Verification IP Library is a growing set of transaction based models. Currently the repository has models and testbenches for
• AXI4 Lite: Master and Slave
• AXI Stream: Master and Slave
• UART: Transmitter and Receiver
Looking to improve your VHDL FPGA verification methodology? OSVVM provides a complete solution for VHDL ASIC or FPGA verification. There is no new language to learn. It is simple, powerful, and concise. Each piece can be used separately. Hence, you can learn and adopt pieces as you need them.
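As a rough illustration of the "Intelligent Coverage" idea in Python pseudo-form (OSVVM itself is a VHDL library; the class and method names below are invented for illustration), randomizing only across uncovered bins reaches the coverage goal with no wasted stimulus:

```python
import random

# Toy model of coverage-driven randomization: the next stimulus is
# drawn only from coverage bins that have not yet reached their goal,
# so the test converges faster than uniform constrained random.
class CoverageModel:
    def __init__(self, bins, goal=1):
        self.goal = goal
        self.counts = {b: 0 for b in bins}

    def holes(self):
        # Bins still below their coverage goal.
        return [b for b, c in self.counts.items() if c < self.goal]

    def rand_cov_point(self):
        # The core idea: randomize across the coverage holes only.
        return random.choice(self.holes())

    def record(self, value):
        self.counts[value] += 1

    def is_covered(self):
        return not self.holes()

cov = CoverageModel(bins=range(8), goal=2)
iterations = 0
while not cov.is_covered():
    cov.record(cov.rand_cov_point())
    iterations += 1
print(iterations)  # exactly bins * goal = 16 iterations, none wasted
```

A uniform random generator would need noticeably more iterations on average to hit every bin twice; selecting from the holes makes the iteration count deterministic.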
Developing FPGAs which work reliably in flight requires considerable thought. Underlining this challenge, 84% of commercial FPGA designs make it to production with a non-trivial error (source: the Mentor Graphics / Wilson Research Group survey).
To reduce the risk of a fault making it to flight, or being found very late in the programme, a number of rules must be followed: best coding practices, coding practices for safe synthesis, and design review checklists. This is before we even get to safely analysing the clock domain crossings that are prevalent in modern FPGA designs.
Failure to follow these simple rules can lead to failures in orbit, like the NASA Wide Field Infrared Explorer, which failed due to a logic design error and a reliance on default values during power-up.
Static analysis can help address these issues, save considerable time later in the implementation flow, and improve the quality of code which enters simulation and synthesis. With static analysis we do not need to ask the right questions, as we do in simulation; we only need to agree on the applicable coding rules.
This presentation will demonstrate how a static analysis tool has been used to create a rule set which has then been applied to the eight high-priority IP cores in the ESA IP Catalog. It will present preliminary results on the decisions behind the implemented rule set and tool flow, along with results from the first tranche of IP verification and suggested improvements which could be made.
During the design of an FPGA project, several graphical representations of the subsequently coded implementation of the FPGA function can be used.
The usage of hardware description languages for the implementation also enables design approaches and representations that are used for software designs as a basis for the FPGA implementation, such as the well-defined Unified Modeling Language (UML).
To improve the design process at our institute and to support tool-based code generation, a UML 2.0 profile for VHDL was adapted to define a detailed design model of the VHDL implementation.
The profile defines the structural view of the implementation with UML class diagrams and the behavior of the units with state diagrams.
These diagrams represent the structural and behavioral implementation of the VHDL code in the design phase. A set of defined transitions of the model elements into the VHDL code allows an automatic generation of the VHDL files from the model.
The presentation will give a short overview of the UML profile used and the transition rules. A short presentation of realized projects in the frame of developed scientific instruments, and the lessons learned from this design guideline, concludes the presentation.
In this presentation we will review the features of Microchip’s RTG4 radiation tolerant FPGAs, and provide an update on radiation effects and qualification status. We will also review component screening options for high-volume, low-cost satellite constellations, where non-traditional trade-offs between cost and screening are available. Lastly, we will look forward to the next generation of radiation tolerant FPGAs, with special focus on how the features, performance, radiation characteristics and low power consumption can save system cost in space applications.
Industry standards including DO-254, IEC 61508 and ISO 26262 define functional safety and error mitigation strategies for the creation and validation of high-reliability systems. The Synplify Premier tool automates industry methods for mitigating soft errors such as single-event upsets (SEUs) that are increasingly present in the latest FPGA process geometries. Synplify Premier provides essential strategies to automate SEU immunity and create safe designs that operate with high reliability in radiation-rich environments. In particular, it provides direct support for SEU error detection and recovery schemes for FPGA designs, and offers automated support for the creation of SEU error monitors, enabling software-based error mitigation schemes for the control, monitoring, recovery and diagnostics of system errors caused by SEUs. It also enables the user to validate the implementation of error mitigation strategies with functional tests in hardware, through dynamic fault injection capabilities.
SRAM-based FPGAs have permeated the space sector for multiple data processing applications. FPGA vendors provide this market with devices with different levels of radiation tolerance. While devices must be chosen with enough manufacturer-provided fault tolerance for each specific mission, occasionally users may want to add extra application-specific fault tolerance mechanisms, in order to increase tolerance to soft errors.
For that purpose, fault injection is an established technique that allows emulating soft errors, predicting the Architecture Vulnerability Factor of designs and modules, and also identifying the main contributors to output error rates, which can later be hardened by the user, for example using TMR or other error detection and correction mechanisms.
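The basic statistical fault injection loop behind an AVF estimate can be sketched in a few lines of Python. The "design" below is a deliberately trivial stand-in for a real netlist (only half of its state bits influence the output, so the expected AVF is about 0.5); the figures are purely illustrative:

```python
import random

def toy_design(state_bits):
    # Stand-in design: only the low 8 of 16 state bits reach the
    # output; the upper 8 are architecturally dead.
    return state_bits & 0xFF

def estimate_avf(n_injections=10000, seed=0):
    """Estimate the Architecture Vulnerability Factor by single-bit
    fault injection: the fraction of injected upsets that corrupt
    the output relative to a golden (fault-free) run."""
    rng = random.Random(seed)
    golden_state = 0x1234
    golden_out = toy_design(golden_state)
    errors = 0
    for _ in range(n_injections):
        faulty = golden_state ^ (1 << rng.randrange(16))  # one SEU
        if toy_design(faulty) != golden_out:
            errors += 1
    return errors / n_injections

print(estimate_avf())  # close to 0.5 for this toy design
```

In a real campaign the injection targets would be configuration or user flip-flops of the emulated netlist, and the per-module error counts would point to the main contributors worth hardening with TMR or EDAC.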
The FT-Unshades2 platform is a flexible environment that can be used to assess both digital and analog designs, as it includes an FPGA-based emulator for digital designs and an analog simulation tool for analog circuits. The most recent updates to the platform will be described, such as the addition of new fault classifiers, the generation of faulty outputs in VCD format, and improvements to analog simulation speed.
Additionally, a new fault injection platform, FTU-VEGAS, has been developed for the NanoXplore NG-MEDIUM FPGA. The platform allows both fault injection and radiation testing of the device to be performed, with the intention of bridging the gap between the two worlds. Users may employ the platform when extra fault tolerance is needed for specific applications.
Finally, the triple_logic package, a permissively-licensed VHDL package for automatic hardening of user-specified signals and registers, will be described. The package allows users to harden the most critical parts of their design, after identifying them through fault injection experiments.
Nowadays, SRAM-based FPGAs are becoming a common choice for space applications due to their main features: reconfigurability, low cost and high performance. However, SRAM-based FPGAs are also sensitive to several effects caused by ionizing radiation, which lead to misbehavior of these devices. Therefore, a pre-deployment analysis and mitigation of the effects caused by radiation-induced faults on SRAM-based FPGAs used in space mission applications is mandatory.
VeriPy is a Python framework interfaced with the Vivado Design Suite by Xilinx. It provides the tools, the means and the know-how for the analysis and study of synthesized and implemented netlists for Xilinx FPGAs, focusing mainly on SEU and MBU effects, reliability analysis, error propagation and the traversal of complex netlists.
A set of CAD tools to elaborate and analyze the netlist generated by the NXmap tool is presented. In this presentation we will focus on the development of two tools: a static analyzer of the post-layout netlist generated by NXmap, oriented towards the analysis of soft errors in the NG-MEDIUM FPGA, and PySETA, a tool for the analysis and propagation of Single Event Transients in the circuit architecture of NanoXplore FPGAs. The results will be compared with NanoXmap simulators and fault injection. We will demonstrate the possible application to the verification of robustness and mitigation techniques applied to a set of test-bench circuits implemented on the NG-MEDIUM device. Experimental results obtained by simulation will be commented on and discussed in detail.
This talk will describe new extensions for the ESA LEON2FT IP core. The first part will present four new technology mapping targets that were added to the package: Xilinx Spartan 6, Virtex 7, MicroSemi PolarFire and NanoXplore NG-Medium. The second part will outline new arithmetic extensions for LEON2FT: a new modular FPU that supports arbitrary precision and packed formats, and a new integer ALU that implements sub-32-bit integer operations.
For critical space applications, single event upset (SEU) data are gathered to determine whether the mission's survivability requirements are satisfied. When performing failure analysis on a mission's FPGA applications, it is common practice to use simple test structures that focus on the FPGA's discrete internal components. It is also common practice that SEU parameters obtained from such simple FPGA test structures (generic data) are extrapolated to fit tactical applications and used by survivability tools to predict mission success. Unfortunately, the fidelity of extrapolating generic SEU data to mission-tactical applications is problematic. Alternatively, to improve the fidelity of data characterization, it is better practice to use representative tactical designs as SEU test structures. However, this process requires SEU testing for every FPGA design, and can be impractical.
This presentation addresses how to obtain (and use) proper SEU data for mission-critical tactical application survivability analysis, while reducing the need to perform SEU testing on every design.
The reliability of aerospace and avionics ICs depends on their ability to withstand the effects of radiation that naturally occurs in space and the Earth's atmosphere. Radiation can cause Single Event Upsets (SEUs) and other types of faulty behavior in design elements, which are transient by nature (recovery is possible). ICs are also susceptible to silicon aging effects, which may result in permanent (non-recoverable) faults.
Measuring the effects of faults and their coverage through fault simulation is an area of Functional Safety that has evolved rapidly in recent years, and it could be leveraged for the aerospace chip design industry as well, both for soft IPs and for full FPGA designs. Methodologies and tools to measure the safeness of potential faults, and the fault coverage of added safety mechanisms (such as TMR, ECC, DCLS etc.), have been implemented by various vendors. The challenge, though, is to enable an efficient and cost-effective fault campaign to measure safeness and fault coverage on a modern, large-scale design.
The presentation will demonstrate best-in-class tools and methodologies, using unified fault classification language, to manage the complex task of a fault campaign.
Modern programmable SoCs, such as the Xilinx Zynq-7000 APSoC FPGAs, provide an attractive COTS platform for building high-performance, miniaturized systems in space avionics. However, since SRAM FPGAs are vulnerable to radiation-induced effects, fault tolerance techniques must be developed to support their proliferation in critical applications. SEE mitigation approaches usually combine redundancy techniques (e.g. TMR) with memory scrubbing to correct upsets in the configuration memory.
We present a configuration memory scrubbing approach which is based on a two-dimensional (2D) Error Detection and Correction (EDC) coding scheme combining: a) the Xilinx embedded (internal) frame-level ECC code (vertical direction) and b) an (external) interframe, interleaved parity code (horizontal direction). The internal ECC detects all single-bit upsets (SBUs) and the vast majority of multiple-bit upsets (MBUs) per frame, but error correction is only guaranteed for the SBUs. The internal ECC mechanism, based on the built-in Xilinx 7-series Readback CRC, achieves fast error detection without extra cost. On the other hand, the 2D coding scheme guarantees the detection and correction of all SBUs and the vast majority of MBUs. The proposed scheme eliminates the need for storing the golden bitstream externally; only the parity bits need to be stored in a rad-hard memory.
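As a loose illustration of the two-dimensional idea (a toy model, not the actual Xilinx frame ECC or the project's parity clustering), a per-frame check can locate the faulty frame while a cross-frame parity word locates the bit position, so a single-bit upset is corrected without any stored golden bitstream:

```python
# Toy 2D EDC model: frames are bit lists; the "vertical" code is a
# per-frame parity checksum (standing in for the frame ECC) and the
# "horizontal" code is a parity bit per column, XORed across frames.
def frame_parity(frames):
    p = [0] * len(frames[0])
    for f in frames:
        p = [a ^ b for a, b in zip(p, f)]
    return p

def scrub(frames, frame_checksums, parity):
    # Vertical detection: find the frame whose checksum mismatches.
    for i, f in enumerate(frames):
        if sum(f) % 2 != frame_checksums[i]:     # toy per-frame check
            # Horizontal correction: the column whose recomputed
            # parity disagrees with the stored parity holds the upset.
            for col, (a, b) in enumerate(zip(frame_parity(frames), parity)):
                if a != b:
                    frames[i][col] ^= 1           # correct in place
            return i                              # index of fixed frame
    return None                                   # no error found

frames = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
checksums = [sum(f) % 2 for f in frames]
parity = frame_parity(frames)                     # stored in rad-hard memory
frames[1][2] ^= 1                                 # inject a single-event upset
fixed = scrub(frames, checksums, parity)
print(fixed, frames[1])  # frame 1 restored to [0, 1, 1, 0]
```

Note how only the small checksum and parity words need external storage; the frames themselves are never duplicated, which is the storage advantage claimed above.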
We executed a radiation experiment in collaboration with ESA-ESTEC in the CERN SPS North Area in November 2018, using ultra-high-energy heavy ions to evaluate our approach. The test was performed at an energy of 150A GeV/c for different LETs (8.8 and 12.45 MeVcm^2/mg), for a total effective fluence of more than 10^6 ions/cm^2. The outcomes of the radiation experiment were twofold: first, the proposed scrubbing scheme achieved 100% error correction coverage of the single and multiple upsets observed; second, the offline analysis of the results produced useful inferences about the topology of the multiple cell upsets (MCUs), which guided us in fine-tuning the 2D ECC algorithm. The configuration frames of the Zynq-7000 FPGA are divided into parity clusters in order to enable the correction of MCUs spanning adjacent frames, as observed in the experiment, and to reduce the storage requirements of the parity data.
We have implemented the proposed approach using an external board in the role of external scrubber (for prototyping purposes we use a Zybo board, but we plan to migrate to a radiation-tolerant microcontroller). The external scrubber communicates with the on-chip logic (ECC mechanism) through the JTAG port. The ARM SoC runs the error correction algorithm and various configuration memory access functions provided in a software library, a flash memory stores the parity data, and the FPGA fabric implements the low-level JTAG functions. This solution combines the hardware speed and popularity of the JTAG interface with the software versatility provided by the embedded processor.
Moreover, we aim at implementing the proposed approach as a full-hardware solution by integrating on-chip (in the reconfigurable logic) the 2D error correction algorithm. This solution will reduce the error correction latency improving the system availability and provide a self-healing system eliminating the need for an external controller.
To develop new frequency bands for broadband satellite communications and to meet the constantly growing demand for higher data rates, the EIVE project proposes the world's first in-orbit verification of a communication link in E-Band on a CubeSat system. A data downlink is planned in the frequency range of 71-76 GHz, allowing 5 GHz of radio frequency data bandwidth, from a nanosatellite to a ground station. The main purpose of the mission is to evaluate the influence of atmospheric effects on different modulation formats, and also the transmission of uncompressed, high-resolution 4K video data via the E-Band link, for future Earth observation services or inter-satellite links. A reconfigurable on-board payload computer based on Field-Programmable Gate Array (FPGA) technology provides the needed flexibility and enables adaptable modulation formats with respective data rates, as well as switching/routing of the digital processing chain. The digital signal processing for the E-Band link is performed entirely by the payload FPGA, independently of the on-board computer, with which it communicates only by means of a Universal Asynchronous Receiver-Transmitter (UART) protocol. An Arbitrary Waveform Generator (AWG) is implemented on the payload FPGA; the waveform patterns are integrated in a memory block and played back cyclically via a high-speed interface. The baseband signal has up to 3 GHz bandwidth, and the digital-to-analog converter (DAC) has a resolution of 8 bit and a sampling rate of 12 GSa/s.
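For orientation, the raw sample data rate into the DAC and the first-Nyquist limit follow directly from the figures quoted above (8-bit resolution at 12 GSa/s); no other parameters are assumed:

```python
# Back-of-envelope figures for the AWG/DAC chain described above.
sample_rate_gsps = 12   # DAC sampling rate, GSa/s (from the abstract)
resolution_bits = 8     # DAC resolution, bits (from the abstract)

# Raw sample data rate the playback interface must sustain.
raw_rate_gbps = sample_rate_gsps * resolution_bits

# First-Nyquist bandwidth limit of the converter.
nyquist_ghz = sample_rate_gsps / 2

print(raw_rate_gbps)  # 96 Gbit/s into the DAC
print(nyquist_ghz)    # 6 GHz, comfortably above the 3 GHz baseband signal
```

This makes clear why a high-speed serial playback interface from FPGA memory is needed: 96 Gbit/s of sample data far exceeds what a parallel low-speed bus could carry.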
A trade-off analysis between two FPGAs is also shown, and the suitable type is chosen according to the application requirements. A DAC is chosen in harmony with the selected FPGA. The presentation introduces the system design, the requirements engineering, the design flow, the integration of the different interfaces and the planned qualification tests for the payload computer of the extremely high data rate satellite link. The EIVE satellite is scheduled to fly in 2021.
OPS-SAT is using Intel's Cyclone V SoC in its experiment processing platform. Since the mission is entirely about experimentation, this SoC will be used for different purposes. Experiments range from FPGA-based machine learning algorithms to Linux-based communication applications. The mission needs to safely reconfigure the SoC in space according to the experiment requirements. The extent of reconfiguration differs with each experiment: some experiments require only the FPGA configuration to be updated, while others need changes in the operating system. This presentation will give an overview of the boot flow of the Cyclone V and explain how we are making use of the flexibility it offers.
At the beginning of this year, CNES conducted a one-month project with three students from the ENSEEIHT school. The purpose was to gather feedback about continuous integration for FPGA development. The use case was a RISC-V optimized processor targeting the NanoXplore NG-MEDIUM FPGA. The tools used were: Git, GitLab, GitLab CI, Doxygen, SonarQube RuleChecker, cocotb, NXmap and GHDL. The presentation provides a description of, and feedback on, this use case.
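A pipeline of this kind might be sketched as a `.gitlab-ci.yml` along the following lines; the stage names, file paths and tool invocations are assumptions for illustration, not the actual CNES setup:

```yaml
# Illustrative GitLab CI fragment for a VHDL flow using the tools
# listed above. Paths and script names are placeholders.
stages:
  - lint
  - sim
  - build

lint:
  stage: lint
  script:
    - ghdl -a --std=08 src/*.vhd        # GHDL analysis as a quick syntax/semantic gate

sim:
  stage: sim
  script:
    - make -C tb sim                    # cocotb regression driven by a Makefile

build:
  stage: build
  script:
    - python3 nxmap_build.py            # placeholder for the scripted NXmap flow
```

Splitting analysis, simulation and place-and-route into separate stages lets fast checks fail early before the long NXmap build runs.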
Verification is critical for an acceptable FPGA quality. Unfortunately, achieving a good FPGA quality is often very time consuming. However, with a good testbench architecture the workload can be reduced significantly. UVVM provides the best VHDL testbench architecture possible and also allows a unique reuse structure. Entry-level UVVM is dead simple even for beginners, and for more advanced verification, the standardised Verification Components, the high-level SW-like commands and all the other features allow even really complex verification scenarios to be handled in a structured and understandable way. UVVM is open source and provides a testbench kick-start with open source BFMs and verification components for UART, SPI, AXI-lite, AXI stream, Avalon MM, Avalon stream, I2C, GPIO, SBI, GMII and Ethernet.
UVVM has been significantly updated through ESA’s UVVM extension project. We have previously released the Scoreboard, and now lots of other new functionality has also been added. The most important of these are activity watchdog, Error injection, Monitor, Hierarchical VVCs and Specification Coverage.
This presentation will give you a fast introduction to the UVVM Utility Library, BFMs and VVCs, and then go through the new features and explain how they will help you make a better testbench, and develop it much faster.
High-Level Synthesis (HLS) methodologies have been proposed for around 15 years as a promising design approach. Nevertheless, software engineers are still not able to get the maximum benefit from HLS, due to the knowledge required about both parallelism and the specific FPGA hardware architecture. This presentation will explore the common design challenges engineers face when using HLS and how SLX for FPGA helps them overcome these challenges.
Some of these challenges include applications that make extensive use of non-synthesizable and hardware-unfriendly code, identifying parallelism, and knowing when and where to insert pragmas.
SLX for FPGA is a programming tool that analyzes C/C++ code to provide a deep understanding of software interdependencies and parallelization opportunities, and to enable automatic design optimization and pragma insertion.
With the need for rapid deployment of flexible payloads and the capability to process complex data in space, the requirements for cutting-edge FPGAs that can address this need are growing quickly. SRAM-based FPGAs are increasingly preferred because they enable these new applications: improved bandwidth, an increasing number of channels in high-performance data handling, high gate density, embedded high-speed gigabit serial links, high performance and the capability to change algorithms in space.
In this session, Xilinx will cover its latest 20nm Adaptable Kintex FPGAs targeted at these applications. The latest on qualification completion, screening flows, Vivado software availability, radiation characteristics and ecosystem solutions such as space-grade IP and memories will be presented. Complex and evolving machine learning algorithms can be serviced by Xilinx solutions without swapping out existing hardware. Use cases for modern satellite constellations with Xilinx FPGAs on board will be shared.
The high-performance and reconfiguration capabilities offered by the new European FPGA technology can be leveraged in planetary exploration scenarios, where rover autonomy relies on multiple diverse and computationally intensive algorithms for Computer Vision. A number of HW/SW pipelines for rover localization and mapping were developed on Xilinx technology during past ESA activities (SPARTAN/SEXTANT/COMPASS). These pipelines include classical feature extraction, such as Harris corner detection, Canny edge detection, SIFT description, SURF detection & description, FAST and BRIEF; feature matching via L1/L2/chi2/Hamming distances; and depth extraction from stereo images via disparity-based and SpaceSweep algorithms. With a twofold purpose, these HW/SW pipelines are now being ported to and optimized on the NG-MEDIUM and NG-LARGE FPGAs under new ESA activities (QUEENS1/QUEENS2 and NXARTAN). First, we perform a methodical assessment of the new European FPGA by using these VHDL kernels as high-performance benchmarks and testing all possible options/capabilities of the new SW & HW tools. Second, we optimize and deliver accelerators based on European technology, which offer considerable gains over the conventional LEON-FT approach. The current talk summarizes the results of the aforementioned activities with respect to the assessment of NanoXmap and its progress through successive SW versions, as well as the performance of the CV pipelines implemented on NG-LARGE.
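As a small illustration of one of the matching metrics named above, the C sketch below computes the Hamming distance between two 256-bit binary descriptors (BRIEF-style). On an FPGA the popcount maps to a small tree of adders; here it is written as a portable loop, and the 4x64-bit layout is an assumption for the example:

```c
#include <stdint.h>

/* Hamming distance between two 256-bit binary descriptors, stored as four
 * 64-bit words. Used to match BRIEF-like features: smaller distance means
 * more similar descriptors. */
unsigned hamming256(const uint64_t a[4], const uint64_t b[4])
{
    unsigned d = 0;
    for (int w = 0; w < 4; w++) {
        uint64_t x = a[w] ^ b[w];        /* differing bits */
        while (x) { x &= x - 1; d++; }   /* Kernighan popcount */
    }
    return d;
}
```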
Vision-based navigation systems make use of image processing algorithms which are very computationally demanding in terms of memory and processing load.
In the space domain, the development of robust and efficient vision-based navigation systems is key to implementing the autonomous navigation used in space exploration, active debris removal and rover vehicles. However, the tight constraints of the space domain compared with other sectors make these developments a great engineering challenge. Space devices have to meet hard constraints in terms of radiation hardness, environmental conditions, power consumption and FDIR (Fault Detection, Isolation and Recovery). As a result, space devices offer lower performance than COTS (Commercial Off-The-Shelf) devices. Considering that an OBC (On-Board Computer) must also perform many other tasks under tight timing constraints, it is very difficult to perform complex image processing at the same time. For that reason, accelerator devices are used to offload the most computationally demanding parts from the OBC, such as image processing or low-level sensor management and data pre-processing. At GMV we are designing HW accelerators based on rad-hard SRAM-based FPGAs (Field Programmable Gate Arrays).
The main problem in this type of architecture is how the microprocessor and the accelerator devices communicate with each other (physical interfaces, network and transfer protocols). Different architectures can be defined depending on the services and capabilities implemented in the accelerator devices, from simple wire connections without communication protocols (raw data transmission) to space-qualified interfaces and communication protocols. In this way, GMV has developed a scalable architecture widely used on IPBs (Image Processing Boards) in different ESA projects such as CAMPHORVNAV (Vision-based Navigation Camera Engineering Model for Phobos Sample Return), MSR (Mars Sample Return), HERA and NXARTAN (Localization and Mapping adaptation for BRAVE devices).
This architecture provides the capability to have a network of devices running different accelerator algorithms in parallel and independently, keeping the OBC in control of each accelerator and device at all times and adding HW reconfiguration capabilities. For this purpose, SpaceWire is used to communicate with and control the device network, while PUS (Packet Utilization Standard) over the CCSDS Space Packet protocol is used to identify and manage the different accelerators and the devices themselves. This architecture has already been validated with two FPGAs with separate functionalities: one acting as master, managing interfaces, modes and self-tests, while the other FPGA is fully devoted to the computer-vision accelerators. GMV is currently designing the second version of these devices using only European rad-hard SRAM FPGAs, the NG-MEDIUM combined with the NG-LARGE.
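As a concrete sketch of the packet-level addressing described above, the C helper below packs a CCSDS Space Packet primary header, with each accelerator identified by its APID. The field layout follows the CCSDS Space Packet standard; the helper name and the example values are illustrative assumptions, not GMV's actual implementation:

```c
#include <stdint.h>

/* Pack a 6-byte CCSDS Space Packet primary header (big-endian on the wire).
 * version=0, type=0 (telemetry), secondary header flag set, sequence flags
 * = unsegmented. The length field carries (data length in bytes) - 1. */
void ccsds_primary_header(uint8_t h[6], uint16_t apid, uint16_t seq,
                          uint16_t data_len_bytes)
{
    uint16_t w0 = (0u << 13) | (0u << 12) | (1u << 11) | (apid & 0x7FFu);
    uint16_t w1 = (3u << 14) | (seq & 0x3FFFu);    /* unsegmented packet */
    uint16_t w2 = (uint16_t)(data_len_bytes - 1);  /* length field is N-1 */
    h[0] = w0 >> 8; h[1] = w0 & 0xFF;
    h[2] = w1 >> 8; h[3] = w1 & 0xFF;
    h[4] = w2 >> 8; h[5] = w2 & 0xFF;
}
```

Routing then reduces to inspecting the 11-bit APID of each incoming packet and dispatching it to the matching accelerator.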
Forward Error Correction (FEC) is a mission critical onboard data processing task, providing continuous and reliable data transfers to ground stations even at low Signal-to-Noise Ratio (SNR) regimes.
The Consultative Committee for Space Data Systems (CCSDS) has standardized a number of protograph-based Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes for deep-space (AR4JA) and near-earth (C2) channel coding, as alternatives to concatenated convolutional and Reed-Solomon codes. The recommended LDPC codes (AR4JA and C2) outperform their predecessors in every aspect, including power, spectral efficiency and bit error rate (BER) performance, especially in the error-floor region.
Onboard implementation of CCSDS LDPC FEC imposes strict requirements on Size, Weight, Power and Cost (SWaP-C). Moreover, very high data-rate performance is required to enable seamless integration within a high-speed onboard data processing chain, to support Gigabit-rate payloads (e.g. synthetic aperture radar and hyper-spectral optical instruments) and to leverage next-generation very high-speed serial link and network technology (i.e. SpaceFibre).
A novel architecture has been developed for high data-rate efficient hardware implementation of the CCSDS LDPC FEC which leverages the inherent parallelism of the CCSDS code structure, by concurrently processing multiple bits, according to an optimized scheduling.
We demonstrate the implementation of CCSDS LDPC FEC for deep-space and near-earth communications in the form of onboard hardware accelerator IP Cores targeting different space-grade FPGA technologies. The FPGA technology agnostic CCSDS LDPC FEC encoders achieve state-of-the-art throughput performance, ranging in the area of multiple Gbps. At the same time, resource utilization is kept at a minimum. A detailed throughput performance assessment will be presented along with implementation comparison results for different space-grade FPGA technologies.
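To illustrate the kind of parallelism such architectures exploit, the toy C sketch below evaluates all check equations of a small parity-check matrix: each row is independent and maps to a concurrent XOR tree in hardware. The 3x6 matrix is a made-up example for demonstration only, not the CCSDS AR4JA/C2 codes:

```c
#include <stdint.h>

/* Toy 3x6 parity-check matrix H: row i defines check equation i. */
static const uint8_t H[3][6] = {
    {1,1,0,1,0,0},
    {0,1,1,0,1,0},
    {1,0,1,0,0,1},
};

/* Return the syndrome: bit i is set if check i fails (odd parity).
 * The outer loop iterations are independent, so in an FPGA all checks
 * (and, per check, all XORs) can be evaluated in the same clock cycle. */
unsigned syndrome(const uint8_t c[6])
{
    unsigned s = 0;
    for (int i = 0; i < 3; i++) {
        unsigned p = 0;
        for (int j = 0; j < 6; j++)
            p ^= (unsigned)(H[i][j] & c[j]);
        s |= p << i;
    }
    return s;
}
```

A zero syndrome means the word satisfies all checks; a non-zero syndrome drives the iterative message-passing correction.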
The Wide-Field-Imager (WFI) instrument onboard ATHENA will be based on X-ray detectors that are read out by frame processor modules. The core component of these modules is a Microchip RTG4 FPGA. For the last four years, the WFI team has been developing implementations that allow real-time processing of data streams. Further, we have investigated the use of softcores that can be implemented as processing units. We will present the current RTG4 design and our experiences with the design flow.
The Photospheric Magnetic Field Imager (PMI) will be one of the payload instruments onboard the Lagrange mission. It will provide magnetograms and tachograms of the solar photospheric plasma as valuable input for space weather diagnostics. The Polarimetric and Helioseismic Imager onboard Solar Orbiter (SO/PHI) is the major heritage instrument for PMI. Its DPU, however, does not provide the necessary processing power to cope with the operational real-time requirements of PMI. Thus, for PMI, the development of an updated DPU is necessary, based on more powerful FPGAs and an architecture of enhanced efficiency.
The SO/PHI DPU hardware architecture uses a LEON3-FT (GR712) core for control tasks, in combination with an RH OTP FPGA that acts as system supervisor. The advanced data processing is done within two radiation-tolerant, in-flight reconfigurable Xilinx Virtex-4 (XQR4VSX55) FPGAs, which deliver high-performance computing for extremely complex algorithms. The aim is to optimize the existing SO/PHI design approach for PMI by reusing as much as possible. The most important drawback, however, is that the Virtex-4 has been discontinued and we cannot use it any more.
The current DPU proposal (one of the two options in a pre-development study) is more conservative and uses only one Virtex-5 as a baseline. This device is already being used in several missions and its availability is immediate. Compared to SO/PHI, it reduces the number of DPU building blocks to two, the system controller and one reconfigurable FPGA, at the expense of optimizing the number of reconfigurations. This proposal tries to improve on the time-space partitioning scheme used for SO/PHI, based on the lessons learned about on-board data processing.
On the other hand, the Kintex UltraScale FPGA is the other candidate to consider. In this case, the FPGA resources are far more numerous and powerful, but it is not a radiation-hardened-by-design device. Hence, a thorough study of the design balance between protection against radiation-induced errors and computing performance is needed.
We are also considering moving from the GR712 to the new GR740, where we would use 4 LEON cores. This represents a clear advantage, since we can move some tasks from firmware to software if needed, provided this is feasible from a timing-requirement point of view.
Regarding the bus connection between FPGA and processor, we are considering keeping the Memory-Mapped I/O, but we are also evaluating the possibility of using PCI, which looks more suitable. One of the major challenges of our design is that neither the GR712 nor the GR740 has specific capabilities for programming the FPGA directly. We are contemplating the use of a multi-booting scheme based on SPI memories, and we are even experimenting with bit-banging the JTAG from the LEON's GPIO.
In summary, we present how the next generation of Xilinx devices can provide great computing capabilities which can simplify the PMI DPU design without compromising its performance. We point out some design challenges (buses, re-programmability, and scheduling of the time-space partitioning) and the necessity of studying the trade-off between computing performance and error protection.
This year, the NX presentation will address two major topics:
1. The NXmap-v3 features.
2. An update on the NX product portfolio.
NXmap-v3 is the brand-new NXmap tool suite, developed
1. to support High-End FPGA devices,
2. and to introduce many additional features.
The NX presentation will describe all new features compared with the previous NXmap v2.9.7, released in February 2020.
The second part of the NX presentation will focus on the updated product portfolio, especially:
1. The NG-LARGE, which has been validated, including the 6.25 Gbps HSSL and the embedded ARM Cortex-R5. We will present the industrialization and qualification planning in order to highlight availability for each package and QA level.
2. An update on the introduction of the first Rad-Hard Low-Power High-End FPGA SoC: the NG-ULTRA with the embedded DAHLIA SoC. We will finish by highlighting the second member of the Ultra family, the Ultra-150.
Finally, we will conclude and open the Questions & Answers session.
Reliability in digital circuits that operate in radiation-prone environments is achieved at a significant increase in cost. The classical generic solution, based on triple modular redundancy, triples the cost.
In this work, we present a novel perspective on enhancing the fault tolerance of digital circuits by employing iterative structures in the processing data-paths. We start by presenting a reliability analysis for LDPC decoders. These forward-error-correction circuits implement belief-propagation algorithms, with messages being passed between processing nodes in multiple iterations. We show that these iterative decoders can tolerate error rates of up to 10^-5 with negligible decrease in error-correction capability.
The second part of the presentation discusses a novel approach for achieving fault tolerance in processing data-path pipelines, based on control-engineering feedback loops. For this purpose, we rely on a correction controller that computes and applies correction factors to a sub-set of designated registers of the processing pipeline. In order to design the correction controller, and to select the subset of registers subject to correction, we model the faulty processing data-path as a process with perturbations, for which we develop a dynamic model. Based on this model, we determine the output states, represented by registers that are compared to a reference, and the corrected states, which represent the registers to which the correction factors are added. The correction process is performed in several iterations, by rewinding the data-path computation in blocks that have non-zero correction factors. A key issue is that the controller needs a reference; the workaround is to employ two controlled data-paths in reaction.
The Controller Area Network (CAN) bus protocol defines a fault-tolerant multi-master serial protocol. Originally it was intended for use in vehicles, but the robustness of the CAN protocol has made it a popular choice in a wide range of control applications, including the radiation environments seen in space applications or high-energy physics experiments. Electronics designed for radiation environments often feature an FPGA and employ Triple Modular Redundancy (TMR) to achieve radiation tolerance. There are several varieties of TMR, and the best choice is highly technology-dependent. We present a new open-source CAN controller written in VHDL for FPGAs. The controller is highly configurable, but when used in a radiation environment, it features TMR techniques specifically tailored for Xilinx FPGAs.
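As a minimal illustration of the voting at the heart of TMR (a sketch of the principle, not code from the controller itself, which is in VHDL), the C function below implements a bitwise majority vote over three register replicas:

```c
#include <stdint.h>

/* Bitwise majority voter: each output bit takes the value held by at least
 * two of the three replicas, so a single upset replica is outvoted. In VHDL
 * this is the same expression per triplicated register. */
uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (b & c) | (a & c);
}
```

The technology-dependence mentioned above lies in where the voters are placed (per flip-flop, per module, on clocks and resets) and how the synthesis tool is prevented from optimizing the redundancy away.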
In the past decade, the field of Machine Learning has witnessed dramatic breakthroughs in the state of the art for tasks such as image classification and object detection, aided by advancements in algorithms, training data and computing architectures. To date, most results have been demonstrated for terrestrial applications, but there is significant demand for solutions that can scale these capabilities into the Space environment, where on-board Machine Learning combined with high performance sensor packages could offer a dramatic reduction in decision latency. In this session, we will discuss Xilinx hardware and software solutions suitable for enabling high performance, low-latency, SWaP-optimized machine learning acceleration for Space applications; as well as the associated design requirements. Solution types covered include the Xilinx Deep Learning Processor Unit, as well as third-party and fabric-based options.
Artificial intelligence (Deep Learning) is everywhere. Home appliances, automotive, entertainment systems, you name it, they are all packing AI capabilities. The space industry is no exception. Automated recognition of spacecraft and space junk using imaging plays an important role in securing space safety and space exploration. Although Deep Learning is the most successful solution for image-based object classification, for most real applications it also requires performant platforms like FPGAs and SoCs.
Designing Deep Learning networks for embedded devices like FPGAs and SoCs is challenging because of resource constraints, complexity of programming in Verilog or VHDL, and the hardware expertise needed for prototyping on an FPGA or SoC.
Learn how to prototype and deploy Deep Learning-based vision applications on FPGAs and SoCs using MATLAB. Starting with a pretrained model, either trained in MATLAB or in any framework of your choice, we demonstrate a workflow for deploying the trained network for image recognition from MATLAB to the Xilinx UltraScale+ MPSoC platform for inference, using APIs from MATLAB.
Deep Learning algorithm engineers can quickly explore different networks and their performance on an FPGA or SoC directly from MATLAB. The workflow also enables hardware engineers to optimize and generate portable Verilog and VHDL Code that can be integrated with the rest of their application.
In recent years, Machine Learning has risen in popularity, and the space community has also started to consider AI-based algorithms as a promising solution for tasks such as spacecraft navigation and on-board image processing. The ESA project CloudScout, flying on the PhiSat-1/FSSCAT mission on board a 6U CubeSat, aims to show the feasibility of AI-based algorithms performing computer-vision tasks in orbit. In particular, CloudScout is able to discard Earth-surface images covered by clouds in order to maximize the quality of downloaded data, using a Convolutional Neural Network (CNN) algorithm realized by the University of Pisa. On PhiSat-1, the algorithm will run on a COTS hardware accelerator, the Myriad-2 by Intel Movidius. Since the Myriad-2 requires an Operating System (OS) and the Eye-of-Things board carrying the chip offers a limited set of high-speed communication interfaces (USB and CIF), we decided to implement the cloud-detection algorithm on a Xilinx XCKU60 FPGA. This solution does not require an OS and can support any high-speed protocol. The proposed hardware architecture is able to perform an inference with the same performance as the Myriad-2 (300 ms), requiring 25% of the LUTs, 2% of the FFs and 53% of the RAM blocks for storing weights and data to be processed.
Since the realized hardware accelerator does not fit on space-grade FPGAs (i.e. Microsemi RTG4 and NanoXplore Brave Large) we decided to work on the software model to reduce its complexity, and on the hardware architecture in order to produce a simpler and reusable design.
The algorithm was slightly modified to reduce the number of computations required, by removing max-pooling and performing sub-sampling during the convolutions through an increased stride parameter. The hardware design focused on optimizing the usage of the memory resources available on-board the FPGA, which represented the bottleneck of the previous design, since CNNs require a large amount of data to perform simple computations.
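The stride trick can be sketched in C as follows (a 1-D toy with illustrative sizes and weights, not the flight CNN): a stride-2 convolution produces half the outputs of the stride-1 version, so the separate pooling stage and its intermediate buffer disappear.

```c
/* 1-D convolution with a 3-tap kernel and configurable stride.
 * stride=1 mimics the original layer; stride=2 performs the convolution
 * and the 2x sub-sampling in a single pass, halving multiplies and
 * intermediate storage. Returns the number of outputs written to y. */
int conv1d_strided(const int *x, int n, const int k[3],
                   int stride, int *y)
{
    int m = 0;
    for (int i = 0; i + 3 <= n; i += stride)
        y[m++] = k[0]*x[i] + k[1]*x[i+1] + k[2]*x[i+2];
    return m;
}
```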
The result of the preliminary analysis shows that thanks to both software and hardware optimization, it would be feasible to implement the cloud-detection algorithm on target devices taking into account both hardware resources and inference time constraints.
The understanding of the power of AI is increasing with its usage and ongoing research. Convolutional Neural Networks excel at object recognition, object detection and image segmentation; LSTMs and transformers lead the way in sequence analysis, including translation and search engine tasks. Loosely based on the micro-level architecture of the brain, these networks can – like the brain – be trained for specific, usually very bespoke, applications. The process of utilising this trained network in an application is called inference.
Taking the relatively simple case of object detection, training - carried out on chips such as the Nervana Neural Network Processor - requires a dataset of many thousands or millions of labelled images; the more, the better. Training is a cycle of inference, comparison of the output of the network with the expected label, and backpropagation of the error using calculus. There are many pretrained networks, such as ResNet and YOLO, available from online sources (some of which have suitable licences) that can be fine-tuned for similar applications through a process of transfer learning.
Moving beyond the training phase, each application has its own list of inference and associated software requirements. Explicit conditions on power consumption, working environment, security and performance will have an impact on processor choice. Additionally, image resolution and network structure parameters have an effect on latency, throughput and accuracy of the system. The decisions and compromises made in the requirements stage are affected by both hardware and software capabilities.
The Intel OpenVINO (Open Visual Inferencing and Neural Network Optimization) software platform is fast becoming a noted tool for efficient inference across Intel devices. The built-in Inference Engine allows easy integration with the different hardware, of which FPGAs are a prime example. Until recently the platform has centred around vision applications, supporting a wide range of CNN architectures for classification, segmentation or object detection; however, the product is expanding to cover translation and NLP.
OpenVINO consists of a Python-based Model Optimiser, which acts as a built-in converter and platform-agnostic optimisation tool, and the Inference Engine. The Model Optimiser can take a network trained with any framework (TensorFlow, PyTorch, MXNet, Caffe, etc.) and, by removing training hooks and merging superfluous layers, effectively improves inference times.
Integrated plugins allow an easy route to inference on FPGAs on accelerator cards, and a separate flow encompasses the embedded realm. The advantage of embedded is a low-power, bespoke solution for many applications. Working together, the Programmable Solutions Group (including the newly acquired Omnitek) is able to provide IP ranging from image preprocessing to security functions (such as weights encryption) to quantized and binary options for exceptionally fast inference.
New Space applications require more compactness and modularity, in a global context of increasing overall performance. During the 2018 SEFUW conference, we introduced the concept of our Computer Core FUSIO RT project under a CNES R&T activity. The goal of the presentation is to provide the latest information on this solution (available since 2019) in terms of technology and characteristics, fitting space environment constraints.
FPGA-based computer units are common in space designs thanks to the FPGA's high flexibility, high performance and short time to market at an affordable price.
Most computer-board or processing-unit designs require an FPGA with reliable configuration memory, computing memory and storage memory.
FUSIO RT is a new family of Computer Cores providing several different modules with basic structure (FPGA + configuration memory: SPI NOR TMR) with or without additional memories such as storage memory (NAND) and/or computing memory (SDRAM).
Providing such kind of modular Computer Core brings key benefits, such as:
• drastically reducing the size of the overall system,
• lowering the overall weight,
• providing basic, ready-to-use technology bricks,
• making designs easy to upgrade (tools & form factor compatible),
• ensuring all embedded functions are radiation proven,
• speeding up development by using FUSIO RT.
In this update, we will focus in particular on the new available tools and solutions related to the use of the FUSIO RT family, such as power management, the Development Kit and tools for a quick and easy start to prototyping.
Most of the latest High-End FPGAs are SRAM-based. They present high performance, high density (more configurable resources) and high flexibility (partial reconfiguration), and are thus widely deployed in space, primarily for payload-type applications. SRAM-based FPGAs require a configuration memory to reload the configuration pattern at power-on. The main requirement for a configuration memory is reliability throughout the mission lifetime. It has to have zero errors; otherwise, the FPGA functionality can change. Translated into radiation requirements for bitstream storage devices, this means:
• TID dependent on mission requirements
• Single Event Latchup (SEL) immunity
• Single Event Upset (SEU) immunity
• Single Event Functional Interrupt (SEFI) immunity.
The latest High-End FPGAs require increasingly large configuration images. For example, the NanoXplore NG-MEDIUM requires a bitstream of over 50 Mbit, while the Xilinx KU060 requires a 192 Mbit bitstream. High density is now a requirement for configuration memories, alongside radiation tolerance, reliability, data integrity and security.
In this talk, 3D PLUS will present its latest products and plans for the near future, responding to the requirements detailed above. They are:
• 128 Mbit TMR (Triple Modular Redundancy) SPI NOR Flash
• 256 Mbit TMR Radiation Intelligent QSPI NOR Flash
• SOI Based MRAM with SPI Interface
We present our portfolio of QML-V certified high-speed cache QDR SRAM devices, which boost the memory throughput performance of space-grade FPGAs. In addition, we introduce our high-density NOR Flash boot memory solutions and ecosystems to support all modern space-grade FPGAs.
Spacechips develops ultra-high-throughput on-board processing and transponder products for telecommunication, Earth-observation, internet and M2M/IoT satellites. We compare and share design-in experiences of the latest ultra-deep-submicron space-grade FPGAs and MPSoCs: Xilinx's 20 nm Kintex UltraScale KU060, NanoXplore's 28 nm NG-ULTRA and Microchip's 28 nm RTPolarFire.
Xilinx's KU060 and NanoXplore's NG-ULTRA are SRAM-based FPGAs while Microchip's RTPolarFire is based on SONOS flash technology. We compare and share design-in experiences for the three devices as well as presenting future prototyping solutions.
Xilinx's, 20 nm, Kintex UltraScale KU060 SRAM FPGA offers the space industry a step-change in reconfigurable, on-board processing. For missions with higher-reliability needs, a scrubber may be required to preserve the integrity of the KU060's configuration memory.
A number of internal and external scrubbing options will be presented for the KU060 from blind scrubbing to read-back with partial re-configuration in the background during normal operation. This will include SEFI detection of the SelectMAP interface.
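As a conceptual sketch only (real KU060 scrubbing goes through the SelectMAP/ICAP interface with per-frame ECC, and real frames are 123 words), the C fragment below models readback scrubbing as a compare-and-rewrite of configuration frames against a golden image; the frame size and interface here are illustrative assumptions:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint32_t frame_t[4];  /* toy frame size for illustration */

/* Readback scrubbing sketch: read back each frame, compare against the
 * golden copy, and rewrite only upset frames (the idea behind partial
 * reconfiguration in the background). Returns the number of frames
 * repaired; a blind scrubber would instead rewrite every frame. */
int scrub_readback(frame_t *cfg, frame_t *golden, size_t nframes)
{
    int repaired = 0;
    for (size_t f = 0; f < nframes; f++) {
        int dirty = 0;
        for (int w = 0; w < 4; w++)
            dirty |= (cfg[f][w] != golden[f][w]);   /* upset detected */
        if (dirty) {
            for (int w = 0; w < 4; w++)             /* rewrite the frame */
                cfg[f][w] = golden[f][w];
            repaired++;
        }
    }
    return repaired;
}
```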
The three devices offer sufficient processing resources to realise on-board AI for telecommunication, Earth-Observation and science missions. A number of machine-learning applications will be presented as well as the related hardware challenges.
The dilemma of on-board payload data-processing units has not changed for decades: the resolution of remote sensing units is continually increasing so that the data rate is ever-increasing, while the downlink bandwidth remains limited. The demand for compact, high-performance data handling and data storage modules is growing rapidly. The design must be scalable and very flexible to meet different capacity and data-rate requirements. At the same time, the flexibility should not harm the maturity of the design, since high quality must be guaranteed under a tight schedule.
We will present a flexible FPGA architecture, which implements the most timing-critical data acquisition, buffering, storage and downlink functions in a compact and high-performance PDHU, presented at DASIA 2019 [Paper Title: A Compact High-Performance Payload Data Handling Unit for Earth Observation and Science Satellites]. The main PDHU features are:
• Scalable Flash controller, up to 4 partitions can be supported. The mass memory capacity is scalable.
• Flexible logical to physical address mapping. The parallel accessed flash devices may have different flash block addresses so that flexible wear levelling algorithm can be supported in software.
• Flexible flash access interleaving scheme to maximize the flash interface performance.
• Double symbol error correction (Reed-Solomon) for flash mass memory.
• Flexible packet management: e.g., packet store based file management; logical address based mapping; application identifier based mapping; customized mapping.
• All memories (external and internal) are EDAC protected (Hamming / Reed-Solomon).
• Variable payload interfaces, e.g. WizardLink, SpaceWire, Channel links, etc.
• Variable downlink interfaces, e.g. WizardLink, LVDS parallel interface, etc.
• External SRAM as context memory to amend internal BRAM limitation (especially for RTAX FPGA).
• Variable TM/TC interfaces, e.g., CAN-Bus, MIL-Bus, SpaceWire, UART, HPC (high power command), RSA (relay status acquisition), etc.
• Implementation on RTAX / ProASIC3E / RTG4 FPGAs.
• On RTAX FPGAs, 2.2 Gbps data acquisition plus 640 Mbps data downlink were reached in an industrial project.
• On RTG4 FPGAs, aggregate 6Gbps for simultaneous acquisition and downlink.
• Optional online/offline data compression/encryption.
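As a toy illustration of the Hamming-type EDAC listed above (the flash path uses the stronger Reed-Solomon code), the C sketch below encodes 4 data bits into a Hamming(7,4) codeword, which can correct any single-bit error; real memory EDAC uses wider variants of the same construction:

```c
#include <stdint.h>

/* Hamming(7,4) encoder, even parity. Codeword bit layout (LSB first):
 * p1 p2 d1 p3 d2 d3 d4, where p1 covers {d1,d2,d4}, p2 covers {d1,d3,d4}
 * and p3 covers {d2,d3,d4}. A decoder recomputes the parities to obtain
 * a syndrome that directly points at any single flipped bit. */
uint8_t hamming74_encode(uint8_t d)   /* d: 4 data bits */
{
    uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;
    uint8_t p2 = d1 ^ d3 ^ d4;
    uint8_t p3 = d2 ^ d3 ^ d4;
    return (uint8_t)(p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6);
}
```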
As a result, the architecture has been implemented in the JUICE mission (on-board computer mass memory board), the Biomass PDHU, the FLEX PDHU, the KACST Satellite Computer Board with mass memory, the Mass Memory Module for KARI Kompsat-7 and S4Pro (an H2020 demo project). The same architecture is being adopted in several on-going PDHU proposals as well. The architecture also allows the integration of OBC functionalities, so that it can serve as a potential OBC-MM as well. Detailed configurations (interfaces, capacities, data rates, etc.) will be presented in the workshop.
Thales Alenia Space in the UK have developed a modular unit for rapid spacecraft system prototyping based on the Xilinx Zynq UltraScale+ MPSoC. The unit is delivered in Compact PCI (CPCI) form factor and supports a mixture of Space and Terrestrial TM/TC and Data interfaces:
• SpaceWire x 3
• SpaceFibre x 2
• CAN x 2
• 1553 x 2
• RS422 UART x 2
• Gigabit Ethernet x 2
• USB2.0/3.0 x 4
The CompactPCI Carrier Card functionality can be extended and tailored via an industry-standard FPGA Mezzanine Connector (FMC) which allows expansion cards with additional functionality to be used. A project-specific Mezzanine Module has been developed that includes opto-isolated trigger drivers for cameras, thermistor acquisition and additional TM/TC and Data interfaces.
In the context of H2020, the unit is being used as a rapid development platform for autonomous Space Robotics demonstrators based on a mixture of low and high TRL sensors.
The first release of the unit has been developed under the Horizon 2020 Integrated 3D Sensor Suite (I3DS) programme. The unit has been further developed under the H2020 European Robotic Orbital Support Services (EROSS) and Planetary Robots Deployed for Assembly and Construction (PRO-ACT) programmes.
The unit is a good starting point for understanding how to approach development with the next generation of Space Grade MPSoC FPGAs such as the NanoXplore NG-ULTRA.
The MPSoC architecture of the UltraScale+, with its quad-core ARM Cortex-A53 application processors, dual-core ARM Cortex-R5 real-time processors and extensive FPGA fabric with AXI interconnect, is complex and involves a steep learning curve. The robotics applications of the technology for I3DS, EROSS and PRO-ACT, together with the lessons learned, will be presented.
Permanent Magnet Synchronous Motor (PMSM) control is a field where real-time processing capabilities play a substantial role in the system’s performance. The usual tradeoff for the processing elements amounts to choosing between a DSP or an FPGA, with the latter seen as more complex to develop and maintain.
This tradeoff usually does not hold out against the specific constraints in the space industry: radiative environments causing SEEs (Single Event Effects), fault tolerance and reliability constraints. One favored answer was to rely on a thoroughly screened space-grade FPGA to bear the brunt of the reliability goals. However, the strong push to reduce recurring costs provides an incentive to explore architecture-based fault-tolerance solutions, where cheaper components verify each other dynamically.
In this talk we present a distributed motor control architecture based on GUARDS (“A Generic Upgradable Architecture for Real-Time Dependable Systems”) allowing:
We will then explain why using an industrial-grade FPGA is a particularly good fit for this type of architecture, and how we make it work at Watt & Well: rapid prototyping on reprogrammable FPGAs allows for quick algorithm de-risking; the inherent parallel architecture of FPGAs allows flexibility in protocol design and makes meeting the real-time deadlines easier; finally, fine-grained verification capabilities make reaching a very high design assurance level a systematic process.