(ESA/Data Systems Division), Ms Kathleen Gerlo
(ESA/Software Systems Division)
Timing Properties of the LEON3-based GR712RC Board. Implications on Task Scheduling (50m)
Activity: Laboratory (R&D)
TO: Marco Zulianello/Software Systems Division
Multicore processors have been considered an effective solution to cope with the increasing performance requirements of Critical Real-Time Embedded (CRTE) systems, like those used in the space, aerospace, automotive and railway domains. Multicores enable consolidating several functions on the same chip, reducing overall system SWaP costs. At the same time, multicore chips make it harder to provide timing guarantees on the execution of applications, which in turn complicates the timing validation and verification of the whole system. In the space domain we observe a clear trend towards multicores, with LEON3 multicore-based chips such as the GR712RC and LEON4 multicore-based boards such as the NGMP. In this presentation I will cover the time predictability and time composability properties of the GR712RC and compare it with the NGMP (ML510 board). In particular, I will cover the work this activity has done on how inter-task conflicts in the access to hardware shared resources, such as on-chip buses or shared caches, affect applications' timing behaviour. As part of this activity, special emphasis has also been put on understanding how inter-task conflicts at chip level affect both the timing analysis and the schedulability of applications on multicores.
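The interference effect described above can be made concrete with a toy analysis: inflate each task's WCET measured in isolation by a pessimistic bound on bus contention, then run a standard fixed-priority response-time test. This is only an illustrative sketch; the task parameters, core count and bus latency below are invented, not results of the activity.

```python
def inflated_wcet(wcet_isolation, bus_accesses, n_cores, bus_latency):
    """Pessimistic bound: every bus access waits behind one access
    from each of the other cores."""
    return wcet_isolation + bus_accesses * (n_cores - 1) * bus_latency

def response_time(C, higher_prio):
    """Classic fixed-priority response-time iteration.
    higher_prio: list of (Cj, Tj) for higher-priority tasks."""
    R = C
    while True:
        R_next = C + sum(-(-R // Tj) * Cj for Cj, Tj in higher_prio)  # ceil division
        if R_next == R:
            return R
        R = R_next

# Hypothetical task set: (isolation WCET, bus accesses), rate-monotonic order
tasks = [(100, 50), (300, 120)]
periods = [1000, 5000]
inflated = [inflated_wcet(c, a, n_cores=4, bus_latency=2) for c, a in tasks]

hp = []
for C, T in zip(inflated, periods):
    R = response_time(C, hp)
    print(f"C'={C} T={T} R={R} schedulable={R <= T}")
    hp.append((C, T))
```

Even this crude model shows the phenomenon the abstract describes: the low-priority task's response time grows well beyond its isolation WCET of 300 once contention and preemption are accounted for.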
Mr Francisco J. Cazorla
(Barcelona Supercomputing Center)
Methodologies and Tools for Predictable, Real-Time Heterogeneous Embedded Systems (50m)
TO: Marco Zulianello/Software Systems Division
Since the processors at the core of space systems start to show their limits in terms of processing power and system budget (cost, power consumption, mass), the adoption of heterogeneous processor platforms (i.e., platforms with different types of processing elements, such as general-purpose processors and digital signal processors) will become necessary. With respect to single-core systems, this type of architecture introduces new problems in the design flow of a task which needs to exploit multiple processing elements, for example how to divide the analysed task and to which processing element to assign its different parts. Moreover, adoption in the space system scenario further increases the complexity of the design process, since the predictability of the produced solutions has to be guaranteed. In this presentation, the research activity of an NPI contract addressing these issues is presented. The aim of this research is the formulation and implementation of a methodology to automatically port sequential C code to a heterogeneous platform. The research in particular targets the MPPB platform developed by Recore. MPPB is a heterogeneous platform composed of a LEON2 processor, 2 Xentium DSPs, heterogeneous memories and high-speed interfaces connected by a network-on-chip. The proposed design flow starts with the analysis of the C source code, possibly annotated with pragmas, which is performed using the GNU GCC compiler. The task is then divided into chunks which are assigned and scheduled on the different processing elements of the platform. Finally, the source code corresponding to the solution and directly targeting the MPPB platform is produced.
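The divide-and-assign problem in this abstract can be illustrated with a greedy list scheduler: each chunk goes to the processing element that can finish it earliest, given per-element execution times. Chunk names, costs and the PE list below are invented; the real flow derives chunks from GCC-based analysis of the annotated C code.

```python
def assign_chunks(chunks, pes):
    """chunks: {chunk_name: {pe_name: exec_time}}; pes: processing elements.
    Greedy list scheduling: place each chunk on the PE that finishes it first."""
    finish = {pe: 0.0 for pe in pes}   # accumulated busy time per PE
    plan = {}
    for name, costs in chunks.items():
        best = min(pes, key=lambda pe: finish[pe] + costs[pe])
        finish[best] += costs[best]
        plan[name] = best
    return plan, max(finish.values())  # assignment and resulting makespan

# Hypothetical costs (ms) on a LEON2 GPP and two Xentium DSPs
chunks = {
    "fft":    {"leon2": 12.0, "dsp0": 3.0, "dsp1": 3.0},
    "filter": {"leon2": 10.0, "dsp0": 2.5, "dsp1": 2.5},
    "ctrl":   {"leon2": 1.0,  "dsp0": 4.0, "dsp1": 4.0},
}
plan, makespan = assign_chunks(chunks, ["leon2", "dsp0", "dsp1"])
print(plan, makespan)
```

The DSP-friendly signal-processing chunks land on the Xentiums while the control chunk stays on the LEON2, which is the kind of partitioning the methodology automates.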
(Politecnico di Milano)
Usage of SCOC3 + Basic SW in science missions (50m)
TO: David Sanchez de la Llana/Software Systems Division
The SCOC3 ASIC is a powerful chip developed by Astrium with ESA co-funding. It contains a LEON3 plus 2 x 1553, 2 x CAN, 7 x SPW, 1 x TMTC... It is flight qualified.
The ESA TRP Contract 4000104797 has developed a set of 'flight SW drivers' for the SCOC3. The BSW has passed CDR. Further testing will be completed at ESTEC on the KERTEL board (first quarter 2014).
The product will be put under the "ESA Community License, type 3, permissive". This means free usage of the BSW and its documentation for companies from ESA member states in ESA projects.
Use cases comparing the development effort of (SCOC3 + BSW) vs. current technologies (LEON2 AT-697F + FPGAs) in real ESA projects will be presented.
An important potential decrease in SW engineering hours has been identified with the usage of (SCOC3 + BSW).
Mr David Sanchez de la Llana
(ESA/Software Systems Division)
MASTV (Multi Agent System for Testing and Verification) (50m)
Activity: Laboratory (R&D)
TO: Quirien Wijnands
Testing and Verification of software systems is one of the major phases in a software development life-cycle. It is a process that must be closely aligned with the design and development of that same software system and should start right from the very beginning of the project. The definition of test cases, their execution and the analysis of test results is a time-consuming and costly part of the software development process and is in almost all cases underestimated at the start of the project. In cases where the design, development, test definition and test execution are distributed over different persons, the process becomes even harder, since knowledge about the system needs to be transferred (e.g. what are the most critical or complex elements of the software).
Ideally, after making a modification or update to a software module, all test cases would be rerun. However, in most cases, due to limited resources (e.g. hardware availability) or limited time, this is not possible and only a subset of tests can be executed. Prioritisation of the test cases to be executed is a complex task (e.g. based on the complexity and number of software modules, the number of changes to the software, etc.). As a consequence, important test cases could be overlooked while “useless” ones are executed. In many cases, to reduce cost, test campaigns just follow a predefined list of procedures without taking into account the knowledge and responses of the system under test at all.
Testing and Verification requires expertise based on the knowledge of the system under test, but at the same time can be a very repetitive task. Agent-based expert systems can emulate this human expertise, though in well-defined problem domains. Using a knowledge-base and rule-based behaviours, a set of agents can not only automate the testing process but also bring “intelligence” into the Testing and Verification process. For example, based on test results the order of the test scenario could be modified. In this way the Testing and Verification process can be enhanced, the possibility of human errors can be reduced and a cost reduction in the long run can be foreseen.
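The kind of prioritisation heuristic discussed above can be sketched as a weighted score over module complexity, code churn and recent failures; an agent could then re-weight or re-order this dynamically based on test results. All field names and weights below are hypothetical, not part of the MASTV design.

```python
def prioritise(test_cases, weights=(0.4, 0.4, 0.2)):
    """test_cases: dicts with 'name', 'complexity' (0-1), 'churn' (0-1),
    'failed_recently' (bool). Returns names ordered by descending score."""
    w_cpx, w_churn, w_fail = weights
    def score(t):
        return (w_cpx * t["complexity"] + w_churn * t["churn"]
                + w_fail * (1.0 if t["failed_recently"] else 0.0))
    return [t["name"] for t in sorted(test_cases, key=score, reverse=True)]

# Hypothetical test cases for illustration
cases = [
    {"name": "tc_tm_downlink",  "complexity": 0.9, "churn": 0.1, "failed_recently": False},
    {"name": "tc_mode_switch",  "complexity": 0.5, "churn": 0.8, "failed_recently": True},
    {"name": "tc_housekeeping", "complexity": 0.2, "churn": 0.1, "failed_recently": False},
]
print(prioritise(cases))
```

With a limited test budget, the campaign would then run as many of the top-ranked cases as resources allow, instead of following a fixed procedure list.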
Using the concept of Agents as defined in the DAFA studies, in the MASTV activity an analysis was done on the usefulness of agents during the testing phase of a Software System.
In doing so, the concept of "Software Quality" was used: how to characterize it and how to measure it. Based on this analysis and an identified Test Process, a first design of the Multi Agent System was created.
Out of this some components (agents) were modelled and implemented as part of a software prototype.
(ESA/Software Systems Division)
EagleEye Evolution towards Time and Space Partitioning (50m)
TO: Felice Torelli/Software Systems Division
EagleEye is the reference mission implemented within the Avionics Test Bench (ATB) infrastructure to simulate an Earth Observation satellite. It is composed of a set of AOCS sensors/actuators, thermal/power subsystems and a simple optical payload (GoldenEye).
The main functions of the satellite are managed by the EagleEye Central Software.
The original EagleEye Central Software is designed in a traditional way as a monolithic application and it is characterised by the following main components:
• RTEMS RTOS (C language)
• RASTA SW Drivers (C language)
• SOIS/ECSS-E-ST-50-15C services (C and ADA languages)
• TMTC Stack (C language)
• OBOSS PUS library (ADA language)
• DMS SW (ADA language)
• GoldenEye Manager (ADA language)
• AOCS SW (C language)
EagleEye Central Software supports AT697E processor (LEON2-FT) and can be built to execute on Software Validation Facility and Real-time Test Bench configurations of ATB.
In this activity the consortium led by SSF and composed of Bright Ascension, FentISS and UPM ported the Central Software to the LEON3 processor and refactored it as a time and space partitioned system.
The upgraded Central Software is based on the XtratuM hypervisor, Edisoft RTEMS and ORK+, and is composed of the following partitions:
1. AOCS partition (single thread, C language, XAL run-time);
2. DMS partition (multi thread, ADA language, ORK+);
3. I/O partition (multi thread, C language, RTEMS);
4. GoldenEye partition (multi thread, ADA language, ORK+);
5. FDIR partition (single thread, C language, XAL run-time).
During this activity the Central Software has been validated in the ATB Software Validation Facility configuration based on TSIM-LEON3.
The presentation will provide an overview of the project and of the new architecture of the Central Software, and will highlight the lessons learned during the course of the activity.
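What "time partitioned" means in practice can be illustrated with a minimal cyclic schedule: the hypervisor repeats a fixed major frame in which each slot hands the CPU to exactly one partition. Only the partition names come from the abstract; the frame length and slot layout below are invented, not the activity's actual schedule.

```python
MAJOR_FRAME_MS = 100
SLOTS = [  # (partition, start_ms, duration_ms), non-overlapping within the frame
    ("AOCS", 0, 25), ("DMS", 25, 35), ("IO", 60, 20),
    ("GoldenEye", 80, 10), ("FDIR", 90, 10),
]

def partition_at(t_ms):
    """Return the partition scheduled at absolute time t_ms."""
    t = t_ms % MAJOR_FRAME_MS
    for name, start, dur in SLOTS:
        if start <= t < start + dur:
            return name
    return None  # idle gap, if the frame is not fully allocated
```

Because the schedule is fixed and repeats every major frame, each partition's CPU time is guaranteed regardless of what the other partitions do, which is the temporal-isolation property a TSP system relies on.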
TO: Felice Torelli/Software Systems Division
In recent years, space industry in Europe and the Agency have worked together to raise the level of standardisation of the interfaces and building blocks of spacecraft avionics. The most prominent initiatives in this context are the SAVOIR advisory group and its focus working groups, such as SAVOIR-FAIRE covering the flight software domain.
The standardisation of building blocks and interfaces follows several complementary paths; one of these is the definition of generic functional specifications for the reusable elements of the avionics. These generic specifications are meant to become an input to the procurement process for new equipment, with the purpose of having the same requirements for the same required functionality and of defining the components of a larger system architecture consistently.
One of the first candidates to become an official generic functional specification is the “Flight Computer Initialisation Sequence - ESA Requirements” document [TEC-SWS/10-373/FT] describing the basic functionality required by the initialisation sequence (i.e. boot software) of any on-board computer.
Following a review at ESA and with industry, the specification has been used in this activity to implement the boot software of the AT697E processor board used as core of the Avionics Test Bench (ATB) On-Board Computer (OBC).
The output of this development activity is a reference implementation of the initialisation sequence building block; the activity also provided valuable lessons learned to be injected into the next update of the generic specification.
This activity is complemented by the development of the infrastructure to support remote access to the ATB located in the Avionics Lab.
The remote access infrastructure will ease the access to the ATB by industrial partners and will better exploit the simulation infrastructure.
Remote access to the ATB also allows further operational scenarios to be explored, in which the space segment functions are only accessible through the Monitor and Control System.
The presentation will provide an overview of the project, of the Initialisation Sequence design and of the Remote Access features, and will highlight the lessons learned during the course of the activity.
Platform design vs. modular design: Siemens' "ProUST-FE" as a case study (50m)
Summary: A highly reconfigurable SCOE device is presented which can substitute many COTS devices and leads to a high-performance, compact EGSE. Mission experiences are discussed.

An EGSE, be it an Instrument-EGSE, a Power-SCOE or a Simulation Front-End, tends to consist of about a dozen different interface types in varying combinations: discrete analog and digital ones, and serial interfaces such as MIL-1553 and SpaceWire. Test modules are usually built around a common bus standard, be it proprietary or industry-standard such as the venerable VME or PXI. Given that the interfaces in satellite platforms are standardized to a high extent, it looks natural to reduce the diversity of COTS equipment and define a single scalable, generic and reconfigurable platform which covers the EGSE needs with an optimized "all-in-one" approach. Technical progress in microelectronics, particularly FPGA and FPAA technology, increasingly favors a design paradigm where products show a versatility unseen before.

Following the "ProUST" product, which is a platform centered around power protection, Siemens has developed a ProUST-FrontEnd device which merges serial digital interfaces with an array of multi-purpose analog/power circuits. Additionally, it optimizes I/O timing using PCI-Express-over-cable and fine-tuned logic. Access times of 0.6 μs for writes and 2 μs for reads have been achieved, which are particularly valuable in the real-time context of hardware-in-the-loop simulations. At the heart of the innovation is the insight that the mental model which draws a clear boundary between HW and SW does not reflect reality any more. Instead, flexible technologies allow tailoring a unified EGSE element to diverse usage scenarios.
Reconfigurability also brings several system-level benefits:
- Excellent self-test capabilities
- A reduced spare set
- Low risk of obsolescence
- Openness for late changes or, on the contrary, early procurement before perfect knowledge of the S/C implementation and EGSE needs.

The presentation outlines the product concept and discusses experiences from several missions.
TO: David Jameux/Data Systems Division
MOST (Modeling of SpaceWire Traffic) is a representative and powerful SpaceWire traffic simulator designed to support the conception, development and validation of SpaceWire networks. Its recent improvements have targeted simplification and performance enhancement, while the tool continues to be used for sizing the SpaceWire networks of multiple TAS missions. This presentation will focus on its current capabilities and how they were employed on real use cases, and will then present the new improvements brought to MOST.
With the increasing complexity of SpaceWire networks embedded on board satellites and the development of SpaceWire standards and components, this simulator tool proves itself more and more useful.
(Thales Alenia Space)
Reference Architecture for High Reliability - Availability Systems (50m)
TO: Claudio Monteleone/Data Systems Division
The scope of the work is the dependability assessment of on-board space computers (OBC) and approaches applied to achieve high reliability and availability of such systems, in order to provide a consolidated solution for the following objectives:
- establish generic requirements for the procurement or development of OBCs with a focus on well-defined reliability, availability, and maintainability requirements and study means, and
- provide recommendations to support the association of dependability figures to OBC configuration items throughout their life cycle (e.g. for allocation, prediction or assessment of dependability).
The results of this activity are applicable to a typical OBC of the following mission domains:
- Science and Earth Observation missions;
- Telecom missions;
- Commercial Earth observation missions.
Generic requirements for OBCs have been established taking into account the ongoing ESA study "Avionics System Reference Architecture" (ASRA). The requirements cover both functional and non-functional aspects. They identify details that have a particular impact on reliability and/or availability, and are generic enough to be applicable to a typical OBC in an unmanned, non-launcher spacecraft (e.g. an Earth observation satellite, a telecom satellite or the deep space probe of an interplanetary science mission).
A dependability plan has then been established, describing the activities, processes and procedures to be executed to provide assurance of the dependability characteristics of OBCs. The identified plan also provides a life-cycle model for OBCs and its associated outcomes.
A technical note has been produced to provide a set of guidelines about associating dependability figures to computer configuration items throughout their life cycle. It covers the technical approach for measuring the dependability of OBCs, including:
- all the phases of the lifecycle of OBCs;
- configuration item levels from basic part to set levels;
- computer hardware, software and their integration;
- theoretical and practical aspects of dependability measurement.
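As a small worked example of one of these practical aspects, the classic series model combines constant failure rates of configuration items into a reliability figure for the whole OBC. The FIT values and mission duration below are invented for illustration; real assessments of course go well beyond this.

```python
import math

def series_reliability(rates_fit, hours):
    """Items in series (any failure fails the OBC), constant failure rates
    in FIT (failures per 1e9 device-hours): R(t) = exp(-sum(lambda_i) * t)."""
    lam_per_hour = sum(rates_fit) / 1e9
    return math.exp(-lam_per_hour * hours)

obc_items_fit = [500, 300, 200]   # e.g. processor, memory, I/O boards (invented)
mission_hours = 15 * 8760         # 15-year mission
print(round(series_reliability(obc_items_fit, mission_hours), 4))
```

Allocation works in the other direction: given a required R(t) for the OBC, a total failure-rate budget is derived and apportioned across the configuration items.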
A set of recommendations, written in semi-formal language, has been provided for measuring the dependability of OBCs. The recommendations have been established for associating dependability figures to OBC configuration items throughout their life cycle, such that they can be reused, totally or in part, for the procurement or development of OBC configuration items.
A technical note has been produced to cover the approach for demonstrating the dependability of OBCs throughout their lifecycle, including theoretical and practical aspects. Several aspects of the HW/SW life-cycle that have an impact on OBC dependability are covered.
Finally, an existing ESA mission has been chosen as an application case in order to validate the approach. In this application case, the dependability has been measured and the proposed dependability assurance activities have been practically applied.
On-board Software Reference Architecture Training Material (40m)
TO: Andreas Jung/Software Systems Division
The main result of the COrDeT-2 study – which ended in December 2012 - was the definition and prototype implementation of an on-board software reference architecture (OSRA) for the development of the on-board software of future software systems.
During the preliminary dissemination of the results on the OSRA and after the review of the feedback received from early adopters of the OSRA methodology (in particular the two parallel studies named “On-board software reference architecture consolidation” – OSRAc), it became apparent that dissemination of concepts and results using exclusively the project deliverables of COrDeT-2 was inadequate.
Project deliverables are in fact mainly written according to the project logic and to respond to contractual engagements. Furthermore, they often resemble or consist of a technical specification and risk not being easily readable by a person who did not participate in the study and does not know the complete project context in which the results were obtained.
The goal of this study was therefore to create dedicated training material for the dissemination of the OSRA results. The training material has the advantage of being developed specifically with pedagogical and training goals in mind.
This presentation will recapitulate the major phases of the study: 1) preliminary analysis for the identification of major difficulties and misunderstandings concerning the OSRA; 2) selection of the stakeholders to be addressed by the training material (which includes actors of the OBSW development but also OBSW validation and other actors of the avionics / OBSW development, such as avionics engineers, database engineers, satellite operators, etc.); 3) definition of the training material product tree and realisation of the training material.
The final product tree for the training material includes two groups of documents: 1) “general training material”, which provides introductory information on the OSRA (methodology, concepts, rationale) and targets all stakeholders; and 2) “stakeholder-specific training material”, which addresses technical aspects of the OSRA in detail and targets given stakeholders.
In the training material we also highlight why the adoption of the OSRA brings considerable advantages not only to its direct users, but also to other actors of the avionics / OBSW process (e.g., avionics engineers, spacecraft database engineers, satellite operators), by highlighting potential improvements in their respective areas and enabled synergies.
Finally, we defined “training paths” for each stakeholder, in order to guide them in exploring the training material according to the tasks of their role and the relationship with OSRA.
The whole training material will be managed at SAVOIR level for dissemination in the ESA community.
(Thales Alenia Space (France))
OBSW reference architecture consolidation (1h)
TO: Andreas Jung
Presentation: TERMA GmbH
The objective of the projects was to consolidate the on-board software reference architecture specified by the CORDET2 project.
Based on existing, operational missions, the challenge was to expose the academic on-board software reference architecture to the reality of the real world.
The formal approach consisted of three parts:
A Functional Chain Analysis
The work with the functional chains resulted in:
- A mindmap, describing on-board software functional chains
- A domain specific functional chain analysis process, supported by TopCased profiles
- A formal functional chain analysis of a subset of the identified functional chains
Consolidation of the on-board software reference architecture
The on-board software reference architecture was consolidated by:
- Mapping functional chain analysis result onto the reference architecture
- Actual construction of functional chain elements in the reference architecture
- Independent (academic) verification of the reference architecture
Identification of Building Blocks
A formal specification of building blocks was proposed and applied to an example application building block and to an example platform building block.
Presentation by SSV:
The On-Board Software Reference Architecture Consolidation study had the objective to identify all building blocks and interfaces of the core on-board software reference architecture and to verify and consolidate the software architectural concepts described in SAVOIR-FAIRE documents and developed in COrDeT2. To achieve this objective, we have:
- analysed functional chains of core on-board software for a range of missions;
- defined and used a domain engineering approach and addressed the variability factors described by […];
- mapped identified functional chains onto the software architectural concepts of the software reference architecture;
- verified the suitability and compatibility of the software architectural concepts of the software reference architecture and proposed improvements;
- studied software building block interfaces and reuse possibilities, as proposed by the on-board software reference architecture documentation, in relation to ECSS-E-ST-40C and other relevant standards.
During all activities, issues encountered were collected and improvement suggestions were provided.
Future work on harmonization in the on-board software development community should address the
issues raised and can be guided by our improvement suggestions.
The study presented here is one of the two parallel studies with identical tasks.
(TERMA), Mr Victor Bos
Methods and Tools for On-Board Software Engineering (1h 45m)
Activity: STRIN (Strategic Initiative) for Ireland
This project addresses several topics where contemporary software engineering research may have useful applications in developing on-board flight software. It is made up of three parts:
On-board Software Reference Architecture – Variability Management and Product Line Engineering (30m)
TO: Andreas Jung/Software Systems Division
A major goal of the Space Avionics Open Interface Architecture (SAVOIR) standardisation initiative is to improve efficiency, reduce costs, and decrease development times. To this end, the SAVOIR-FAIRE working group aims to define a reference architecture for onboard software. This architecture is expected to provide benefits for all stakeholders, including customers (reduced development time and risk), system integrators (reduced time-to-market, and substitutability of components), and suppliers (technical stability, and diversified customer base).
In this project, the OSRAc modelling technique is extended with variability concepts from software product line engineering, allowing the software for multiple space missions to be represented as a single product family. The architecture description profiles are extended with UML stereotypes to express the variation between products, and tools are provided to (1) represent these variations as a decision tree of configuration options, (2) interactively select the configuration for a specific mission, and (3) automatically derive a mission-specific software architecture from a given configuration.
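The configuration-selection step described above can be sketched as a tiny variability model: options carry requires/excludes constraints, a mission selects options, and the selection is validated before a mission-specific variant is derived. The option names and constraints below are invented for illustration; the actual work uses UML stereotypes on the architecture description profiles.

```python
# Hypothetical variability model: option -> constraints
OPTIONS = {
    "aocs_basic":   {"excludes": {"aocs_full"}},
    "aocs_full":    {"excludes": {"aocs_basic"}},
    "star_tracker": {"requires": {"aocs_full"}},
    "pus_events":   {},
}

def derive_variant(selected):
    """Validate a selection against the variability constraints and return
    the resolved configuration (sorted for reproducibility)."""
    for opt in selected:
        meta = OPTIONS[opt]
        missing = meta.get("requires", set()) - selected
        clash = meta.get("excludes", set()) & selected
        if missing:
            raise ValueError(f"{opt} requires {missing}")
        if clash:
            raise ValueError(f"{opt} excludes {clash}")
    return sorted(selected)

print(derive_variant({"aocs_full", "star_tracker", "pus_events"}))
```

In the study, a valid selection like this is what drives the automatic derivation of the mission-specific software architecture.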
Autonomous Software Systems Development Approaches (30m)
TO: Yuri Yushtein/Software Systems Division
For unmanned space exploration, spacecraft rely on automation and robotics technologies, utilising autonomy and autonomic computing principles to safeguard the spacecraft, address operations issues, or maximise the science return. However, the design and implementation of autonomous spacecraft is an extremely challenging task. The problem stems from the very nature of such systems, where features like environment monitoring and self-monitoring allow awareness capabilities to drive the system behaviour. The first, and one of the biggest, challenges in the design and implementation of such systems is how to handle requirements specifically related to the autonomy of a system. Within the mandate of the MTOBSE project, we developed an approach to Autonomy Requirements Engineering where system goals are merged with special generic autonomy requirements. The approach helps engineers identify and record the autonomy requirements for autonomous spacecraft in the form of special self-* objectives and other assistive requirements, capturing alternative objectives the system may pursue in the presence of factors threatening the achievement of the initial system goals. As a proof-of-concept case study, we applied the approach to capture the autonomy requirements for ESA’s BepiColombo Mission to Mercury.
Time and Space Partitioning kernel formalisation (30m)
TO: Martin Hiller/Software Systems Division
We developed a Reference Specification for a Separation/Partitioning Microkernel to support Integrated Modular Avionics, as currently being explored by SAVOIR-IMA. In addition, we looked into the formal verification techniques that might be employed to verify the correctness of such a kernel implementation, to standards set by the Common Criteria (CC), most specifically their Separation Kernel Protection Profile (SKPP) for “high robustness”. The kernel so described is a Time-Space Partitioning (TSP) kernel that uses a fixed schedule ensuring temporal isolation for all partitions, and ensures all partitions have disjoint physical memory mappings. This helps to provide fault containment within partitions, ensures the tractability of verification, and reduces the scope for covert channels.
Based on the reference specification, we have constructed a formal model of the TSP kernel, using Isabelle/HOL as the formal logic, modelling notation and proof environment. In addition, we have explored importing the source code of an existing TSP implementation (XtratuM) into the formal modelling environment. The key objective of this activity is to gather enough data to assess the feasibility and cost of formally verifying such a kernel to CC/SKPP Evaluation Assurance Levels (EAL) 5 through 7. A key aspect of this is the layering of the model, which helps keep high-level proofs separate from low-level details, allowing a divide-and-conquer approach to verification, with increasing EAL levels corresponding to proofs with more detail.
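The spatial-isolation property the specification demands (disjoint physical memory mappings) reduces to a simple pairwise check over the partitions' memory regions; in the study this is stated and proved in Isabelle/HOL rather than executed. The addresses below are invented for illustration.

```python
def regions_disjoint(regions):
    """regions: list of (name, base, size). True iff no two regions overlap."""
    spans = sorted((base, base + size, name) for name, base, size in regions)
    for (s0, e0, _), (s1, e1, _) in zip(spans, spans[1:]):
        if s1 < e0:  # next region starts before the previous one ends
            return False
    return True

# Hypothetical partition memory map
parts = [("P0", 0x40000000, 0x100000),
         ("P1", 0x40100000, 0x100000),
         ("P2", 0x40200000, 0x080000)]
print(regions_disjoint(parts))
```

In the formal setting this becomes an invariant of the kernel's configuration: any two distinct partitions map to non-overlapping physical frames, which underpins the fault-containment argument.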
Space Internetworking Protocols and Delay Tolerant Networking Prototyping (45m)
TO: Chris Taylor/Data Systems Division
The primary objective of the study was to take a critical view of the proposal to move to a network architecture based on Delay/Disruption Tolerant Networking (DTN). The DTN protocols are still emerging and are mostly targeted at terrestrial applications; their suitability for space usage is therefore unproven. A network architecture using a network layer protocol for all interconnection is fully appropriate for terrestrial applications, but for remote space assets there are issues related to contingency and emergency operations which may require alternative or additional connectivity and functionality, for example for emergency commanding.
As a result, there were a number of key objectives of the study:
• To scrutinise the use of a network based architecture using DTN based protocol suites for use in ESA and cooperative missions and to identify any shortcomings or required changes and additions.
• To determine the impact on the existing space infrastructures, including protocols and the flight data handling system.
• To gain experience with operating DTN through the provision of a test-bed infrastructure suitable for cooperating in international experiments.
• To identify a deployment policy which takes account of the use and capabilities of the existing CCSDS and ECSS protocols, in particular CFDP.
• To provide a simulation capability able to evaluate complex internetworking scenarios.
• To complement on-going ground studies by focussing on flight segment aspects.
• To provide analysis and feedback to the CCSDS DTN working group.
The basic DTN protocol evaluation was required to be performed using a test-bed implementation, but for more complex architectures the use of a simulator was deemed a more appropriate approach. Therefore a simulator was provided which can be configured to evaluate CFDP and CFDP-over-DTN scenarios involving multi-hop, multi-route configurations. The simulator covers the use of existing CCSDS protocols as the underlying data link, including the CCSDS Proximity-1 protocol. In order to investigate the operation and capabilities of DTN and compare it against CFDP, a testbed implementation was provided as an update to the existing RASTA infrastructure.
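The store-and-forward behaviour such a simulator exercises can be illustrated with a toy earliest-arrival computation over a contact plan: a bundle waits at each node until the next contact window opens. Node names and windows below are invented, and real contact graph routing also accounts for data rates, volumes and one-way light times.

```python
def earliest_arrival(contacts, src, dst, t0):
    """contacts: (from, to, start, end) windows; transmission itself is
    treated as instantaneous for simplicity. Relax until no arrival improves."""
    best = {src: t0}
    changed = True
    while changed:
        changed = False
        for a, b, start, end in contacts:
            if a in best and best[a] <= end:
                arrive = max(best[a], start)   # wait for the window to open
                if arrive < best.get(b, float("inf")):
                    best[b] = arrive
                    changed = True
    return best.get(dst)  # None if no contact sequence reaches dst

# Hypothetical two-hop plan: rover -> orbiter -> earth
plan = [("rover", "orbiter", 100, 200), ("orbiter", "earth", 150, 300)]
print(earliest_arrival(plan, "rover", "earth", 0))
```

The point of the exercise is the contrast with end-to-end protocols such as CFDP: the bundle makes progress hop by hop even though no end-to-end path ever exists at a single instant.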