In-orbit servicing and debris removal are two of the most important emerging sectors of the space industry. The ever-increasing amount of space debris necessitates the development of debris removal, servicing, and life-extension capabilities. These capabilities rely heavily on novel Guidance, Navigation and Control (GNC) technologies, notably Vision-Based Navigation (VBN), which uses visual and depth sensors such as cameras or LIDAR to approach a target. VBN employs a variety of Image Processing (IP) algorithms, such as Feature Tracking and Model Matching.
The high reliability requirements of space missions call for intensive testing and validation of these algorithms. This is best achieved by testing them in realistic scenarios close to the final use case, which is impossible before launch. Projects therefore often rely on artificial data to test their IP algorithms, either rendered synthetically or collected in a laboratory.
The first approach consists of rendering the images a sensor would produce of the target and its environment using physically and radiometrically accurate models. This approach has several drawbacks. Firstly, even slight deviations between the parameters chosen for the simulation and reality can severely degrade representativeness. Secondly, rendering these environments repeatedly is extremely computationally intensive and therefore time-consuming.
The laboratory approach consists of taking images of a physical mock-up of the target with a real visual sensor in a specialised facility. This approach allows the flight sensor itself to be used and thus produces more representative data.
This comes at the expense of the considerable effort required to build a realistic mock-up and recreate the illumination conditions. Moreover, the image acquisition process is highly time-consuming, so only a limited amount of data can be produced.
This project explores an innovative solution: augmenting a sensor dataset using the images in the dataset itself. This data-driven approach applies image transformations to the existing data to generate additional points of view not contained in the original dataset. It could allow a user to perform a single open-loop image rendering or acquisition campaign and then run closed-loop tests entirely digitally, vastly enhancing the usability of any existing dataset. We present the principles behind this approach, evaluate its capability to represent a new point of view, and compare the results with synthetic data.
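To illustrate the kind of image transformation involved, the sketch below warps an existing image to simulate a small rotation of the virtual camera via the induced homography H = K R K⁻¹. This is only a minimal, assumed instance of the idea (exact for pure camera rotations or planar targets; a real pipeline for non-planar spacecraft targets would also need depth information), and all names, intrinsics, and parameter values here are illustrative, not taken from the project.

```python
import numpy as np

def rotation_homography(K, R):
    """Homography induced by a pure camera rotation R (3x3) with intrinsics K.
    Exact only for rotation-only viewpoint changes or planar scenes."""
    return K @ R @ np.linalg.inv(K)

def warp_image(img, H):
    """Inverse-map every target pixel through H^-1 and take the nearest
    source pixel. Pixels mapping outside the source image stay black."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]                      # target pixel grid
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts                                  # back-project to source
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out

# Example: rotate the virtual camera 5 degrees about its vertical axis.
# The intrinsics K (focal length, principal point) are made up for the demo.
K = np.array([[500.0, 0.0, 64.0],
              [0.0, 500.0, 64.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
img = np.random.default_rng(0).integers(0, 255, (128, 128), dtype=np.uint8)
new_view = warp_image(img, rotation_homography(K, R))
```

In this toy form, nearest-neighbour sampling and the rotation-only assumption limit fidelity; the point is only that a new viewpoint can be synthesised from an already-acquired image rather than from a fresh rendering or laboratory acquisition.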