Description
The European Union is now developing a federated SST system, composed of existing sensors and operations centres in Europe, through the EU SST Support Framework. Potential future architectures are also being evaluated for the development of new sensors, including both radars and telescopes, and both tracking and surveillance sensors. This creates the need to analyse the performance of different sensor network architectures and topologies.
The performance of these architectures is typically measured in terms of the number of observable objects and the number of catalogable objects. By definition, an object is considered observable if the sensor network can observe it at least once and generate the corresponding track. Similarly, an object is considered catalogable if it can be maintained in the catalogue through the update of its orbital information whenever tracks corresponding to the object are generated during survey observations. To do so, new tracks need to be correlated to the right object. Hence, the catalogability of an object is directly related to the ability of the system to correlate tracks properly. This depends on the revisit time (i.e., the frequency of observations) for a given object population and sensor network, and it is also driven by the ability of the on-ground infrastructure to maintain the catalogue and predict the orbits of the objects, which depends in turn on the accuracy of the radar measurements and of the space weather predictions (solar flux and geomagnetic activity, among others).
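As an illustration of the revisit-time concept, the following minimal sketch computes the mean and maximum gap between consecutive tracks generated for a single object; the track epochs are hypothetical example data, not results from the paper:

```python
from statistics import mean

def revisit_times(track_epochs_hours):
    """Mean and maximum gap between consecutive tracks of one object.

    track_epochs_hours: sorted list of epochs (in hours) at which the
    sensor network generated a track for the object (hypothetical data).
    """
    gaps = [b - a for a, b in zip(track_epochs_hours, track_epochs_hours[1:])]
    return mean(gaps), max(gaps)

# Hypothetical track epochs for one object over two days of survey
epochs = [0.0, 11.5, 23.0, 35.0, 47.5]
mean_gap, max_gap = revisit_times(epochs)
print(mean_gap, max_gap)
```

The maximum gap, rather than the mean, is what the 24-hour rule of thumb discussed below would be compared against.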
Many previous studies rely on a rule of thumb stating that an object is catalogable if its revisit time is below 24 hours. However, this assumption is not properly justified. Additionally, apart from the catalogability of an object, another aspect to consider is the accuracy of the orbital information estimated from the correlated observations. Again, concepts such as catalogued and well-catalogued are normally defined in terms of the revisit time of the objects. In the frame of these studies, two types of analyses can be performed: those based on coverage analysis and those based on full cataloguing processes. The former are less time consuming but less accurate, as they analyse the observability windows of the objects in the population and apply rules of thumb for the revisit time (e.g., 24 hours) to determine the percentage of the population that can be catalogued. The latter are much more time consuming and provide more insight, but their results are driven by the specific implementation of the correlation algorithms. Hence, coverage analyses are normally preferred for preliminary design.
This paper presents a new methodology, suited for Low Earth Orbit, to determine through a coverage analysis the population of objects that can be catalogued by a given sensor network, as well as the expected accuracy of the orbital information generated from its observations. The number of catalogable objects is derived from the observable population, assuming that tracks can be correlated to the right object as long as the position uncertainties (i.e., covariances) of the objects do not overlap. The growth of the position covariance of each object is driven by two main uncertainties: that of the initial estimate of the semi-major axis of the orbit and that of the drag effect on the object. The former is related to the accuracy of the sensor observations (and also to the observation geometry), while the latter is related to the knowledge of the space environment (i.e., the capability to model atmospheric density). Depending on the object's altitude, the drag effect may dominate over the initial semi-major-axis uncertainty. Moreover, the longer the revisit time, the more relevant the drag effect becomes. On the other hand, the more populated the orbits become, the more important the initial semi-major-axis uncertainty becomes, since the more objects there are, the sooner their position covariances overlap.
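A simplified version of this covariance-growth argument can be sketched as follows. For a near-circular orbit, the dominant along-track error grows roughly linearly with the semi-major-axis error and quadratically with the uncertainty in the drag-induced decay rate; objects are assumed uniformly spaced along one orbit, and the 2-sigma overlap criterion and all numerical values are illustrative assumptions, not the paper's actual model:

```python
import math

MU = 3.986004418e14  # Earth gravitational parameter [m^3/s^2]

def along_track_sigma(a, sigma_a, sigma_adot, t):
    """Approximate 1-sigma along-track position uncertainty after t seconds.

    a          : semi-major axis [m]
    sigma_a    : 1-sigma uncertainty of the estimated semi-major axis [m]
    sigma_adot : 1-sigma uncertainty of the drag-induced decay rate [m/s]
    First-order model: linear drift from the semi-major-axis error plus
    quadratic drift from the (uncertain) drag decay, combined in quadrature.
    """
    n = math.sqrt(MU / a**3)             # mean motion [rad/s]
    lin = 1.5 * n * sigma_a * t          # drift from semi-major-axis error
    quad = 0.75 * n * sigma_adot * t**2  # drift from drag-rate error
    return math.hypot(lin, quad)

def max_revisit_time(a, sigma_a, sigma_adot, n_objects, t_max=5 * 86400, dt=60.0):
    """Longest revisit time for which the covariances of uniformly spaced
    objects on the same orbit do not yet overlap (2-sigma criterion)."""
    spacing = 2 * math.pi * a / n_objects  # mean along-track spacing [m]
    t = dt
    while t < t_max and 2 * along_track_sigma(a, sigma_a, sigma_adot, t) < spacing:
        t += dt
    return t

# Hypothetical values: ~800 km altitude, 20 m semi-major-axis accuracy,
# 2000 objects sharing the orbital shell
print(max_revisit_time(7178e3, 20.0, 1e-4, 2000) / 3600, "hours")
```

Even this toy model reproduces the qualitative trends stated above: degrading the sensor accuracy (larger sigma_a) or increasing the number of objects shortens the revisit time the network can tolerate before correlation fails.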
The model described is used to characterize the performance of a sensor network composed of a single survey radar and to optimize some of its design parameters. The main free design parameters optimized in the analysis are the location of the radar (i.e., its latitude) and the elevation of the radar field of view (FoV), while other design parameters are kept fixed: the azimuth of the radar FoV always points southwards, and the size of the FoV is constant. The radar power is also considered in the analysis, so that the location and pointing elevation can be optimized as a function of radar power. It is important to note that the radar location and field of view constrain the orbit observability and revisit times: depending on them, the revisit times of the observable population vary, and hence so does the number of catalogable objects.
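The optimization described above can be sketched as an exhaustive grid search over latitude and FoV elevation for each power level. The scoring function below is a made-up placeholder standing in for the paper's coverage analysis, and the parameter ranges are assumptions chosen only so the sketch runs:

```python
import itertools

def n_catalogable(latitude_deg, elevation_deg, power_kw):
    """Hypothetical stand-in for the coverage analysis of the paper:
    returns the number of catalogable objects for a candidate design.
    The closed-form trends below are invented for illustration only."""
    coverage = max(0.0, 1.0 - abs(latitude_deg - 35.0) / 90.0)  # made-up trend
    geometry = elevation_deg / 90.0                             # made-up trend
    return int(1000 * coverage * geometry * (power_kw / 100.0) ** 0.5)

def optimise(powers_kw):
    """Exhaustive search over latitude and FoV elevation per power level."""
    best = {}
    for p in powers_kw:
        candidates = itertools.product(range(0, 71, 5), range(30, 91, 10))
        best[p] = max(candidates, key=lambda c: n_catalogable(c[0], c[1], p))
    return best

print(optimise([50, 200]))  # best (latitude, elevation) per power level
```

In the real analysis each candidate evaluation is a full coverage run, so the coarse grid and the per-power decomposition shown here keep the search tractable.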
Some of the conclusions related to cataloguing performance that are justified in the paper and derived from the analysis are:
- The higher the number of objects, the more relevant the accuracy of the sensor becomes.
- For a given sensor location and accuracy, there is a saturation limit in the number of catalogued objects, even if more objects are observed.
- Higher elevations are better in terms of the number of catalogable objects, while lower elevations are preferred in terms of revisit time and track duration.
- The rule of thumb of a 1-day revisit time for catalogable objects is too crude, as it takes into account neither the number of objects in the observable population nor the accuracy of the sensor.