Space situational awareness (SSA) systems play a significant role in space navigation missions. One of the most essential tasks of these systems is to recognize space objects such as spacecraft and debris for various purposes, including active debris removal, on-orbit servicing, and satellite formation. The complexity of object recognition in space stems from several sensing conditions, including the wide variety of object sizes, high contrast, low signal-to-noise ratio, noisy backgrounds, and varied orbital scenarios. Existing methods have targeted the classification of images containing space objects with complex backgrounds using various convolutional neural networks. These methods sometimes fail to attend to the objects in these images, which leads to misclassification and low accuracy. This paper proposes a decision fusion method that involves training an EfficientDet model with an EfficientNet-v2 backbone to detect space objects. The detected objects are then augmented by blurring and adding noise, and are passed to an EfficientNet-B4 model for training. The decisions from both models are fused to determine the final category among 11 classes. Experiments were conducted on the recently developed SPARK space object dataset, generated in a realistic space simulation environment. The dataset consists of 11 categories of objects, with 150,000 RGB images and 150,000 depth images. The proposed object detection solution yielded superior performance, demonstrating its feasibility for use in real-world SSA systems. Results show a significant improvement in accuracy (94%) and the performance metric (1.9223%) for object classification, and in mean precision (78.45%) and mean recall (92.00%) for object detection.
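The abstract does not state the exact fusion rule used to combine the two models' outputs. As a minimal sketch, assuming each model exposes per-class confidence scores over the 11 SPARK categories and that fusion is a weighted average of softmax probabilities (both illustrative assumptions, not the paper's stated method), the decision step could look like the following:

```python
import numpy as np

NUM_CLASSES = 11  # SPARK object categories

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D array of class logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fuse_decisions(det_logits: np.ndarray,
                   cls_logits: np.ndarray,
                   det_weight: float = 0.5) -> int:
    """Fuse per-class scores from the EfficientDet detector head and the
    EfficientNet-B4 classifier by weighted averaging of softmax probabilities.
    The averaging rule and the 0.5 weight are assumptions for illustration,
    not the fusion rule reported in the paper."""
    p_det = softmax(det_logits)
    p_cls = softmax(cls_logits)
    fused = det_weight * p_det + (1.0 - det_weight) * p_cls
    return int(np.argmax(fused))  # final category among the 11 classes

# Example usage with random scores standing in for real model outputs
rng = np.random.default_rng(0)
label = fuse_decisions(rng.normal(size=NUM_CLASSES), rng.normal(size=NUM_CLASSES))
print(f"Fused prediction: class {label}")
```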