Improving the Spatial Resolution of Small Satellites by Implementing a Super-Resolution Algorithm Based on the Optical Imaging Sensor’s Rotation Approach
The resolution of images is crucial in image data applications, and techniques that improve spatial resolution are highly valuable for enhancing performance and outcomes. Refined (high-resolution) remote sensing has become a central direction in the development of the field. Broadly speaking, satellite image resolution can be improved in two main ways: a "hardware way" and a "software way."
Charge-Coupled Device (CCD) and Complementary Metal-Oxide-Semiconductor (CMOS) image sensors are widely used in satellite remote sensing. However, because of technological limits and constrained image data transmission rates, the resolution of images obtained even by high-resolution CCD cameras remains limited. Considering these factors, image super-resolution technology, the "software way," has been proposed. Super-resolution increases the number of image pixels in various ways, such as information fusion and prior constraints, to improve image resolution, highlight feature details, and enhance image quality.
This paper proposes a super-resolution (SR) algorithm based on rotating the CCD sensor sampler to improve the spatial resolution of small-satellite optical imaging. SR is the process of reconstructing a high-resolution (HR) image from one or more low-resolution (LR) images, enhancing image detail by increasing the effective spatial resolution of the imaging system. It avoids limitations of conventional resolution-enhancement techniques, such as noise amplification, spectral distortion, and loss of detail sharpness. However, the SR algorithm and its parameters should be selected according to the application scenario and imaging conditions.
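The multi-frame SR idea can be illustrated with a minimal shift-and-add sketch. This is a generic illustration under the assumption of known sub-pixel shifts, not the paper's reconstruction formula: each LR frame is placed onto a finer grid according to its shift, and overlapping samples are averaged.

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    """Naive multi-frame super-resolution by shift-and-add.

    lr_frames : list of 2-D arrays (LR observations of the same scene)
    shifts    : list of (dy, dx) known sub-pixel shifts, in LR pixel units
    scale     : integer upsampling factor of the HR grid
    """
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Map each LR sample onto its nearest HR grid position.
        yi = np.clip(np.round((np.arange(h)[:, None] + dy) * scale).astype(int),
                     0, h * scale - 1)
        xi = np.clip(np.round((np.arange(w)[None, :] + dx) * scale).astype(int),
                     0, w * scale - 1)
        np.add.at(acc, (yi, xi), frame)  # accumulate, handling collisions
        np.add.at(cnt, (yi, xi), 1.0)
    cnt[cnt == 0] = 1.0  # unfilled HR pixels simply stay zero
    return acc / cnt
```

In practice the unfilled HR pixels would be interpolated and a deblurring step applied; the sketch only shows how sub-pixel-shifted samples densify the grid.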
Earlier designs that used two distinct linear arrays complicated the imaging instrument: registering a pair of linear arrays was difficult, and additional hardware was needed to keep the two arrays separated during skew-maneuver observation. The innovation of this paper lies in the multirow sensor model employed to simulate the optical imaging process, together with a formula for reconstructing SR images and correcting deformations in the simulated images. Integrating the imaging sensors into a single system, rather than using separate sensors, offers a better route to improved image quality and mitigates the challenges associated with sensor separation.
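The geometric effect of rotating the line array can be sketched as follows. Under an idealised pinhole-geometry assumption (not the paper's multirow model), rotating an array of pixel pitch p by an angle θ from its nominal across-track orientation projects adjacent pixel centres onto intervals of p·cos θ across track and p·sin θ along track:

```python
import math

def oblique_sampling_intervals(pitch, theta_deg):
    """Ground sampling intervals for a line array of pixel pitch `pitch`
    rotated by `theta_deg` from its nominal across-track orientation.

    Assumption: idealised pinhole geometry. Adjacent pixels then sample
    points separated by pitch*cos(theta) across track and
    pitch*sin(theta) along track, both below `pitch` for 0 < theta < 90.
    """
    t = math.radians(theta_deg)
    return pitch * math.cos(t), pitch * math.sin(t)
```

At θ = 45° both intervals equal pitch/√2 ≈ 0.707·pitch, a denser diagonal sampling grid; at θ = 26.56° (arctan ½) the across-track interval is ≈ 0.894·pitch. This is why oblique and rotated sampling schemes are compared in terms of effective sampling frequency.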
The paper also proposes a method for quantitatively evaluating the image-quality improvement based on the Modulation Transfer Function (MTF). The results show that, compared to standard sampling, the MTF value increases by 39% at a given spatial frequency and by a factor of 1.88 at the Nyquist frequency. The sampling frequency improves by factors of 1.34 and 1.06 compared to 45° and 26.56° oblique sampling, respectively, and the Field of View (FOV) of the imaging system widens by 0.24° and 0.05° relative to the same two cases.
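One common way to obtain an MTF curve for such a comparison (a generic sketch, not necessarily the paper's exact procedure) is to take the normalised Fourier magnitude of a measured line spread function; a sharper reconstruction then shows a higher MTF at any given spatial frequency.

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """Estimate the MTF as the normalised magnitude spectrum of a
    line spread function.

    lsf : 1-D sampled line spread function (e.g. derivative of an
          edge profile measured in the image)
    dx  : sample spacing (e.g. ground sample distance)
    Returns (spatial frequencies, MTF values), with MTF[0] = 1.
    """
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]          # normalise to DC
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf
```

Evaluating the curves of a standard-sampled image and an SR-reconstructed image at the same spatial frequency yields the kind of relative improvement figures reported above.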
The proposed method effectively increases the sampling density both along the satellite's flight direction and perpendicular to it, yielding images with a higher overlap ratio and therefore more information than conventional optical satellites acquire. It addresses key issues in enhancing the resolution quality of the images. Future research should exploit advances in optical design and manufacturing to overcome the optical-imaging challenges that continue to limit image quality.