Atmospheric turbulence severely degrades video quality by introducing distortions such as geometric warping, blur, and temporal flickering, posing significant challenges to both visual clarity and temporal consistency. Current state-of-the-art methods are based on transformer and 3D architectures and require multi-frame input, but their large computational cost and memory usage limit real-time deployment, especially in resource-constrained scenarios. In this work, we propose RMFAT, a Recurrent Multi-scale Feature Atmospheric Turbulence mitigator designed for efficient and temporally consistent video restoration under atmospheric turbulence (AT) conditions. RMFAT adopts a lightweight recurrent framework that restores each frame using only two inputs at a time, significantly reducing the temporal window size and computational burden. It further integrates multi-scale feature encoding and decoding with temporal warping modules at both the encoder and decoder stages to enhance spatial detail and temporal coherence. Extensive experiments on synthetic and real-world atmospheric turbulence datasets demonstrate that RMFAT not only outperforms existing methods in clarity restoration (with nearly a 9% improvement in SSIM) but also achieves significantly faster inference (a more than fourfold reduction in runtime), making it particularly well suited to real-time atmospheric turbulence suppression.
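To illustrate the recurrent two-input scheme described above, here is a minimal sketch of the per-frame restoration loop. The function names and the trivial blending step are hypothetical stand-ins for RMFAT's actual network; the point is only that each step consumes two inputs (the current degraded frame and the previous restored frame) rather than a full multi-frame window.

```python
import numpy as np

def restore_frame(current, prev_restored):
    # Hypothetical placeholder for RMFAT's restoration network:
    # here we simply blend the two inputs. The real model applies
    # multi-scale encoding/decoding with temporal warping.
    return 0.5 * (current + prev_restored)

def recurrent_restore(frames):
    """Restore a degraded sequence one frame at a time, using only
    two inputs per step, instead of a large temporal window."""
    restored = [frames[0]]  # first frame initialises the recurrence
    for frame in frames[1:]:
        restored.append(restore_frame(frame, restored[-1]))
    return restored

degraded = [np.full((4, 4), float(i)) for i in range(5)]
outputs = recurrent_restore(degraded)
print(len(outputs))  # one restored frame per input frame
```

Because each step depends only on the previous output, memory usage stays constant in the sequence length, which is what enables the real-time deployment claimed above.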
Videos are from the data_CLEAR dataset; all subjects gave consent for their images to be shown. If the videos do not play correctly, please consider using Chrome or downloading them.
If you use our work, please cite:
@misc{liu2025rmfatrecurrentmultiscalefeature,
  title={RMFAT: Recurrent Multi-scale Feature Atmospheric Turbulence Mitigator},
  author={Zhiming Liu and Nantheera Anantrasirichai},
  year={2025},
  eprint={2508.11409},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.11409},
}