MDEANet: modified detail-enhanced convolution and attention-based network for dehazing of remote sensing images
Date
2025
Publisher
Springer
Abstract
Image dehazing aims to improve the quality and restore the clarity of hazy images. Haze, a common meteorological phenomenon, arises when airborne particles such as dust and smoke absorb light, degrading color accuracy, image contrast, and overall visual perception. Many applications, including environmental monitoring, disaster management, and remote sensing, rely heavily on satellite imaging. However, haze and airborne debris can considerably reduce the clarity and quality of satellite images, which limits how well they can be used and interpreted. This paper proposes MDEANet (Modified Detail-Enhanced convolution and Attention-based Network), a deep learning-based algorithm for dehazing remote sensing images. Because haze is unevenly distributed, the model includes pixel attention and channel attention blocks that treat the pixels and channels of an image differently depending on the haze distribution, giving greater flexibility for dehazing. Difference convolution (DC) captures gradient information and improves the representation and adaptation abilities of the CNN. The proposed model is trained on the RESIDE-OTS dataset. It achieves an average PSNR of 29.411, SSIM of 0.9495, and MVR of 0.0335 on RESIDE-OTS test images, and an average MVR of 0.0264 on satellite images, the best values among the compared existing models. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
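The building blocks named in the abstract can be illustrated in miniature. The sketch below is not the authors' implementation; it is a minimal NumPy rendering of the generic ideas, where the weight shapes, the hidden-layer reduction, and the `theta` parameter of the difference convolution are all assumptions for illustration: channel attention rescales whole feature channels from a globally pooled descriptor, pixel attention produces a per-location weight map, and a central difference convolution subtracts a center-weighted term from a vanilla convolution to emphasize gradients.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Rescale each channel by a learned scalar in (0, 1).

    feat: (C, H, W) feature map.
    w1, w2: illustrative MLP weights, shapes (C//r, C) and (C, C//r).
    """
    pooled = feat.mean(axis=(1, 2))                        # global avg pool -> (C,)
    weights = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))   # squeeze-excite MLP -> (C,)
    return feat * weights[:, None, None]                   # broadcast over H, W

def pixel_attention(feat, wp):
    """Rescale each spatial location by its own weight in (0, 1).

    wp: (1, C) illustrative 1x1-convolution weights.
    """
    attn = sigmoid(np.einsum('oc,chw->ohw', wp, feat))     # (1, H, W) attention map
    return feat * attn                                     # broadcast over channels

def central_difference_conv3x3(img, kernel, theta=0.7):
    """Central difference convolution on a single-channel image.

    Output = vanilla conv - theta * (center pixel * sum of kernel weights),
    so theta=0 reduces to a plain 3x3 convolution (no padding).
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            vanilla = (patch * kernel).sum()
            center_term = patch[1, 1] * kernel.sum()
            out[i, j] = vanilla - theta * center_term
    return out
```

Since both attention maps pass through a sigmoid, the scaled features are always element-wise bounded by the input features, which is what lets the network attenuate heavily hazed regions or channels without amplifying others.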
Keywords
Deep learning, Demulsification, Disaster prevention, Disasters, Environmental management, Image enhancement, Pixels, Remote sensing, Smoke, Airborne particle, Channel attention, Dehazing, Difference convolution, Image de-hazing, Meteorological phenomena, Pixel attention, Remote sensing images, Satellite images, Convolution
Citation
Multimedia Tools and Applications, 2025, 84(18), pp. 18943-18966
