Automated Colorization of Grayscale Images Using Superpixels and K-Means Clustering
Date
2025
Journal Title
Journal ISSN
Volume Title
Publisher
Springer Science and Business Media Deutschland GmbH
Abstract
The process of transforming grayscale photos into aesthetically pleasing color images is called colorization. The primary objective of colorization is to convince the audience of the realism of the outcome. Natural scenery makes up the majority of the grayscale photographs that require colorization. A broad range of colorization techniques has been developed over the past 20 years, ranging from algorithmically simple procedures that demand time and effort due to unavoidable human participation to more complex approaches that are also more automated. The complex field of automatic conversion mixes deep learning, machine learning, and art. Most earlier deep-learning works train their models on every pixel value, which is computationally expensive. We present a methodology for colorizing grayscale images using a convolutional neural network (CNN); our method uses a combination of superpixel segmentation and K-Means clustering to significantly reduce the number of pixel values. The process begins with the conversion of grayscale images to superpixels, perceptually uniform regions that aid efficient colorization. Subsequently, K-Means clustering is applied within each superpixel to identify dominant color clusters, followed by quantization of the color information to simplify its representation. The prepared input, comprising grayscale images and quantized color information, is then fed into a CNN for colorization, which leverages spatial coherence and semantic context to predict plausible colors for grayscale pixels. The proposed methodology is evaluated on a diverse set of grayscale images, demonstrating its effectiveness in producing vibrant and visually appealing colorized outputs. Through experiments and analysis, we showcase the potential applications and benefits of the proposed approach in historical photograph restoration, movie colorization, and other domains requiring accurate and efficient grayscale image colorization.
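The per-superpixel quantization step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it substitutes a simple pixel grid for true superpixel segmentation (a stand-in for methods such as SLIC) and uses a plain NumPy K-Means to quantize the colors inside each region; the function names `kmeans` and `quantize_by_blocks` are hypothetical.

```python
import numpy as np

def kmeans(pixels, k=4, iters=10, seed=0):
    """Plain K-Means on an (N, 3) array of color values."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

def quantize_by_blocks(img, block=8, k=4):
    """Quantize colors within each grid block (a stand-in for superpixels)."""
    out = np.empty_like(img)
    h, w, _ = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block].reshape(-1, 3)
            kk = min(k, len(np.unique(patch, axis=0)))
            centers, labels = kmeans(patch, k=kk)
            out[y:y + block, x:x + block] = centers[labels].reshape(
                img[y:y + block, x:x + block].shape).astype(img.dtype)
    return out
```

After quantization, each region carries at most `k` dominant colors, which is what shrinks the color targets the CNN must learn compared with training on every raw pixel value.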
We use SSIM and PSNR as our evaluation metrics. SSIM is computed from the similarity in luminance, contrast, and structure between the reference and the obtained RGB images for the grayscale inputs, and PSNR is computed from the mean squared error (MSE) relative to the peak signal value within the images. Our methodology's SSIM and PSNR for the considered flower class are 81.5 and 25.6, respectively. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
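PSNR as used in the evaluation follows directly from the MSE. A minimal, generic sketch (not the authors' exact code, and assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, computed from the mean squared error."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, two images whose pixels differ uniformly by 10 have MSE = 100, giving PSNR = 10·log10(255²/100) ≈ 28.1 dB.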
Keywords
Colorization, Convolutional neural networks (CNNs), Deep learning, Grayscale images, Image enhancement, K-Means clustering, Superpixel segmentation
Citation
Lecture Notes in Networks and Systems, 2025, Vol. 1265 LNNS, pp. 545-555
