Infrared and visible image fusion using a shallow CNN and structural similarity constraint
Roli, Fabio;
2020-01-01
Abstract
In recent years, image fusion methods based on deep networks have been proposed to combine infrared and visible images into a better fused image. However, issues such as limited training data, scarce reference images, and misalignment between multi-source images still limit fusion performance. To address these problems, we propose an end-to-end shallow convolutional neural network with structural constraints, which uses only one convolutional layer to fuse infrared and visible images. Unlike other methods, the proposed model requires less training data and fewer reference images, and is more robust to misalignment within an image pair. More specifically, the infrared and visible images are first fed into a convolutional layer to extract the information to be fused; then, all feature maps are concatenated and passed through a convolutional layer with a single output channel to obtain the fused image; finally, a structural similarity loss between the fused image and the input infrared and visible images is used to update the network parameters and mitigate the effects of pixel misalignment. Extensive experiments demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art methods on infrared and visible image fusion.
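The forward pass described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the kernel sizes, feature-map counts, random weights, and the simplified single-window SSIM are all assumptions, and the training loop that would minimise the loss is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    # Naive 'same' convolution: x is (H, W), kernels is (K, kh, kw) -> (K, H, W).
    K, kh, kw = kernels.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.empty((K, H, W))
    for k in range(K):
        for i in range(H):
            for j in range(W):
                out[k, i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernels[k])
    return out

def ssim(a, b, c1=1e-4, c2=9e-4):
    # Simplified global SSIM: one window covering the whole image
    # (the paper's loss would typically use local windows).
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

# Hypothetical tiny inputs in [0, 1] standing in for an infrared/visible pair.
ir = rng.random((16, 16))
vis = rng.random((16, 16))

# One convolutional layer per source (assumed: 4 feature maps, 3x3 kernels).
w_ir = rng.normal(0.0, 0.1, (4, 3, 3))
w_vis = rng.normal(0.0, 0.1, (4, 3, 3))
feats = np.concatenate([conv2d(ir, w_ir), conv2d(vis, w_vis)], axis=0)  # (8, H, W)

# Fusion layer: a 1x1 convolution collapsing the 8 feature maps to one channel.
w_fuse = rng.normal(0.0, 0.3, (8,))
fused = np.tensordot(w_fuse, feats, axes=1)  # (H, W)

# Structural-similarity loss against both sources; gradient updates omitted.
loss = 2.0 - ssim(fused, ir) - ssim(fused, vis)
print(fused.shape, float(loss))
```

Since SSIM is bounded in [-1, 1], the loss is bounded in [0, 4]; training would drive it toward 0, pulling the fused image toward structural agreement with both inputs without needing pixel-accurate alignment or a ground-truth fused image.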
File | Type | Size | Format | Access
---|---|---|---|---
IET Image Processing - 2020 - Li - Infrared and visible image fusion using a shallow CNN and structural similarity.pdf | Editorial version | 1.77 MB | Adobe PDF | Restricted to archive managers (View/Open: request a copy)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.