Abstract
Traditional remote sensing image fusion methods often introduce spectral distortion, and most deep-learning-based fusion methods fail to make full use of the information in each convolutional layer. To address these problems, this paper combines the characteristics of densely connected convolutional networks and residual networks and proposes a new fusion network. The network builds multiple dense convolutional blocks to fully exploit the hierarchical features of the convolutional layers, while transition layers between blocks accelerate the information flow, so that features are reused to the greatest extent and rich features are extracted. The network applies residual learning to fit the residual between deep and shallow features, which speeds up convergence. In the experiments, multispectral (MS) and panchromatic (PAN) images from GaoFen-1 (GF-1) and WorldView-2/3 (WV-2/3), with a spatial-resolution ratio of 4 between MS and PAN, are used to evaluate the effectiveness of the proposed method. In terms of both visual quality and quantitative assessment, the fusion results of the proposed method are superior to those of the compared traditional and deep learning methods, and the network is robust, generalizing to images from other satellites without pre-training. By reusing features, the proposed method achieves high spectral fidelity and improves spatial detail resolution, which benefits applied research on remote sensing imagery.
Pan-sharpening (also known as remote sensing image fusion) aims to generate multispectral (MS) images with high spatial and high spectral resolution by fusing high-spatial-resolution panchromatic (PAN) images with low-spatial-resolution, high-spectral-resolution MS images. Traditional pan-sharpening methods mainly include component substitution, multiresolution analysis, and model-based optimization. These fusion methods rely on linear models, which make it difficult to achieve an appropriate trade-off between spatial improvement and spectral preservation, and they often introduce spectral or spatial distortion. Recently, many fusion methods based on deep learning have been proposed. However, their networks are relatively shallow, and detailed information is inevitably lost during feature transfer. Hence, we propose a deep residual network with dense convolution for pan-sharpening.

As a network becomes deeper, the features of different levels become complementary to one another, yet most deep-learning fusion methods fail to make full use of the information in each convolutional layer. In a densely connected convolutional network, each layer within a dense block takes the features of all previous layers as input. To fully utilize the features learned by all convolutional layers, we establish multiple dense convolutional blocks to reuse features, and a transition layer between every two blocks accelerates the information flow. Together, these maximize the use of features and extract rich features. Given the strong correlation between deep and shallow features, residual learning is used to supervise the dense convolutional structure to learn the difference between them, that is, the residual features.
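The dense connectivity and transition-layer pattern described above can be sketched in a few lines. This is an illustrative toy in NumPy, not the paper's implementation: each "layer" is reduced to a per-pixel 1×1 linear map with ReLU, and the layer counts, channel sizes, and growth rate are arbitrary choices for demonstration.

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 "convolution": a per-pixel linear map over channels, with ReLU.
    # x: (H, W, C_in), w: (C_in, C_out)
    return np.maximum(x @ w, 0.0)

def dense_block(x, weights):
    # Dense connectivity: each layer receives the concatenation of the
    # block input and all preceding layers' outputs, so every layer's
    # features are reused by all later layers.
    features = [x]
    for w in weights:
        out = conv1x1(np.concatenate(features, axis=-1), w)
        features.append(out)
    return np.concatenate(features, axis=-1)

def transition(x, w):
    # Transition layer between blocks: compresses the accumulated
    # channels to keep the information flowing between blocks.
    return conv1x1(x, w)

rng = np.random.default_rng(0)
H, W, C, growth = 8, 8, 4, 3
# 3 layers in one dense block; input channels grow by `growth` per layer.
ws = [rng.standard_normal((C + i * growth, growth)) * 0.1 for i in range(3)]
block_out = dense_block(rng.standard_normal((H, W, C)), ws)
print(block_out.shape)   # (8, 8, 13): 4 input channels + 3 layers * growth 3
compressed = transition(block_out, rng.standard_normal((13, 6)) * 0.1)
print(compressed.shape)  # (8, 8, 6)
```

The point of the sketch is the channel bookkeeping: the block output concatenates its input with every layer's output (4 + 3 + 3 + 3 = 13 channels here), and the transition layer then shrinks that back down before the next block.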
Thus, residual learning combines shallow features and residual features to obtain higher-level information from the MS and PAN images, preparing for fused images with high spatial and spectral resolution.

To evaluate the effectiveness of the proposed method, we conduct simulated and real-image experiments on 4-band GaoFen-1 data and 8-band WorldView-2 data covering multiple land types. The trained network generalizes well to WorldView-3 images without pre-training. Visual and quantitative assessment results show that the high-resolution fused images obtained by the proposed method are superior to those produced by traditional and deep learning methods. By reusing features, the proposed approach achieves high spectral fidelity and enhances spatial details.

The proposed method makes comprehensive use of the advantages of dense convolutional blocks and residual learning. In the feature extraction stage, features of different levels are concatenated through the dense convolutional blocks. This makes the transmission of features and gradients more effective, alleviates the vanishing-gradient problem, and provides rich spatial and spectral features for the fusion result. In the feature fusion stage, residual learning learns the difference between deep and shallow features, that is, the residual features, which accelerates the convergence of the network. The experimental results show that our network has good fusion and generalization abilities.
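The fusion-stage residual learning can likewise be sketched as "shallow features plus a learned residual". The following NumPy toy is an assumption-laden illustration, not the trained network: the dense-convolution branch is replaced by a random per-pixel linear map, the upsampling is nearest-neighbour, and the 4:1 MS/PAN resolution ratio matches the experimental setting described above.

```python
import numpy as np

def upsample4x(ms):
    # Nearest-neighbour upsampling of the low-resolution MS image to the
    # PAN scale (the 4:1 spatial-resolution ratio used in the experiments).
    return ms.repeat(4, axis=0).repeat(4, axis=1)

def fuse(ms, pan, residual_net):
    # Residual learning at the fusion stage: the deep branch predicts only
    # the difference (residual) between the desired high-resolution MS
    # image and the upsampled input MS image (the shallow features).
    ms_up = upsample4x(ms)                           # shallow features
    x = np.concatenate([ms_up, pan[..., None]], axis=-1)
    return ms_up + residual_net(x)                   # shallow + residual

rng = np.random.default_rng(1)
bands = 4
ms = rng.random((16, 16, bands))   # low-res MS, e.g. 4-band GF-1
pan = rng.random((64, 64))         # high-res PAN

# Stand-in for the trained dense-convolution branch: a per-pixel linear
# map from (bands + 1) stacked input channels to `bands` output channels.
w = rng.standard_normal((bands + 1, bands)) * 0.01
fused = fuse(ms, pan, lambda x: x @ w)
print(fused.shape)  # (64, 64, 4)
```

Because the network only has to fit the (small) residual rather than the full high-resolution image, the identity part of the mapping comes for free, which is what speeds up convergence.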