National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Fu, Yu-Ju (傅于洳)
Title (Chinese): 整合時空融合與深度學習於 Formosat-2 與 Landsat-8 之時序影像建置
Title (English): The Generation of Formosat-2 and Landsat-8 Time Series Images Using Spatiotemporal Fusion and Deep Learning Techniques
Advisor: Teo, Tee-Ann (張智安)
Committee Members: Shih, Tian-Yuan (史天元); Lin, Yu-Ching (林玉菁); Huang, Chih-Yuan (黃智遠); Teo, Tee-Ann (張智安)
Oral Defense Date: 2019-07-23
Degree: Master's
Institution: National Chiao Tung University
Department: Department of Civil Engineering
Discipline: Engineering
Field: Civil Engineering
Document Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107 (2018-2019)
Language: Chinese
Pages: 55
Keywords (Chinese): 時序衛星影像、時空影像融合、影像超解析
Keywords (English): time-series satellite images; spatio-temporal image fusion; super-resolution

Time-series satellite imagery is an important direction in the application of satellite images. By integrating remotely sensed images acquired at different times along the temporal axis, the spatio-temporal variation at any location on the Earth's surface can be analyzed to understand dynamic environmental processes. Spatio-temporal fusion combines two types of satellite images to generate time series with both high temporal and high spatial resolution, filling gaps along the temporal axis and improving the ability to construct time-series imagery. Spatio-temporal image fusion methods fall into five categories: unmixing-based, weight-function-based, Bayesian-based, learning-based, and hybrid methods. Weight-function-based methods are by far the most widely used, while learning-based methods have been developed mainly for super-resolution and image fusion. Previous spatio-temporal fusion studies have mostly used satellite images acquired at a fixed viewing angle, such as Landsat, MODIS, and Sentinel imagery; satellites that acquire images at varying angles through body rotation, such as Formosat-2, have rarely been considered.
This study proposes a hybrid spatio-temporal fusion technique that integrates a weight-function-based method with a learning-based method and applies it to Formosat-2 and Landsat-8 images. The goal is to combine the advantages of the two approaches and to develop spatio-temporal fusion for satellite images acquired with different sampling geometries. The proposed scheme has two major components: the physically based Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the deep-learning super-resolution method Very Deep Super-Resolution (VDSR). STARFM estimates the reflectance of the fine-resolution image with a weighting function derived from the temporal, spectral, and spatial differences between images, while VDSR uses a residual-learning convolutional neural network (CNN) to learn and predict the high-frequency detail of fine-resolution images, reconstructing a fine-resolution image from its coarse-resolution counterpart.
The fused images are evaluated with two types of quantitative indices, absolute and relative, to assess their overall spectral quality. The results show that VDSR reduces the difference between the fused and reference images and improves on fusion with STARFM alone, particularly in areas with dense vegetation.
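
As a rough illustration of the weight-function idea summarized above, the following Python sketch estimates the fine-resolution reflectance at the prediction date by weighting neighbouring pixels in a moving window by the inverse of their combined spectral, temporal, and spatial differences, in the spirit of STARFM. The window size, the multiplicative combination of the three differences, and all names are assumptions made for illustration; this is not the implementation used in the thesis.

import numpy as np

def starfm_like_predict(fine_t0, coarse_t0, coarse_t1, window=31, eps=1e-6):
    """Sketch of a STARFM-style weighted prediction for a single band.

    fine_t0   : fine-resolution image at the base date (e.g. Formosat-2)
    coarse_t0 : coarse image at the base date, resampled to the fine grid (e.g. Landsat-8)
    coarse_t1 : coarse image at the prediction date, resampled to the fine grid
    window    : moving-window size in pixels (illustrative value)
    """
    half = window // 2
    pad = lambda a: np.pad(np.asarray(a, dtype=np.float64), half, mode="reflect")
    f0, c0, c1 = pad(fine_t0), pad(coarse_t0), pad(coarse_t1)
    out = np.zeros(fine_t0.shape, dtype=np.float64)

    # Spatial distance of every window pixel from the window centre (constant per window)
    yy, xx = np.mgrid[0:window, 0:window]
    spatial = 1.0 + np.hypot(yy - half, xx - half) / half

    rows, cols = fine_t0.shape
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i, i + window), slice(j, j + window))
            # Spectral difference between fine and coarse images at the base date
            spectral = np.abs(f0[sl] - c0[sl])
            # Temporal difference of the coarse images between the two dates
            temporal = np.abs(c1[sl] - c0[sl])
            # Combined weight: small differences and nearby pixels count more
            w = 1.0 / (spectral * temporal * spatial + eps)
            w /= w.sum()
            # Predicted fine reflectance at t1 = fine(t0) plus the observed coarse change
            out[i, j] = np.sum(w * (f0[sl] + c1[sl] - c0[sl]))
    return out

The original STARFM additionally restricts the weighting to spectrally similar neighbours; that selection step is omitted here to keep the sketch short.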
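The deep-learning component named in the abstract, VDSR, learns a residual mapping with a deep stack of 3x3 convolutions. The sketch below, written with PyTorch, shows that residual-learning structure: the network predicts only the high-frequency detail, which is added back to the coarse input already upsampled to the fine grid. The depth, channel width, learning rate, and the random tensors standing in for image patches are illustrative assumptions, not the configuration reported in the thesis.

import torch
import torch.nn as nn

class VDSRLike(nn.Module):
    """Sketch of a VDSR-style residual-learning CNN for single-band super-resolution."""

    def __init__(self, depth=10, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]  # predicts the residual only
        self.body = nn.Sequential(*layers)

    def forward(self, upsampled_coarse):
        # Residual learning: add the predicted high-frequency detail back to the input
        return upsampled_coarse + self.body(upsampled_coarse)

# Minimal training-step sketch: the input is a coarse image upsampled (e.g. bicubically)
# to the fine grid, and the target is the fine-resolution reference image.
model = VDSRLike()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

coarse_up = torch.rand(4, 1, 64, 64)   # stand-in for upsampled coarse patches
fine_ref = torch.rand(4, 1, 64, 64)    # stand-in for fine-resolution reference patches
optimizer.zero_grad()
loss = loss_fn(model(coarse_up), fine_ref)
loss.backward()
optimizer.step()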
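The evaluation described above distinguishes absolute from relative quality indices. Purely as an example of that distinction (this record does not list the specific indices used), the sketch below computes root-mean-square error as an absolute index and the Pearson correlation coefficient as a relative index between a fused band and its reference.

import numpy as np

def rmse(fused, reference):
    """Absolute index: root-mean-square error between fused and reference images."""
    diff = fused.astype(np.float64) - reference.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def pearson_cc(fused, reference):
    """Relative index: Pearson correlation coefficient between the two images."""
    f = fused.astype(np.float64).ravel()
    r = reference.astype(np.float64).ravel()
    return float(np.corrcoef(f, r)[0, 1])

# Example usage with synthetic data standing in for a fused band and its reference
ref = np.random.rand(256, 256)
fused = ref + 0.05 * np.random.randn(256, 256)
print("RMSE:", rmse(fused, ref))
print("CC:  ", pearson_cc(fused, ref))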

Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables vii
Chapter 1 Introduction 1
1.1 Research Background 1
1.1.1 Development and Applications of Time-Series Image Techniques 1
1.1.2 Development and Applications of Spatio-Temporal Fusion Techniques 1
1.1.3 Development and Applications of Deep Learning Techniques 2
1.1.4 Necessity and Advantages of Developing Analysis-Ready Time-Series (ART) Images 2
1.2 Research Motivation 3
1.3 Research Objectives and Contributions 6
1.4 Thesis Organization 6
Chapter 2 Literature Review 7
2.1 Spatio-Temporal Image Fusion 7
2.2 Deep Learning for Image Processing 9
Chapter 3 Methodology 11
3.1 Data Preprocessing 11
3.1.1 Image Registration 12
3.1.2 Radiometric Correction 12
3.2 Spatio-Temporal Fusion Method 13
3.2.1 Spatial and Temporal Adaptive Reflectance Fusion Model 13
3.2.2 Time-Series Spatio-Temporal Fusion 16
3.3 Deep Learning Super-Resolution Method 16
3.3.1 VDSR Network Architecture 17
3.3.2 VDSR Training Model 17
3.4 Accuracy Assessment 19
Chapter 4 Study Area and Data 23
4.1 Study Area 23
4.2 Study Data 23
Chapter 5 Results and Analysis 28
5.1 STARFM Parameter Analysis 28
5.1.1 Moving Window Size 28
5.1.2 Temporal Density of Images 30
5.2 Spatio-Temporal Fusion Results 32
5.2.1 Analysis of Color Fusion Effects 32
5.2.2 Analysis of Image Super-Resolution Effects 35
5.2.3 Fusion Model Analysis 39
5.2.4 Relationship Between Changed Areas and Fusion Results 45
5.2.5 Analysis-Ready Time-Series Fusion Data 47
Chapter 6 Conclusions and Future Work 50
6.1 Conclusions 50
6.2 Recommendations 51
6.3 Future Work 51
References 52
張莉雪, 陳伯傳, 周士傑, & 陳乃宇. (2014). 福衛二號排程與災防雲端應用服務. 航測及遙測學刊, 18(1), 13-27.
顏伸運, & 陳靜盈. (2015). 衛星影像巨量資料儲存與應用平台建置. 航測及遙測學刊, 19(4), 303-312.
蔡博閎, & 林昭宏. (2016). 衛星影像雲遮蔽區域之移除與填補演算法. Journal of Photogrammetry and Remote Sensing, 20(3), 217-229.
Cheng, Q., Liu, H., Shen, H., Wu, P., & Zhang, L. (2017). A spatial and temporal nonlocal filter-based data fusion method. IEEE Transactions on Geoscience and Remote Sensing, 55(8), 4476-4488.
Dong, C., Loy, C. C., He, K., & Tang, X. (2016). Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2), 295-307.
Eigen, D., Krishnan, D., & Fergus, R. (2013). Restoring an image taken through a window covered with dirt or rain. In Proceedings of the IEEE International Conference on Computer Vision (pp. 633-640).
El Hajj, M., Bégué, A., Lafrance, B., Hagolle, O., Dedieu, G., & Rumeau, M. (2008). Relative radiometric normalization and atmospheric correction of a SPOT 5 time series. Sensors, 8(4), 2774-2791.
Gao, F., Masek, J., Schwaller, M., & Hall, F. (2006). On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Transactions on Geoscience and Remote sensing, 44(8), 2207-2218.
Gevaert, C. M., & García-Haro, F. J. (2015). A comparison of STARFM and an unmixing-based algorithm for Landsat and MODIS data fusion. Remote sensing of Environment, 156, 34-44.
Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 580-587).
Guo, M., Zhang, H., Li, J., Zhang, L., & Shen, H. (2014). An online coupled dictionary learning approach for remote sensing image fusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(4), 1284-1294.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
He, W., & Yokoya, N. (2018). Multi-Temporal Sentinel-1 and -2 Data Fusion for Optical Image Simulation. ISPRS International Journal of Geo-Information, 7(10), 389.
Hilker, T., Wulder, M. A., Coops, N. C., Linke, J., McDermid, G., Masek, J. G., & White, J. C. (2009). A new data fusion model for high spatial-and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sensing of Environment, 113(8), 1613-1627.
Hore, A., & Ziou, D. (2010). Image quality metrics: PSNR vs. SSIM. In 20th IEEE International Conference on Pattern Recognition (pp. 2366-2369).
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
Huang, B., & Song, H. (2012). Spatiotemporal reflectance fusion via sparse representation. IEEE Transactions on Geoscience and Remote Sensing, 50(10), 3707-3716.
Huang, B., Zhang, H., Song, H., Wang, J., & Song, C. (2013). Unified fusion of remote-sensing imagery: Generating simultaneously high-resolution synthetic spatial–temporal–spectral earth observations. Remote sensing letters, 4(6), 561-569.
Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
Jönsson, P., & Eklundh, L. (2004). TIMESAT—a program for analyzing time-series of satellite sensor data. Computers & Geosciences, 30(8), 833-845.
Kim, J., Kwon Lee, J., & Mu Lee, K. (2016). Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1646-1654).
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4), 541-551.
Lewis, A., Lymburner, L., Purss, M. B., Brooke, B., Evans, B., Ip, A., ... & Oliver, S. (2016). Rapid, high-resolution detection of environmental change over continental scales from satellite data–the Earth Observation Data Cube. International Journal of Digital Earth, 9(1), 106-111.
Li, X., Ling, F., Foody, G. M., Ge, Y., Zhang, Y., & Du, Y. (2017). Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps. Remote Sensing of Environment, 196, 293-311.
Li, H., Wu, X. J., & Kittler, J. (2018). Infrared and Visible Image Fusion using a Deep Learning Framework. In 24th IEEE International Conference on Pattern Recognition (ICPR) (pp. 2705-2710).
Lymburner, L., Botha, E., Hestir, E., Anstee, J., Sagar, S., Dekker, A., & Malthus, T. (2016). Landsat 8: providing continuity and increased precision for measuring multi-decadal time series of total suspended matter. Remote Sensing of Environment, 185, 108-118.
McInerney, D., & Kempeneers, P. (2015). Orfeo toolbox. In Open Source Geospatial Tools. Springer, Cham. (pp. 199-217)
Mittal, A., Moorthy, A. K., & Bovik, A. C. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12), 4695-4708.
Mittal, A., Soundararajan, R., & Bovik, A. C. (2012). Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters, 20(3), 209-212.
Moosavi, V., Talebi, A., Mokhtari, M. H., Shamsi, S. R. F., & Niazi, Y. (2015). A wavelet-artificial intelligence fusion approach (WAIFA) for blending Landsat and MODIS surface temperature. Remote Sensing of Environment, 169, 243-254.
Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10) (pp. 807-814).
Randrianjatovo, R. N., Rakotondraompiana, S., & Rakotoniaina, S. (2014). Estimation of Land Surface Temperature over Reunion Island using the thermal infrared channels of Landsat-8. In 2014 IEEE Canada International Humanitarian Technology Conference-(IHTC) (pp. 1-4).
Rembold, F., Meroni, M., Urbano, F., Royer, A., Atzberger, C., Lemoine, G., & Haesen, D. (2015). Remote sensing time series analysis for crop monitoring with the SPIRITS software: new functionalities and use examples. Frontiers in Environmental Science, 3, 46.
Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91-99).
Rodrigues, A., Marcal, A. R., & Cunha, M. (2011). PhenoSat—A tool for vegetation temporal analysis from satellite image data. In IEEE 6th International Workshop on the Analysis of Multi-temporal Remote Sensing Images (pp. 45-48).
Sakamoto, T., Yokozawa, M., Toritani, H., Shibayama, M., Ishitsuka, N., & Ohno, H. (2005). A crop phenology detection method using time-series MODIS data. Remote sensing of environment, 96(3-4), 366-374.
Schowengerdt, R. A. (1980). Reconstruction of multispatial, multispectral image data using spatial frequency content. Photogrammetric Engineering and Remote Sensing, 46(10), 1325-1334.
Shannon, C. E. (1948). A mathematical theory of communication. Bell system technical journal, 27(3), 379-423.
Shen, H., Wu, P., Liu, Y., Ai, T., Wang, Y., & Liu, X. (2013). A spatial and temporal reflectance fusion model considering sensor observation differences. International journal of remote sensing, 34(12), 4367-4383.
Son, N. T., Chen, C. F., Chen, C. R., Sobue, S. I., Chiang, S. H., Maung, T. H., & Chang, L. Y. (2017). Delineating and predicting changes in rice cropping systems using multi-temporal MODIS data in Myanmar. Journal of Spatial Science, 62(2), 235-259.
Song, H., Liu, Q., Wang, G., Hang, R., & Huang, B. (2018). Spatiotemporal satellite image fusion using deep convolutional neural networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(3), 821-829.
Song, H., & Huang, B. (2012). Spatiotemporal satellite image fusion through one-pair image learning. IEEE Transactions on Geoscience and Remote Sensing, 51(4), 1883-1896.
Stone, H. S., Orchard, M. T., Chang, E. C., & Martucci, S. A. (2001). A fast direct Fourier-based algorithm for subpixel registration of images. IEEE Transactions on geoscience and remote sensing, 39(10), 2235-2243.
Storey, J., Roy, D. P., Masek, J., Gascon, F., Dwyer, J., & Choate, M. (2016). A note on the temporary misregistration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi-Spectral Instrument (MSI) imagery. Remote Sensing of Environment, 186, 121-122.
Sugumaran, R., Hegeman, J. W., Sardeshmukh, V. B., & Armstrong, M. P. (2015). Processing remote-sensing data in cloud computing environments. In Remotely Sensed Data Characterization, Classification, and Accuracies (pp. 587-596). CRC Press.
Svoboda, P., Hradis, M., Barina, D., & Zemcik, P. (2016). Compression artifacts removal using convolutional neural networks. arXiv preprint arXiv:1605.00366.
Teo, T. A., Shih, T. Y., & Chen, B. (2017). Automatic Georeferencing Framework for Time Series Formosat-2 Satellite Imagery using Open Source Software. In Proceedings of the 38th Asian Conference on Remote Sensing.
Verbesselt, J., Hyndman, R., Newnham, G., & Culvenor, D. (2010). Detecting trend and seasonal changes in satellite image time series. Remote sensing of Environment, 114(1), 106-115.
Wagner, W. (2015). Big data infrastructures for processing Sentinel data. In Photogrammetric Week '15 (pp. 93-104).
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.
Wang, Q., Zhang, Y., Onojeghuo, A. O., Zhu, X., & Atkinson, P. M. (2017). Enhancing spatio-temporal fusion of MODIS and Landsat data by incorporating 250 m MODIS data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(9), 4116-4123.
Xue, J., Leung, Y., & Fung, T. (2017). A Bayesian data fusion approach to spatio-temporal fusion of remotely sensed images. Remote Sensing, 9(12), 1310.
Zeng, Z., Estes, L., Ziegler, A. D., Chen, A., Searchinger, T., Hua, F., ... & Wood, E. F. (2018). Highland cropland expansion and forest loss in Southeast Asia in the twenty-first century. Nature Geoscience, 11(8), 556.
Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7), 3142-3155.
Zhu, X., Chen, J., Gao, F., Chen, X., & Masek, J. G. (2010). An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sensing of Environment, 114(11), 2610-2623.
Zhu, X., Cai, F., Tian, J., & Williams, T. (2018). Spatiotemporal fusion of multisource remote sensing data: literature survey, taxonomy, principles, applications, and future directions. Remote Sensing, 10(4), 527.
Zhu, X., Helmer, E. H., Gao, F., Liu, D., Chen, J., & Lefsky, M. A. (2016). A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sensing of Environment, 172, 165-177.
Zhukov, B., Oertel, D., Lanzl, F., & Reinhackel, G. (1999). Unmixing-based multisensor multiresolution image fusion. IEEE Transactions on Geoscience and Remote Sensing, 37(3), 1212-1226.