
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Graduate Student: 謝孟財 (Mon-Chai Hsieh)
Thesis Title: 利用機器學習方法進行東沙環礁水深反演之比較與分析
Thesis Title (English): Comparison of Machine Learning Methods on Satellite-Derived Bathymetry at Dongsha Atoll
Advisor: 任玄 (Hsuan Ren)
Degree: Master's
Institution: National Central University (國立中央大學)
Department: Master Program in Remote Sensing Science and Technology
Discipline: Natural Sciences
Field: Other Natural Sciences
Document Type: Academic thesis
Year of Publication: 2023
Graduation Academic Year: 112 (2023-2024)
Language: English
Pages: 61
Keywords (Chinese): 水深估計; 特徵重要性; 類神經網路; 相鄰像素多層感知機; 卷積神經網路
Keywords (English): Satellite-derived bathymetry; SDB; Feature importance; Neural Network; Adjacent-Pixel Multilayer Perceptron; Convolutional Neural Network
Bathymetric maps are crucial for various applications, such as ocean-related research and navigation safety, but retrieving accurate water depths has always been a challenging task. Traditionally, water depth is measured by shipborne sonar systems; however, shallow waters are difficult for ships to access, and this method is also constrained by its narrow swath. Advanced airborne LiDAR systems offer a wider swath than sonar and can survey large shallow-water areas, but they come at high cost and are subject to air-traffic restrictions. Recently, the spaceborne LiDAR sensor onboard ICESat-2 has also provided water depth measurements, but its spatial resolution is relatively low. Therefore, using optical satellite imagery to derive water depth has become a promising alternative: satellite images offer periodic revisits and wide coverage at relatively low cost, overcoming the limitations of the other methods.
Deriving water depth from satellite imagery is not a straightforward task, because depth interacts with factors such as water quality and seafloor type in a complicated nonlinear system. For problems of this complexity, machine learning (ML) based models have demonstrated effective capabilities. In this study, three models, a Neural Network (NN), an Adjacent-Pixel Multilayer Perceptron (APMLP), and a Convolutional Neural Network (CNN), were adopted to estimate water depth at Dongsha Atoll. Two datasets were built from Sentinel-2 and PlanetScope satellite imagery, each paired with ground truth obtained from LiDAR measurements. The three models were trained on each dataset separately, and their results were analyzed. Additionally, we investigated the impact of the amount of training data and the number of hidden layers on model performance.
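The APMLP idea described above, feeding each pixel's spectral values together with those of its neighbors into a multilayer perceptron, can be illustrated with a minimal sketch. Everything here (the array shapes, the 3×3 window, the synthetic image and depth labels, and the tiny one-hidden-layer network) is an illustrative assumption, not the thesis's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(bands, size=3):
    """Flatten the size x size neighborhood of each interior pixel into one feature row.

    bands: (H, W, B) array of per-band reflectance values.
    Returns an (N, size*size*B) feature matrix, one row per interior pixel.
    """
    h, w, b = bands.shape
    r = size // 2
    rows = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            rows.append(bands[i - r:i + r + 1, j - r:j + r + 1, :].ravel())
    return np.asarray(rows)

# Synthetic stand-in for a small 4-band image and LiDAR depth labels.
H, W, B = 12, 12, 4
image = rng.random((H, W, B))
true_w = rng.normal(size=(3 * 3 * B,))
X = extract_patches(image)                        # (100, 36): 10x10 interior pixels
y = X @ true_w + 0.01 * rng.normal(size=len(X))   # pseudo "depths" for the demo

# One-hidden-layer MLP trained by plain gradient descent on mean squared error.
n_hidden = 16
W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1));          b2 = np.zeros(1)
lr = 1e-3
losses = []
for _ in range(500):
    h1 = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (h1 @ W2 + b2).ravel()        # predicted depth per pixel
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the MSE gradient through both layers.
    g_pred = 2 * err[:, None] / len(y)
    g_W2 = h1.T @ g_pred
    g_b2 = g_pred.sum(0)
    g_h1 = (g_pred @ W2.T) * (1 - h1 ** 2)
    g_W1 = X.T @ g_h1
    g_b1 = g_h1.sum(0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1
```

The design point is that each training sample carries 3×3×4 = 36 spectral values rather than a single pixel's 4, which is what lets the network exploit spatial context around each pixel.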
The experimental results showed that the NN had the largest errors among the three models, while the APMLP performed best when configured with multiple hidden layers (MAE = 0.78 m; RMSE = 1.57 m). Furthermore, this study examined feature importance to assess the influence of each spectral band on the trained models. Regardless of the satellite imagery used, all models identified the green band as the most important feature for depth retrieval. This behavior is consistent with the optical properties of the atmosphere and shallow seawater: although blue light penetrates water most strongly, it is easily scattered by the atmosphere, so in shallow waters the green band is the most effective at reaching the bottom. This consistency supports the reliability of the models' estimations.
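A feature-importance analysis like the one above can be sketched with permutation importance, a common model-agnostic technique: shuffle one band's values and measure how much the error grows. The thesis's exact method is not spelled out in this abstract, so the toy linear "model", the band ordering, and the synthetic data below are assumptions for illustration only; the MAE and RMSE helpers match the evaluation indices reported in the results.

```python
import numpy as np

rng = np.random.default_rng(1)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def permutation_importance(predict, X, y, n_repeats=10, rng=rng):
    """Average increase in RMSE when one column (band) is shuffled.

    A larger score means the model relies more on that band.
    """
    base = rmse(y, predict(X))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += rmse(y, predict(Xp)) - base
    return scores / n_repeats

# Toy setup: "depth" depends mostly on the green band (column 1 here, by assumption).
X = rng.random((200, 4))                       # columns: blue, green, red, NIR
y = 5.0 * X[:, 1] + 0.5 * X[:, 0]
predict = lambda X: 5.0 * X[:, 1] + 0.5 * X[:, 0]  # stand-in for a trained model

scores = permutation_importance(predict, X, y)
# scores[1] (green) should dominate, mirroring the thesis's finding.
```

Because the technique only needs a `predict` callable, the same function can score any of the three trained models without access to their internals.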
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1. Introduction
1.1 Overview of Bathymetry
1.2 Literature Review
1.3 Objective
Chapter 2. Methodology
2.1 Study Area
2.2 Data and Pre-processing
2.2.1 Satellite Imagery
2.2.2 Ground Truth
2.3 SDB Models
2.3.1 Neural Network (NN)
2.3.2 Adjacent-Pixel MultiLayer Perceptron (APMLP)
2.3.3 Convolutional Neural Network (CNN)
2.4 Training Strategy
2.5 Feature Importance
2.6 Workflow
Chapter 3. Experiment Results & Analysis
3.1 Evaluation Indices
3.2 Results & Analysis
3.2.1 Model Comparison
3.2.2 The Impact of the Amount of Training Data
3.2.3 The Impact of the Number of Layers in the Model
3.2.4 The Feature Importance of Each Band
Chapter 4. Conclusion & Future Works
4.1 Conclusion
4.2 Future Works
References
Electronic Full Text (publicly available online from 2025-12-01)