臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)

Author: 陳羿廷
Author (English): CHEN, YI-TING
Title (Chinese): FA-MobileUNet:改良U-Net架構應用於SAR油汙偵測
Title (English): FA-MobileUNet: An Improved U-Net Architecture for SAR Oil Spill Detection
Advisors: 王榮華, 張麗娜
Advisors (English): WANG, JUNG-HUA; CHANG, LENA
Oral Defense Committee: 劉長遠, 范欽雄, 王榮華, 張麗娜, 卓大靖, 余憲政, 張陽郎
Oral Defense Committee (English): LIU, CHANG-YUAN; FAHN, CHIN-SHYURNG; WANG, JUNG-HUA; CHANG, LENA; JWO, DAH-JING; YU, XIAN-ZHENG; CHANG, YANG-LANG
Oral Defense Date: 2024-07-18
Degree: Doctoral
Institution: 國立臺灣海洋大學 (National Taiwan Ocean University)
Department: 電機工程學系 (Department of Electrical Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2024
Graduation Academic Year: 112
Language: English
Number of Pages: 84
Keywords (Chinese): 油汙, 合成孔徑雷達, U-Net, MKLab, Sentinel-1, 語義分割模型
Keywords (English): oil spills, Synthetic Aperture Radar (SAR), U-Net, MKLab, Sentinel-1, semantic segmentation models
Usage statistics:
  • Times cited: 0
  • Views: 20
  • Downloads: 1
  • Bookmarked: 0
Abstract (Chinese):
Oil pollution is regarded as one of the major threats to marine and coastal environments. Oil slicks on the sea surface suppress the radar backscatter of SAR, producing dark regions in the imagery; owing to this distinctive radar signature, oil spills can be clearly observed in SAR images. However, natural ocean phenomena can produce similar backscatter, and these look-alike dark regions may be falsely detected as oil spills. Moreover, these ocean phenomena appear at a variety of scales in SAR images, so correctly discriminating multi-scale dark regions into oil spills and look-alikes is key to improving oil spill detection performance. In addition, marine SAR images contain multiple targets, such as sea surface, land, ships, oil spills, and look-alikes. Because of the inherent class imbalance, training a classifier for this task is particularly challenging. Addressing these issues requires extracting target features more effectively.
In this study, a lightweight U-Net-based model, Full-Scale Aggregated MobileUNet (FA-MobileUNet), is proposed to improve oil spill detection performance on SAR images. First, the lightweight MobileNetv3 model is used as the backbone of the U-Net encoder for feature extraction. Second, atrous spatial pyramid pooling (ASPP) and a modified convolutional block attention module (CBAM) are added to improve the model's ability to extract multi-scale features and to speed up network computation. Finally, features at different scales from the encoder are aggregated (full-scale aggregation, FA) to further strengthen multi-scale feature extraction. The proposed deep learning model therefore enhances feature extraction and integrates features at different scales to improve the accuracy of oil spill detection.
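
As a concrete illustration of the attention component named above, the sketch below shows a standard CBAM block in PyTorch. The dissertation's modified CBAM is not detailed in this abstract, so only the canonical channel-then-spatial form is shown, and the reduction ratio is an assumed default.

```python
# A sketch of a standard CBAM block (channel attention followed by spatial
# attention). The dissertation uses a *modified* CBAM whose changes are not
# described in the abstract, so only the canonical form is shown here; the
# reduction ratio of 16 is an assumption.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Refine a feature map with channel attention, then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)     # e.g. one encoder feature map
    print(CBAM(64)(feats).shape)             # torch.Size([1, 64, 128, 128])
```
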
Experiments were conducted on the open-source MKLab oil spill dataset, extended with images of oil spill incidents from 2015 to 2022, for a total of 1,239 Sentinel-1 VV-polarized images. The results show that FA-MobileUNet achieves a mean Intersection over Union (mIoU) above 80% for the five marine target classes: sea surface, land, ships, oil spills, and look-alikes. The detection and computational performance of the proposed model were also compared with other semantic segmentation models. FA-MobileUNet reaches IoU values of 75.87% and 72.69% for oil spill and look-alike detection, which are 18.96% and 25.57% higher than the original U-Net model, respectively. The proposed model can therefore classify dark regions in SAR images as oil spills or look-alikes more accurately. Finally, the effectiveness of the method is verified on oil spill incidents around Taiwan.
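
For reference, the per-class IoU and mIoU quoted above can be computed as in the minimal sketch below; the class ordering and the synthetic masks are illustrative and not taken from the dissertation.

```python
# A minimal sketch of per-class IoU and mIoU for the five classes used in the
# dissertation (sea surface, oil spill, look-alike, ship, land). The class
# order and the synthetic masks below are illustrative, not the thesis's data.
import numpy as np

CLASSES = ["sea surface", "oil spill", "look-alike", "ship", "land"]


def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> np.ndarray:
    """IoU_c = |pred == c AND target == c| / |pred == c OR target == c|."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious[c] = intersection / union
    return ious


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.integers(0, 5, size=(256, 256))        # stand-in ground-truth mask
    pred = target.copy()
    noise = rng.random(pred.shape) < 0.1                 # corrupt ~10% of pixels
    pred[noise] = rng.integers(0, 5, size=int(noise.sum()))
    ious = per_class_iou(pred, target, num_classes=len(CLASSES))
    for name, iou in zip(CLASSES, ious):
        print(f"{name:12s} IoU = {iou:.3f}")
    print(f"mIoU = {np.nanmean(ious):.3f}")
```
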

Abstract (English):
Oil spills are a major threat to coastal and marine environments. Because oil slicks dampen the radar backscatter of the sea surface, they appear as dark regions in synthetic aperture radar (SAR) images, and this distinctive signature makes SAR well suited to observing oil spills. However, many marine phenomena produce similar backscatter, and these dark look-alike areas can be misclassified as oil spills. In addition, these phenomena appear at diverse scales in SAR images, so correctly classifying multi-scale dark areas as oil spills or look-alikes is crucial to improving detection performance. Moreover, SAR images of the ocean include multiple targets: sea surface, land, ships, oil spills, and look-alikes. Training a multi-category classifier is particularly challenging because of the inherent class imbalance, so extracting target features more effectively is essential.
In this study, a lightweight U-Net-based model, Full-Scale Aggregated MobileUNet (FA-MobileUNet), was proposed to enhance oil spill detection performance on SAR images. First, a lightweight MobileNetv3 model was used as the backbone of the U-Net encoder for feature extraction. Next, a modified convolutional block attention module (CBAM) and atrous spatial pyramid pooling (ASPP) were added to improve the network's ability to extract multiscale features and to accelerate computation. Finally, full-scale features from the encoder were aggregated to further strengthen multiscale feature extraction. The proposed network improves the accuracy of oil spill detection by enhancing the extraction and integration of features at different scales.
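
The following sketch illustrates, under stated assumptions, two of the encoder-side components described in this paragraph: multi-scale feature taps on a torchvision MobileNetV3-Large backbone for U-Net-style skip connections, and an ASPP block on the deepest feature map. The chosen tap points and atrous rates are assumptions, as the abstract does not specify them.

```python
# A sketch of two encoder-side ideas named above: a MobileNetV3-Large backbone
# tapped at several stages to provide U-Net-style multi-scale skip features,
# and an ASPP block applied to the deepest feature map. The tap points
# ("features.1" ... "features.16") and atrous rates (6, 12, 18) are assumptions;
# the dissertation's exact configuration is not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v3_large
from torchvision.models.feature_extraction import create_feature_extractor


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convs plus image-level pooling."""

    def __init__(self, in_ch: int, out_ch: int = 256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates]
        )
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1, bias=False)
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))


if __name__ == "__main__":
    # Tap the backbone after progressively deeper stages for U-Net-style skips.
    stages = [1, 3, 6, 12, 16]
    return_nodes = {f"features.{i}": f"c{k}" for k, i in enumerate(stages)}
    encoder = create_feature_extractor(mobilenet_v3_large(), return_nodes)

    # Single-channel SAR patches would be replicated to 3 channels here (assumption).
    x = torch.randn(1, 3, 320, 320)
    skips = encoder(x)                 # dict of multi-scale feature maps c0 ... c4
    bottleneck = ASPP(skips["c4"].shape[1])(skips["c4"])
    for name, feat in skips.items():
        print(name, tuple(feat.shape))
    print("ASPP output", tuple(bottleneck.shape))
```
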
The experiments used the open-source MKLab oil spill dataset, extended with oil spill incident images from 2015 to 2022 for a total of 1,239 Sentinel-1 VV-polarized images. Experimental results showed that the mean intersection over union (mIoU) of the proposed model exceeded 80% for the five marine target classes: sea surface, land, ships, look-alikes, and oil spills. Moreover, the IoU of the proposed model reached 75.87% and 72.69% for oil spill and look-alike detection, which is 18.96% and 25.57% higher than the original U-Net model, respectively. The proposed network can therefore more accurately distinguish oil spills from look-alikes within the dark areas of SAR images. Finally, experiments on oil spill incidents around Taiwan verified the effectiveness of the proposed oil spill detection model.
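
As a quick consistency check, and assuming the reported gains over U-Net are absolute percentage-point differences, the baseline IoU values implied by these figures can be recovered as follows:

```python
# Assuming the reported gains over U-Net (18.96 and 25.57) are absolute
# percentage-point differences, the original U-Net IoU values implied by the
# numbers in the abstract work out as follows.
fa_mobileunet_iou = {"oil spill": 75.87, "look-alike": 72.69}
gain_over_unet = {"oil spill": 18.96, "look-alike": 25.57}

for cls, iou in fa_mobileunet_iou.items():
    implied_unet = iou - gain_over_unet[cls]
    print(f"{cls:10s}: FA-MobileUNet {iou:.2f}%  vs.  implied U-Net {implied_unet:.2f}%")
# oil spill : FA-MobileUNet 75.87%  vs.  implied U-Net 56.91%
# look-alike: FA-MobileUNet 72.69%  vs.  implied U-Net 47.12%
```
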

Contents
摘要 I
Abstract II
Contents III
List of Figures VI
List of Tables X
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Literature Review 2
1.3 Research Objectives 6
1.4 Dissertation Organization 6
Chapter 2 Background 7
2.1 Remote Sensing 7
2.1.1 SAR Data 7
2.1.2 Radar Polarization 8
2.1.3 Sentinel-1 Data 10
2.2 Deep Learning 11
2.2.1 Convolutional Neural Network 12
2.2.2 Pre-trained Backbone Networks 17
2.3 Semantic Segmentation Networks 22
2.3.1 PSPNet 22
2.3.2 LinkNet 23
2.3.3 DeepLabv2 24
2.3.4 DeepLabv3+ 24
2.4 Morphology 25
2.4.1 Dilation 26
2.4.2 Erosion 26
2.4.3 Opening 27
2.4.4 Closing 28
Chapter 3 Datasets and Proposed Oil Spill Detection Method 29
3.1 Oil Spill Datasets 29
3.1.1 MKLab Dataset 29
3.1.2 Extended MKLab Dataset 30
3.2 The Proposed Full-Scale Aggregated MobileUNet Model 33
3.2.1 U-Net 34
3.2.2 MobileNetv3 35
3.2.3 Attention Mechanism 38
3.2.4 Atrous Spatial Pyramid Pooling 43
3.2.5 Full-Scale Aggregation 46
3.3 Loss Function 49
3.4 Evaluation Metric 50
Chapter 4 Experimental Results and Discussion 51
4.1 Experimental Settings 51
4.2 Accuracy Assessment Based on Different Backbone Models 52
4.3 Ablation Study 53
4.4 Semantic Segmentation Networks Comparison 56
4.5 Oil Spill Detection Results Improvement 61
Chapter 5 Application of Oil Spill Detection 68
5.1 Improved Ship Detection Performance 68
5.2 Oil Pollution Incidents 70
5.2.1 Tracking Oil-discharge Ships 72
5.2.2 Oil Pollution Caused by Shipwreck 73
5.2.3 Undersea Oil Pipeline Rupture Incident 75
Chapter 6 Conclusions and Future Work 77
6.1 Conclusions 77
6.2 Future Work 77
References 79
Appendix A. Publications List 83

