臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)
Detailed Record

Author: 田振嘉 (Chen-Chia Tien)
Title (Chinese): 顯著性檢測於內視鏡即時影像之研究
Title (English): Salient Object Detection in Real-time Endoscopy
Advisor: 林詠章 (Iuon-Chang Lin)
Committee members: 林家禎 (Chia-Chen Lin), 鄭辰仰 (Chen-Yang Cheng)
Oral defense date: 2021-09-24
Degree: Master's
Institution: 國立中興大學 (National Chung Hsing University)
Department: 資訊管理學系所 (Department of Management Information Systems)
Discipline: Computer Science
Field: General Computer Science
Document type: Academic thesis
Year of publication: 2023
Academic year of graduation: 111 (ROC calendar)
Language: Chinese
Pages: 32
Keywords (Chinese): 顯著性物體檢測 (salient object detection), 內視鏡 (endoscopy)
Keywords (English): SOD, Salient Object Detection, Endoscopy
Times cited: 0 · Views: 22 · Downloads: 4 · Bookmarked: 0
Abstract:
BASNet (Boundary-Aware Salient Object Detection Network) is an image segmentation architecture that combines a predict-refine architecture with a hybrid loss, designed for highly accurate segmentation. U2-Net is a two-level nested U-structure designed for Salient Object Detection (SOD). The architecture lets the network go deeper and retain high resolution without significantly increasing memory or computational cost. This is achieved through the nested U-structure: at the bottom level, a novel Residual U-block (RSU) extracts intra-stage multi-scale features without degrading feature-map resolution; at the top level, a U-Net-like structure is used in which each stage is filled with an RSU block. Using these two salient object detection methods, we aim to build a real-time decision-support system for gastrointestinal endoscopy, providing physicians with precise assistance during medical procedures and reducing patient risk during examination and surgery.
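BASNet's hybrid loss can be illustrated with a minimal sketch. This is not the thesis's code: it combines only the pixel-level BCE term and the region-level IoU term (BASNet's full loss adds a patch-level SSIM term, omitted here to keep the sketch dependency-free), and all function names are our own.

```python
import numpy as np

def bce_loss(pred, gt, eps=1e-7):
    """Pixel-level binary cross-entropy between a predicted saliency
    map and a binary ground-truth mask (both valued in [0, 1])."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(gt * np.log(pred) + (1.0 - gt) * np.log(1.0 - pred)))

def iou_loss(pred, gt, eps=1e-7):
    """Region-level soft IoU loss: 1 - intersection / union."""
    inter = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def hybrid_loss(pred, gt, w_bce=1.0, w_iou=1.0):
    """Weighted sum of the pixel-level and region-level terms."""
    return w_bce * bce_loss(pred, gt) + w_iou * iou_loss(pred, gt)

# Toy example: a prediction that agrees with the mask scores a much
# lower hybrid loss than its inverse.
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1.0
good = np.where(gt > 0, 0.9, 0.1)
bad = 1.0 - good
print(hybrid_loss(good, gt), hybrid_loss(bad, gt))
```

Combining a pixel-level and a region-level term is what pushes such networks toward sharp object boundaries rather than blurry probability maps.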
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
List of Tables v
List of Figures vi
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 1
1.3 Thesis Organization 1
Chapter 2 Literature Review 3
2.1 Salient Object Detection (SOD) 3
2.1.1 Traditional Methods for Salient Object Detection 3
2.1.2 Deep-Learning Methods for Salient Object Detection 4
2.2 Open Problems in Salient Object Detection 11
2.2.1 Problems in Semantic-Segmentation-Based Salient Object Detection 11
2.2.2 Salient Object Detection in Images 11
2.2.3 Limitations of Public Datasets 11
2.3 Datasets 12
2.3.1 Application Scenarios for Saliency Detection 12
2.3.2 Common Datasets 12
2.4 Model Evaluation Metrics [49] 14
2.4.1 MAE (Mean Absolute Error) 14
2.4.2 PR Curve 14
2.4.3 F-measure 14
2.4.4 S-measure 14
Chapter 3 Research Architecture and Methods 16
3.1 Dataset 16
3.2 Research Environment 17
3.2.1 Cloud Platform 17
3.2.2 Azure Machine Learning Service 18
3.3 Algorithms 18
3.3.1 U2-Net 18
3.3.2 BASNet 20
3.4 Model Experiment Architecture 21
Chapter 4 Results 24
4.1 Model Training Results 24
4.2 Model Performance Validation 25
Chapter 5 Conclusions and Future Work 27
References 30
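As a companion to the evaluation metrics named in Section 2.4, the following is a minimal sketch of MAE and the thresholded F-measure as they are commonly defined in the SOD benchmark literature [49]; the beta^2 = 0.3 weighting is the conventional choice there, and the function names are ours, not the thesis's.

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a saliency map and the ground-truth
    mask, both valued in [0, 1] (Sec. 2.4.1)."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, threshold=0.5, beta2=0.3, eps=1e-7):
    """F-measure of the map binarized at a fixed threshold (Sec. 2.4.3);
    beta2 = 0.3 weights precision above recall, as is conventional in
    SOD benchmarks."""
    binary = (pred >= threshold).astype(float)
    tp = np.sum(binary * gt)
    precision = tp / (np.sum(binary) + eps)
    recall = tp / (np.sum(gt) + eps)
    return float((1.0 + beta2) * precision * recall
                 / (beta2 * precision + recall + eps))

# Toy check: a perfect prediction scores MAE 0 and F-measure close to 1.
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1.0
print(mae(gt, gt), f_measure(gt, gt))
```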
[1]Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. PAMI, 1998 (11): 1254-1259.
[2]Achanta R, Estrada F, Wils P, et al. Salient region detection and segmentation[C]//International conference on computer vision systems. Springer, Berlin, Heidelberg, 2008: 66-75.
[3]Vidal R, Ma Y, Sastry S. Generalized principal component analysis (GPCA)[J]. IEEE transactions on pattern analysis and machine intelligence, 2005, 27(12): 1945-1959.
[4]Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection[J]. 2009.
[5]Cheng M M, Mitra N J, Huang X, et al. Global contrast based salient region detection[J]. PAMI, 2015, 37(3): 569-582.
[6]Wang M, Konrad J, Ishwar P, et al. Image saliency: From intrinsic to extrinsic context[C]. CVPR 2011. IEEE, 2011: 417-424.
[7]Wang L, Lu H, Ruan X, et al. Deep networks for saliency detection via local estimation and global search[C]. CVPR, 2015: 3183-3192.
[8]Zhao R, Ouyang W, Li H, et al. Saliency detection by multi-context deep learning[C]. CVPR, 2015: 1265-1274.
[9]J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. CVPR, 2015: 3431-3440.
[10]Borji A, Cheng M M, Hou Q, et al. Salient object detection: A survey[J]. arXiv preprint arXiv:1411.5878, 2014.
[11]Li G, Yu Y. Visual saliency based on multiscale deep features[C]. CVPR, 2015: 5455-5463.
[12]Liu N, Han J. Dhsnet: Deep hierarchical saliency network for salient object detection[C]. CVPR, 2016: 678-686.
[13]Chen T, Lin L, Liu L, et al. Disc: Deep image saliency computing via progressive representation learning[J]. IEEE transactions on neural networks and learning systems, 2016, 27(6): 1135-1149.
[14]Lee G, Tai Y W, Kim J. Deep saliency with encoded low level distance map and high level features[C]. CVPR, 2016: 660-668.
[15]Li Z, Lang C, Chen Y, et al. Deep Reasoning with Multi-scale Context for Salient Object Detection[J]. CVPR, 2019.
[16]Jonathan Huang, Vivek Rathod, Chen Sun, et al. Speed/accuracy trade-offs for modern convolutional object detectors. CVPR, 2017.
[17]K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[18]K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CVPR, pages 770–778, 2015.
[19]G. Li and Y. Yu. Deep contrast learning for salient object detection. CVPR, pages 478–487, 2016.
[20]L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan. Saliency detection with recurrent fully convolutional networks. ECCV, pages 825–841, 2016.
[21]Q. Hou, M.-M. Cheng, X. Hu, A. Borji, Z. Tu, and P. Torr. Deeply supervised salient object detection with short connections. CVPR, pages 5300–5309, 2017.
[22]Z. Luo, A. K. Mishra, A. Achkar, J. A. Eichel, S. Li, and P.-M. Jodoin. Non-local deep features for salient object detection. CVPR, pages 6593–6601, 2017.
[23]P. Zhang, D. Wang, H. Lu, H. Wang, and X. Ruan. Amulet: Aggregating multi-level convolutional features for salient object detection. ICCV, pages 202–211, 2017.
[24]S. Chen, X. Tan, B. Wang, and X. Hu. Reverse attention for salient object detection. ECCV, pages 236–252, 2018.
[25]L. Zhang, J. Dai, H. Lu, Y. He, and G. Wang. A bidirectional message passing model for salient object detection. CVPR, pages 1741–1750, 2018.
[26]X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang. Progressive attention guided recurrent network for salient object detection. CVPR, pages 714–722, 2018.
[27]N. Liu, J. Han, and M.-H. Yang. PiCANet: Learning pixelwise contextual attention for saliency detection. CVPR, pages 3089–3098, 2018.
[28]Hu X, Zhu L, Qin J, et al. Recurrently aggregating deep features for salient object detection[C]. AAAI , 2018.
[29]T. Wang, A. Borji, L. Zhang, P. Zhang, and H. Lu. A stagewise refinement model for detecting salient objects in images. ICCV, pages 4019–4028, 2017.
[30]T. Wang, L. Zhang, S. Wang, H. Lu, G. Yang, X. Ruan, and A. Borji. Detect globally, refine locally: A novel approach to saliency detection. CVPR, pages 3127–3135, 2018.
[31]Deng Z, Hu X, Zhu L, et al. R3Net: Recurrent residual refinement network for saliency detection[C]. IJCAI, 2018: 684-690.
[32]Huang G, Liu Z, van der Maaten L, Weinberger K Q. Densely connected convolutional networks[C]. CVPR, 2017.
[33]Chen S, Wang B, Tan X, et al. Embedding Attention and Residual Network for Accurate Salient Object Detection[J]. IEEE transactions on cybernetics, 2018.
[34]Yunzhi Zhuge, Yu Zeng, Huchuan Lu. Deep Embedding Features for Salient Object Detection[C]. AAAI, 2019.
[35]Wu H, Zheng S, Zhang J, et al. Fast end-to-end trainable guided filter[C]. CVPR, 2018: 1838-1847.
[36]Zhang J, Dai Y, Porikli F. Deep salient object detection by integrating multi-level cues[C]. WACV, 2017: 1-10.
[37]Zhang P, Liu W, Lu H, et al. Salient Object Detection with Lossless Feature Reflection and Weighted Structural Loss[J].TIP, 2019.
[38]Su J, Li J, Xia C, et al. Selectivity or Invariance: Boundary-aware Salient Object Detection[C]. CVPR, 2019.
[39]Zhang X, Zhou X, Lin M, et al. Shufflenet: An extremely efficient convolutional neural network for mobile devices[C]. CVPR, 2018: 6848-6856.
[40]Y. Chen, M. Rohrbach, Z. Yan, S. Yan, J. Feng, and Y. Kalantidis. Graph-based global reasoning networks. arXiv preprint arXiv:1811.12814, 2018.
[41]X. Wang and A. Gupta. Videos as space-time region graphs. arXiv preprint arXiv:1806.01810, 2018.
[42]T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear cnn models for fine-grained visual recognition. CVPR, pages 1449–1457, 2015.
[43]Li G, Xie Y, Lin L, et al. Instance-level salient object segmentation[C]. CVPR, 2017: 2386-2395.
[44]DengPing Fan, MingMing Cheng, JiangJiang Liu, et al. Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground, ECCV, 2018.
[45]Fan R, Hou Q, Cheng M M, et al. S4Net: Single Stage Salient-Instance Segmentation[J]. arXiv preprint arXiv:1711.07618, 2017.
[46]Wang L, Wang L, Lu H, et al. Salient object detection with recurrent fully convolutional networks[J]. PAMI, 2018.
[47]Li X, Zhao L, Wei L, et al. Deepsaliency: Multi-task deep neural network model for salient object detection[J].TIP, 2016, 25(8): 3919-3930.
[48]Sen Jia, Neil D. B. Bruce. Richer and Deeper Supervision Network for Salient Object Detection. arXiv preprint, 2019.
[49]A. Borji, M.-M. Cheng, H. Jiang, and J. Li. Salient object detection: A benchmark. IEEE TIP, 24(12):5706–5722, 2015.
[50]DengPing Fan, MingMing Cheng, YunLiu, et al. Structure-measure: A new way to evaluate foreground maps[C]. IEEE ICCV, 2017.
[51]R. Margolin, L. Zelnik-Manor, and A. Tal. How to evaluate foreground maps? In IEEE CVPR, pages 248–255, 2014.