National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 吳易璉
Author (English): Yi-Lian Wu
Title (Chinese): 結合遞增式及輻狀基底函數類神經網路於超音波影像中前列腺之切割
Title (English): Integrated the Validation Incremental Neural Networks and Radial-Basis Function Neural Networks for Segmenting Prostate
Advisor: 張傳育
Degree: Master's
Institution: National Yunlin University of Science and Technology
Department: Graduate School of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2009
Academic year of graduation: 97 (ROC calendar)
Language: Chinese
Pages: 88
Keywords (Chinese): 輻狀基底函數類神經網路、主動式輪廓模型、經直腸超音波影像
Keywords (English): Active Contour Model, TRUS images, RBFNN
Record statistics:
  • Cited by: 0
  • Views: 253
  • Downloads: 0
  • Bookmarked: 1
When medical images are used to examine the prostate for lesions, a physician must first manually outline the prostate region. This approach is inefficient and not reproducible. Among the many contour-extraction methods, the best known is the active contour model (ACM). Although the ACM has been applied successfully to contour detection, its drawback is that an initial contour must be supplied manually during segmentation, so it is not a fully automatic system.
To solve this problem, this thesis proposes a prostate classifier that roughly locates the prostate region and combines it with the ACM to segment the prostate in transrectal ultrasound images. The prostate classifier is built from two parallel incremental networks and one radial-basis-function neural network. After the classifier delineates the prostate region, the ACM refines it to yield a smoother contour and the final segmentation result. This is clinically useful and also eliminates the ACM's need for a manually supplied initial contour.
Experimental results show that the proposed segmentation method achieves an average accuracy of 94.79%, higher than that of the other methods compared, demonstrating that the proposed prostate segmentation approach is effective and feasible.
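The active contour model invoked above is the snake of Kass et al. [7]. For reference, the standard energy functional that the refinement stage minimizes over a parametric contour v(s) (as given in [7]; the exact external term used in the thesis may differ) is:

```latex
E_{\text{snake}}
  = \int_{0}^{1}
      \tfrac{1}{2}\Big(\alpha\,\lvert v'(s)\rvert^{2}
                     + \beta\,\lvert v''(s)\rvert^{2}\Big)
      \;+\; E_{\text{image}}\big(v(s)\big)\, ds,
\qquad
E_{\text{image}} = -\,\lvert \nabla I\big(v(s)\big)\rvert^{2}
```

The internal term penalizes tension (α) and bending (β), while the image term attracts the contour to strong intensity edges; these contour parameters are the ones whose settings are explored in Section 5.10.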
Transrectal ultrasonography (TRUS) is widely used to diagnose prostate disease. Before a physician can diagnose prostate lesions, the contour of the prostate in TRUS images must be outlined manually. However, manual segmentation is time-consuming and inefficient, so automatic segmentation of the prostate in TRUS images is necessary. Among segmentation methods, the active contour model (ACM) is a successful contour-detection method, but its shortcoming is that the initial contour must be determined manually. This thesis therefore proposes an automatic neural-network-based prostate segmentation method for TRUS images that omits the complicated step of determining the initial contour. The proposed system combines the Validation Incremental Neural Network and a Radial-Basis Function Neural Network for prostate segmentation. Experimental results show that the proposed method achieves higher accuracy than the ACM.
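As a rough illustration of the RBFNN component of the classifier, the sketch below labels feature vectors with a Gaussian radial-basis layer. Everything here is an illustrative stand-in, not the thesis's implementation: the toy 2-D features, fixed centers, shared width, and closed-form least-squares output weights are assumptions; the thesis instead feeds in texture features (NGLDM and local Fourier coefficients) and trains the centers and weights iteratively with learning rates (Sections 5.7.2 and 5.7.3).

```python
import numpy as np

def rbf_design_matrix(X, centers, sigma):
    """Gaussian RBF activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, centers, sigma):
    """Fit output-layer weights by regularized least squares (a simplification
    of the gradient-based training with learning rates used in the thesis)."""
    phi = rbf_design_matrix(X, centers, sigma)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def predict_rbf(X, centers, sigma, w):
    """Network output for each feature vector; threshold at 0.5 for a label."""
    return rbf_design_matrix(X, centers, sigma) @ w

# Toy data: two Gaussian clusters standing in for "prostate" vs "background"
# feature vectors.  Real inputs would be texture features per image patch.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=-1.0, scale=0.3, size=(50, 2)),  # background-like patches
    rng.normal(loc=+1.0, scale=0.3, size=(50, 2)),  # prostate-like patches
])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Hidden-layer centers placed at the (known) cluster means for illustration.
centers = np.array([[-1.0, -1.0], [1.0, 1.0]])
w = fit_rbf(X, y, centers, sigma=1.0)
pred = (predict_rbf(X, centers, sigma=1.0, w=w) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

On this well-separated toy data the classifier recovers the labels almost perfectly; the thesis's reported 94.79% concerns the full pipeline on TRUS images, which is a much harder problem.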
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
Table of Contents v
List of Figures ix
Chapter 1 1
1.1 Motivation 1
1.2 Introduction to Ultrasound Imaging 2
1.3 Related Work 3
1.3.1 Edge Detection 3
1.3.2 Thresholding 4
1.3.3 Region Growing 4
1.3.4 Morphological Watershed 4
1.4 Research Method 5
1.5 Thesis Organization 7
Chapter 2 8
2.1 Average (Smoothing) Filter 8
2.2 Power-Law Transformations 10
2.3 Mask-Based Labeling 12
2.4 Edge Detection 13
2.5 Region Filling 14
2.6 Active Contour Model (ACM) 16
Chapter 3 17
3.1 Validation Incremental Neural Networks (VINN) 18
3.2 Radial Basis Function Neural Networks (RBFNN) 21
3.3 Prostate Classifier 24
Chapter 4 26
4.1 Preprocessing 26
4.1.1 Image Enhancement 27
4.1.2 Localization of Candidate Prostate Regions 27
4.2 Pattern Extraction 32
4.2.1 Training Samples 32
4.2.2 Test Samples 33
4.3 Feature Extraction 34
4.3.1 Feature Selection 34
4.3.2 Neighboring Gray Level Dependence Matrix (NGLDM) 35
4.3.3 Fourier Features Based on Local Fourier Coefficients 37
4.3.4 Feature Vectors 38
4.4 Prostate Classifier 39
4.4.1 Training Procedure of the Prostate Classifier 39
4.4.2 Testing Procedure of the Prostate Classifier 41
4.5 Prostate Region Refinement 42
4.6 Edge Detection 44
4.7 Active Contour Model (ACM) 46
Chapter 5 47
5.1 Image Data and Experimental Environment 47
5.2 Results of Each Experimental Step 47
5.2.1 Preprocessing 47
5.2.1.1 Image Enhancement 48
5.2.1.2 Localization of Candidate Prostate Regions 49
5.2.2 Classification by the Prostate Classifier 49
5.2.3 Prostate Region Refinement 50
5.2.4 Edge Detection 51
5.2.5 Active Contour Model 51
5.2.6 Comparison of Segmentation Results with Hand-Drawn Contours 52
5.3 Training and Test Samples for the Neural Networks 53
5.4 Accuracy Evaluation Criteria for Prostate Segmentation 53
5.5 Discussion of Pattern Size 54
5.6 Discussion of VINN Parameters 55
5.6.1 Threshold 55
5.6.2 VINN Starting Points 56
5.7 Discussion of RBFNN Parameters 57
5.7.1 Number of Hidden-Layer Centers 57
5.7.2 Weight Learning Rate 58
5.7.3 Center Learning Rate 59
5.8 Discussion of Different Training Images 60
5.9 Discussion of Overlap Rate 60
5.10 Discussion of Active Contour Model Parameters 61
5.11 Accuracy of Prostate Segmentation 64
5.12 Performance of VINNs 65
5.13 Evaluation of Network Segmentation Performance 66
5.14 Performance of ACM 67
5.15 Comparison with ACM 67
5.16 Method Comparison 70
5.16.1 Comparison with Zhang's Method 70
Chapter 6 72
Chapter 7 73
Integrating the Validation Incremental Neural Network and Radial-Basis Function Neural Network for Segmenting Prostate in Ultrasound Images 76
[1] D.E. Maroulis, M.A. Savelonas, D.K. Iakovidis, S.A. Karkanis, and N. Dimitropoulos, "Variable Background Active Contour Model for Computer-Aided Delineation of Nodules in Thyroid Ultrasound Images," IEEE Transactions on Information Technology in Biomedicine, vol. 11, no. 5, 2007, pp. 537-543.
[2] M.N. Kurnaz, Z. Dokur, and T. Ölmez, "An Incremental Neural Network for Tissue Segmentation in Ultrasound Images," Computer Methods and Programs in Biomedicine, 2007, pp. 187-195.
[3] F. Sahba, H.R. Tizhoosh, and M.M. Salama, "Application of Reinforcement Learning for Segmentation of Transrectal Ultrasound Images," BMC Medical Imaging, 2008.
[4] Y. Zhang, R. Sankar, and W. Qian, "Boundary Delineation in Transrectal Ultrasound Image for Prostate Cancer," Computers in Biology and Medicine, 2007, pp. 1591-1599.
[5] C.Y. Chang, M.F. Tsai, and S.J. Chen, "Classification of the Thyroid Nodules Using Support Vector Machines," Proc. of IEEE World Congress on Computational Intelligence, 2008, pp. 3093-3098.
[6] H. Yu, M. Li, H.J. Zhang, and J. Feng, "Color Texture Moments for Content-Based Image Retrieval," Proc. of International Conference on Image Processing, 2002, pp. 24-28.
[7] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active Contour Models," International Journal of Computer Vision, 1987, pp. 321-331.
[8] D.-R. Chen, R.-F. Chang, W.-J. Wu, W.K. Moon, and W.-L. Wu, "3-D Breast Ultrasound Segmentation Using Active Contour Model," Ultrasound in Medicine and Biology, vol. 29, no. 7, July 2003, pp. 1017-1026.
[9] N. Betrouni, M. Vermandel, D. Pasquier, S. Maouche, and J. Rousseau, "Segmentation of Abdominal Ultrasound Images of the Prostate Using a Priori Information and an Adapted Noise Filter," Computerized Medical Imaging and Graphics, 2005, pp. 43-51.
[10] M.N. Kurnaz, Z. Dokur, and T. Ölmez, "Segmentation of Remote-Sensing Images by Incremental Neural Network," Pattern Recognition Letters, 2005, pp. 1096-1104.
[11] S.S. Mohamed and M.M. Salama, "Spectral Clustering for TRUS Images," BioMedical Engineering OnLine, 2007.
[12] F. Zhou, J.F. Feng, and Q.Y. Shi, "Texture Feature Based on Local Fourier Transform," Proc. of IEEE International Conference on Image Processing, 2001, pp. 610-613.
[13] Z. Dokur and T. Ölmez, "Tissue Segmentation in Ultrasound Images by Using Genetic Algorithms," Expert Systems with Applications, 2007, pp. 2739-2746.
[14] C. Sun and W.G. Wee, "Neighboring Gray Level Dependence Matrix for Texture Classification," Computer Vision, Graphics, and Image Processing, 1983, pp. 341-352.
[15] S. Tong, D.B. Downey, H.N. Cardinal, and A. Fenster, "A Three-Dimensional Ultrasound Prostate Imaging System," Ultrasound in Medicine and Biology, 1996, pp. 735-746.
[16] C. Baillard, C. Barillot, and P. Bouthemy, "Robust Adaptive Segmentation of 3D Medical Images with Level Sets," Institut national de recherche en informatique et en automatique (INRIA), Le Chesnay Cedex, France, Tech. Rep. 4071, Nov. 2000.
[17] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, 2002.
[18] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd Edition, 1998.
[19] Y.-W. Chen and C.-J. Lin, "Combining SVMs with Various Feature Selection Strategies," in Feature Extraction: Foundations and Applications, Springer, 2006.
[20] S. Theodoridis and K. Koutroumbas, Pattern Recognition, 2nd Edition, Elsevier, 2003.