臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 吳孟芳 (Wu, Meng-Fang)
Title: 深度學習方法於骨盆X光之骨折偵測 (Deep learning approaches for fracture detection in pelvic X-ray images)
Advisor: 鍾翊方 (Chung, I-Fang)
Oral defense committee: 秦群立 (Qin, Qun-Li), 黃彥華 (Huang, Yan-Hua), 鄭啟桐 (Cheng, Chi-Tung), 鍾翊方 (Chung, I-Fang)
Oral defense date: 2023-01-27
Degree: Master's
Institution: 國立陽明交通大學 (National Yang Ming Chiao Tung University)
Department: 生物醫學資訊研究所 (Institute of Biomedical Informatics)
Discipline: Life Sciences
Academic field: Biochemistry
Thesis type: Academic thesis
Year of publication: 2023
Graduation academic year: 111 (2022-2023)
Language: Chinese
Pages: 48
Keywords (Chinese): Pelvic radiographs (PXRs)、骨折偵測、物件偵測、影像分割、Patch
Keywords (English): Pelvic radiographs (PXRs), fracture detection, object detection, image segmentation, Patch
Clinically, pelvic radiographs (PXRs) are the medical images used to assess all possible fractures of the pelvic region, including hip fractures, pelvic fractures, hip dislocations, and other related injuries. Hip fractures occur most often in middle-aged and elderly patients, and a delayed diagnosis can cause serious sequelae or even death. For these injuries, early detection and treatment in the emergency room can help avoid adverse outcomes. In recent years, deep-learning-based image recognition has become a common strategy for assisting medical image interpretation and has been shown to perform a wide range of classification tasks successfully. The largest obstacle for deep learning algorithms in medical image analysis, however, is obtaining large-scale annotations: accurately annotating every fracture site on a PXR requires substantial clinical experience and labor. Moreover, because fracture regions on PXRs have no clear instance or boundary definition, the bounding boxes in this work were defined by extending the fracture center points marked by physicians into boxes, and the resulting box extents were then confirmed with the physicians as reasonable.
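For illustration only, the sketch below expands a physician-marked fracture center point into a square box clipped to the image borders. The side length and the helper name are assumptions; the abstract does not state the exact extension rule used in this work.

```python
# Hypothetical sketch: expand an annotated fracture center point into a square
# bounding box, clipped to the image border. The box side length is an assumed
# parameter, not the thesis's actual rule.
def center_to_box(cx, cy, img_w, img_h, side=128):
    """Return (x_min, y_min, x_max, y_max) for a square box around (cx, cy)."""
    half = side // 2
    x_min = max(cx - half, 0)
    y_min = max(cy - half, 0)
    x_max = min(cx + half, img_w)
    y_max = min(cy + half, img_h)
    return x_min, y_min, x_max, y_max

# Example: a fracture center marked at (512, 300) on a 1024x1024 radiograph
print(center_to_box(512, 300, 1024, 1024))  # (448, 236, 576, 364)
```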
To detect fracture regions in PXR images, and to locate each fracture site even when multiple fractures are present, this study adopts object detection and image segmentation as strategies for training pelvic fracture detection models that localize fractures in every region, combined with training enhancements such as weight pretraining and data augmentation to improve the detection rate of fracture sites. In addition, we replaced the Intersection over Union (IoU) used in the models with Generalized Intersection over Union (GIoU), which slightly reduced the false positive rate. After a series of model and data-setting adjustments, EfficientDet performed slightly better than RetinaNet, although their precision results showed little difference. Overall, the models achieved above 80% in identifying whether each anatomical region contained a fracture, and prediction performance on multi-site fractures was not much worse than on single-site fractures. For the image segmentation models, the predicted examples show that the models not only localize the fractures accurately but also match most of the mask region, demonstrating that this approach is feasible for fracture detection. Among the different segmentation strategies, U-Net predicted slightly better than the others.
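GIoU here refers to the standard Generalized IoU: the usual IoU minus the fraction of the smallest enclosing box that is not covered by the union, so non-overlapping boxes receive a graded (negative) score instead of a flat zero. A minimal sketch for axis-aligned boxes, included only to make the comparison concrete:

```python
# Minimal sketch of IoU vs. Generalized IoU (GIoU) for axis-aligned boxes
# given as (x_min, y_min, x_max, y_max).
def iou_giou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection area
    inter = max(0, min(ax2, bx2) - max(ax1, bx1)) * max(0, min(ay2, by2) - max(ay1, by1))
    # Union area
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box
    enclose = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    giou = iou - (enclose - union) / enclose
    return iou, giou

print(iou_giou((0, 0, 100, 100), (50, 50, 150, 150)))    # partially overlapping boxes
print(iou_giou((0, 0, 100, 100), (200, 200, 300, 300)))  # disjoint boxes: IoU = 0, GIoU < 0
```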
Finally, for all model predictions we switched to a patch-based evaluation, so that whenever a fracture is predicted within an anatomical region, the physician can be alerted to examine that region. Under this evaluation, a prediction counts as correct when the predicted box and the ground-truth box appear in the same region. The results show that most model predictions fall in the same region as the ground truth, demonstrating that all three models predict fractures in roughly the right locations; however, because the ground-truth boxes do not annotate every fracture site, the apparent false positive rate is inflated. This approach also shows that the models can, to a greater or lesser extent, provide physicians with practical assistance in fracture image interpretation.
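A minimal sketch of this patch-level scoring rule, assuming a hypothetical set of anatomical regions and assigning each box to a region by its center point; the thesis defines its own patches (Section 3.7), which are not reproduced here.

```python
# Hypothetical sketch of the patch-level scoring rule described above: a
# predicted box counts as correct if it falls in the same anatomical region
# (patch) as some ground-truth box. The region layout below is an assumption
# for illustration only.
REGIONS = {
    "left_hip":  (0,   300, 400,  700),   # (x_min, y_min, x_max, y_max)
    "right_hip": (600, 300, 1000, 700),
    "pubis":     (300, 500, 700,  900),
}

def region_of(box):
    """Assign a box to the first region containing its center point."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    for name, (x1, y1, x2, y2) in REGIONS.items():
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            return name
    return None

def patch_hits(pred_boxes, gt_boxes):
    """Count predictions landing in the same patch as some ground-truth box."""
    gt_regions = {region_of(b) for b in gt_boxes}
    return sum(region_of(p) in gt_regions for p in pred_boxes)

preds = [(620, 320, 720, 420)]
gts = [(650, 350, 760, 460)]
print(patch_hits(preds, gts))  # 1: both boxes fall in the assumed "right_hip" patch
```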
Acknowledgements
Chinese abstract
English abstract
Table of contents
List of figures
List of tables
Chapter 1  Research Background
1.1 Pelvic X-ray images (pelvic X-ray, PXR)
1.2 Deep learning
1.3 Deep learning for image classification
1.4 Deep learning for object detection and image segmentation
1.5 Research objectives
1.6 Thesis organization
Chapter 2  Literature Review
2.1 Deep learning for medical image classification
2.2 Applications of fracture region detection
Chapter 3  Materials and Methods
3.1 The PXR dataset
3.2 Research workflow
3.3 Training the PXR fracture detection models
3.3.1 RetinaNet architecture and settings
3.3.2 EfficientDet architecture and settings
3.4 PXR fracture image segmentation model architecture and settings
3.5 Evaluation metrics for fracture location detection
3.6 IoU evaluation
3.7 Patch evaluation
Chapter 4  Results and Discussion
4.1 Performance evaluation of the RetinaNet model
4.2 Comparison of model evaluation strategies
4.3 Comparison of object detection models
4.4 Fracture detection results under different strategies
4.5 Visualization of fracture prediction features
4.6 Comparison with results in the literature
Chapter 5  Conclusion
References