REFERENCES
[1] National Health Insurance Administration, Ministry of Health and Welfare (衛生福利部中央健康保險署), "Cancer Registry Report" (癌症登記報告), http://www.nhi.gov.tw/.
[2] National Health Insurance Administration, Ministry of Health and Welfare (衛生福利部中央健康保險署), "Statistics on the Top 10 NHI Medical Expenditures for Cancers in 2014" (103年各類癌症健保前10大醫療支出統計), http://www.nhi.gov.tw/.
[3] R. L. Barclay, J. J. Vicari, A. S. Doughty et al., "Colonoscopy withdrawal times and adenoma detection during screening colonoscopy," N Engl J Med, vol. 355, no. 24, pp. 2533-2541, Dec. 2006.
[4] M. F. Kaminski, J. Regula, E. Kraszewska et al., "Quality indicators for colonoscopy and the risk of interval cancer," N Engl J Med, vol. 362, no. 19, pp. 1795-1803, May 2010.
[5] N. N. Baxter, R. Sutradhar, S. S. Forbes et al., "Analysis of administrative data finds endoscopist quality measures associated with postcolonoscopy colorectal cancer," Gastroenterology, vol. 140, no. 1, pp. 65-72, 2011.
[6] H. Brenner, J. Chang-Claude, C. M. Seiler et al., "Interval cancers after negative colonoscopy: population-based case-control study," Gut, vol. 61, no. 11, pp. 1576-1582, 2012.
[7] H. Brenner, J. Chang-Claude, L. Jansen et al., "Role of colonoscopy and polyp characteristics in colorectal cancer after colonoscopic polyp detection: a population-based case-control study," Ann Intern Med, vol. 157, no. 4, pp. 225-232, Aug. 2012.
[8] 邱瀚模, 李宜家, "How to Improve Colonoscopy Quality: Evidence and Guidelines" (如何提升大腸內視鏡品質-實證與指引), 2013, pp. 2, 66.
[9] G. Litjens et al., "A survey on deep learning in medical image analysis," arXiv preprint arXiv:1702.05747, 2017.
[10] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, no. 4, pp. 193-202, 1980.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[13] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," CoRR, abs/1409.0575, 2014.
[14] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker, "Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation," Medical Image Analysis, vol. 36, pp. 61-78, 2017.
[15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception architecture for computer vision," arXiv preprint arXiv:1512.00567, 2015.
[16] C. Shie, C. Chuang, C. Chou, M. Wu, and E. Y. Chang, "Transfer representation learning for medical image analysis," http://infolab.stanford.edu/~echang/HTC_OM_Final.pdf.
[17] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CoRR, abs/1409.4842, 2014.
[18] V. Sze, "Efficient processing of deep neural networks: A tutorial and survey," arXiv:1703.09039, Mar. 2017.
[19] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in ICML, 2010.
[20] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in ICML, 2015.
[21] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.
[22] J. Zhang, W. Li, and P. Ogunbona, "Transfer learning for cross-dataset recognition: A survey."
[23] K. Weiss, T. M. Khoshgoftaar, and D. Wang, "A survey of transfer learning," Journal of Big Data, vol. 3, no. 1, pp. 1-40, 2016.
[24] "Semi-supervised learning," https://en.wikipedia.org/wiki/Semi-supervised_learning.
[25] C. Rosenberg, M. Hebert, and H. Schneiderman, "Semi-supervised self-training of object detection models," in Seventh IEEE Workshop on Applications of Computer Vision, 2005.
[26] X. Zhu, "Semi-supervised learning literature survey," Dept. Comput. Sci., Univ. Wisconsin, Madison, WI, Tech. Rep. 1530, 2005.
[27] A. Telea, "An image inpainting technique based on the fast marching method," Journal of Graphics Tools, vol. 9, no. 1, pp. 23-34, 2004.
[28] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proceedings of SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, K. Akeley, ed., pp. 417-424, Addison-Wesley, 2000.
[29] M. Oliveira, B. Bowen, R. McKenna, and Y.-S. Chang, "Fast digital image inpainting," in Proc. VIIP 2001, pp. 261-266, 2001.
[30] S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell, "Understanding data augmentation for classification: when to warp?" arXiv preprint arXiv:1609.08764, 2016.
[31] Keras image preprocessing documentation, https://keras.io/preprocessing/image/.
[32] E. Chang, "AdaBoost-based cecum recognition system in accordance with Boston Bowel Preparation Scale," 2016.
[33] Y. Bengio, "Practical recommendations for gradient-based training of deep architectures," in Neural Networks: Tricks of the Trade, K.-R. Müller, G. Montavon, and G. B. Orr, eds., Springer, 2013.
[34] C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," arXiv:1602.07261, 2016.
[35] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," arXiv:1703.06870, 2017.
[36] S. Tchoulack, J. M. P. Langlois, and F. Cheriet, "A video stream processor for real-time detection and correction of specular reflections in endoscopic images," in 2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference (NEWCAS-TAISA 2008), pp. 49-52, June 2008.