臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 鍾幸芸
Author (English): Jung, Shing-Yun
Thesis Title: 擴增深度神經網路之相關方法以達成以人為本的人工智慧
Thesis Title (English): Augmenting Deep Learning for Human-Centered Artificial Intelligence
Advisors: 孫春在、袁賢銘
Advisors (English): Sun, Chuen-Tsai; Yuan, Shyan-Ming
Committee Members: 張智星、洪炯宗、洪宗貝、陳尚澤、易志偉、胡毓志、袁賢銘、孫春在
Committee Members (English): Jang, Jyh-Shing Roger; Horng, Jorng-Tzong; Hong, Tzung-Pei; Chen, Shang-Tse; Yi, Chih-Wei; Hu, Yuh-Jyh; Yuan, Shyan-Ming; Sun, Chuen-Tsai
Date of Oral Defense: 2022-07-27
Degree: Doctoral
Institution: 國立陽明交通大學 (National Yang Ming Chiao Tung University)
Department: 資訊科學與工程研究所 (Institute of Computer Science and Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110 (2021-2022)
Language: English
Pages: 102
Keywords (Chinese): 以人為本的人工智慧、人機迴圈、深度學習、擴增演算法
Keywords (English): Human-Centered AI, Human-in-the-Loop, Deep Learning, Algorithm Augmentation
Statistics:
  • Cited: 0
  • Views: 164
  • Downloads: 38
  • Bookmarked: 1
Abstract:
Since deep learning's recent renaissance, AI and learning-based algorithms have expanded rapidly in scientific research and real-world applications. However, optimizing learning-based algorithms or innovating neural network architectures alone may be inadequate to improve the usability and adoptability of AI systems. The focus of AI research has been shifting from an algorithm-centered to a human-centered orientation. This thesis aims to augment deep learning models with human-centered AI principles to enhance the usability and adoptability of AI systems. We explore augmenting deep learning models in three distinct application domains: 1) industry, 2) medicine, and 3) academic writing support. In industry, we propose a human-in-the-loop configuration that integrates more human cognitive capacity into the model training process. We then transform fully connected layers into our proposed equivalent convolution layers, liberating users from fixed image input sizes. This transformation not only increases the level of human control over the deep learning-based automated optical inspection (AOI) system but also maintains the automation level of defect inspection. In the medical field, where edge devices are considered more privacy-friendly for patients, we propose a feature engineering process that extracts specialized lung sound features for a depthwise separable convolutional neural network (DS-CNN), classifying lung sounds correctly while reducing the model's computational cost on edge devices. In academic writing support, we reframe the fine-tuning process of sequence-to-sequence models for more author-centered usability in generating citation texts. Lastly, we propose a new taxonomy of the human-centered AI landscape and identify directions for future research.
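The input-size flexibility claimed above comes from re-expressing a fully connected classification head as an equivalent convolution, so the whole network becomes convolutional. The following PyTorch sketch illustrates only that general equivalence; the channel counts, feature-map size, and the fc_to_conv helper are illustrative assumptions, not the thesis's actual AOI implementation.

import torch
import torch.nn as nn

def fc_to_conv(fc: nn.Linear, in_channels: int, h: int, w: int) -> nn.Conv2d:
    # Re-express a Linear layer that consumed a flattened (in_channels, h, w)
    # feature map as a Conv2d whose kernel spans that entire map.
    conv = nn.Conv2d(in_channels, fc.out_features, kernel_size=(h, w))
    with torch.no_grad():
        conv.weight.copy_(fc.weight.view(fc.out_features, in_channels, h, w))
        conv.bias.copy_(fc.bias)
    return conv

# Equivalence check on the fixed feature-map size the FC head was built for.
features = torch.randn(1, 64, 7, 7)     # hypothetical last conv feature map
fc_head = nn.Linear(64 * 7 * 7, 2)      # hypothetical defect / no-defect head
conv_head = fc_to_conv(fc_head, in_channels=64, h=7, w=7)

assert torch.allclose(fc_head(features.flatten(1)),
                      conv_head(features).flatten(1), atol=1e-5)

# A larger input no longer causes a size mismatch: the convolutional head
# returns a spatial grid of class scores instead of a single prediction.
larger = torch.randn(1, 64, 15, 15)
print(conv_head(larger).shape)          # torch.Size([1, 2, 9, 9])

Because the transformed head slides over larger feature maps, inspection images of varied sizes yield dense score maps while the rest of the automated pipeline stays unchanged, which matches the flexibility the abstract describes.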
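The edge-device argument in the medical work rests on depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter plus a 1x1 pointwise channel mixer. A minimal PyTorch sketch of the parameter saving follows; the channel counts are arbitrary placeholders, not values from the thesis's DS-CNN.

import torch.nn as nn

def separable_conv(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    # Depthwise (groups=in_ch) spatial filtering, then pointwise channel mixing.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )

standard = nn.Conv2d(64, 128, 3, padding=1)
separable = separable_conv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))   # 73856 vs. 8960, roughly 8x fewer

The multiply-accumulate count shrinks by a similar factor, which is the kind of cost reduction that makes on-device lung sound classification tractable.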
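For citation text generation, author-centered control is typically obtained by prepending a control code to the source sequence before fine-tuning a sequence-to-sequence model. The sketch below shows that general pattern with Hugging Face's BART; the <intent:...> codes, the model choice, and the source format are assumptions for illustration, not the thesis's exact setup.

from transformers import BartTokenizer, BartForConditionalGeneration

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Register hypothetical intent codes as single tokens and resize the
# embedding table; fine-tuning would teach the model to condition on them.
tok.add_tokens(["<intent:background>", "<intent:method>", "<intent:result>"])
model.resize_token_embeddings(len(tok))

# At inference, the writer picks the intent and the model generates a
# citation sentence conditioned on the prepended code.
src = "<intent:background> cited abstract: ... citing context: ..."
inputs = tok(src, return_tensors="pt")
out = model.generate(**inputs, num_beams=4, max_length=64)
print(tok.decode(out[0], skip_special_tokens=True))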
Dedication i
Acknowledgements ii
摘要 iii
Abstract iv
Table of Contents v
List of Figures viii
List of Tables x
1 Introduction 1
1.1 From Algorithm-Centered to Human-Centered AI 1
1.2 Human-in-the-Loop and Algorithm Augmentation 3
1.3 Putting Humans at the Center by Human-Centered Design Principles 5
1.4 Research Goal 6
2 Literature Review 9
2.1 Human-Centered AI in Industry 9
2.2 Human-Centered AI in Medicine 12
2.3 Human-Centered AI in Academic Writing Support 16
3 Systematic Framework to Enhance Human-in-the-Loop for Randomly Textured Surface Defect Detection 21
3.1 Method 22
3.1.1 Dataset 22
3.1.2 Experiment Design 23
3.1.3 Model Interpretation 23
3.2 Results 24
3.2.1 Model Performance 24
3.3 Discussion 26
3.4 Summary 29
4 Modifying Network Architectures for Industrial Optical Automated Inspection 31
4.1 Method 34
4.1.1 Overall Deep Learning Based Automated Optical Inspection System 34
4.1.2 Transforming Fully Connected Layers to Equivalent Convolution Layers 36
4.2 Dataset and Results 40
4.3 Discussion 40
4.4 Summary 42
5 Feature Engineering and Network Architecture Search to Efficiently Classify Lung Sounds 44
5.1 Method 47
5.1.1 Dataset 47
5.1.2 Feature Extraction 48
5.1.3 Depthwise Separable CNN Architecture Search 51
5.1.4 Performance Evaluation 52
5.1.5 Evaluation Metrics 54
5.2 Results 54
5.3 Discussion 56
5.4 Summary 60
6 User-Centered Citation Text Generation 62
6.1 Method 65
6.1.1 Extend Dataset and Proposed Concept 66
6.1.2 Design of Prepend Control Code 68
6.1.3 Pretrained Text Generation Model 70
6.2 Experiments 72
6.2.1 Evaluation Metrics 72
6.2.2 Implementation Detail 73
6.2.3 Design of Human Evaluation 74
6.3 Results and Discussion 75
6.4 Summary 78
7 Conclusion 82
Bibliography 84