臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 翁梓澄
Author (English): WENG, ZI-CHENG
Title: 以GAN深度學習演算法為基礎之警察影像識別探索性研究:以海關X光行李檢測儀影像為例
Title (English): An Exploratory Research on GAN-based Image Recognition: A Case of Customs X-ray Luggage Inspection Images
Advisor: 蔡馥璟
Advisor (English): TSAI, FU-CHING
Committee Members: 詹明華, 郭俊良
Committee Members (English): CHAN, MING-HWA; GUO, JIUNN-LIANG
Oral Defense Date: 2024-01-18
Degree: Master's
Institution: 中央警察大學 (Central Police University)
Department: 刑事警察研究所
Discipline: Military, Police and National Defense Security
Field: Police Administration
Thesis Type: Academic thesis
Publication Year: 2024
Graduation Academic Year: 112 (ROC calendar)
Language: Chinese
Number of Pages: 115
Keywords: 邊境安全管制、海關X光行李檢測、人工智慧、生成對抗網路、二元分類指標
Keywords (English): Border Security Control; Customs X-ray Luggage Inspection; Artificial Intelligence; Generative Adversarial Network; Binary Classification Metrics
Usage statistics:
  • Cited by: 0
  • Views: 79
  • Downloads: 21
  • Bookmarked: 0
Abstract
As the identification workload involved in border security control grows ever heavier, the rise of artificial intelligence (AI) technology has opened new ground for this work. In view of this, this study proposes a new anomaly detection method for customs X-ray drug image inspection based on a GAN deep learning algorithm, with the aim of easing the burden on law enforcement agencies in identifying drug images. On the legal dimension of technology-assisted law enforcement, the literature review draws on relevant U.S. case law to infer the legality of applying AI technology to X-ray luggage image inspection; building on related technical material and arguments, the BiGAN model derived from the GAN framework is selected as the basis for automated recognition on customs X-ray luggage scanners. In the experiments, the training model is redesigned using Python libraries, and k-fold cross-validation together with binary classification metrics is used to examine the model's correctness. The results show that by adjusting the number of training iterations during the training and validation phases to obtain the optimal model weights, and then loading these weights into the test model to detect X-ray drug images, the anomaly detection prediction accuracy reaches 98.47% on average, indicating that the proposed model can deliver anomaly detection predictions. In addition, the model takes only about 20 seconds to identify abnormal images, which will help law enforcement officers inspect suspicious luggage quickly and thus offers practitioners a new method.
Abstract (English)
As the responsibilities of border security control intensify, the rise of artificial intelligence (AI) technology opens up new horizons for identification tasks. In light of this, this study proposes a novel approach to anomaly detection in customs X-ray drug image inspection, leveraging a Generative Adversarial Network (GAN) deep learning algorithm. The aim is to alleviate the burden on law enforcement agencies in recognizing drug images. In the context of legal applications of technology in law enforcement, the discussion references relevant U.S. legal cases to infer the legality of applying AI technology to X-ray luggage image inspection. Relevant technical data and arguments are considered, and a BiGAN model derived from the GAN framework is chosen as the foundation for the automation of X-ray luggage detection.
The experiment involves redesigning the training model using Python libraries, employing k-fold cross-validation, and using binary classification metrics to assess the model's accuracy. The results indicate that adjusting the training iterations during the training and validation phases to obtain the optimal model weights, and subsequently applying these weights to test the model on X-ray drug images, yields an average anomaly detection prediction accuracy of 98.47%. This suggests that the proposed model can effectively achieve anomaly detection predictions. Additionally, the process of identifying abnormal images by the model takes only about 20 seconds, facilitating rapid inspection of suspicious luggage by law enforcement officers and providing a practical new approach in the field.
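
To make the BiGAN-based anomaly detection described above more concrete, the following minimal Python sketch computes an anomaly score for a batch of X-ray images from a trained encoder, generator, and discriminator feature extractor, following the general recipe used in GAN-based anomaly detection work such as [86] and [88] below. It is an illustration under assumed interfaces, not the implementation used in this thesis; the callables, the weighting factor alpha, and the decision threshold are all assumptions.

import numpy as np

def bigan_anomaly_score(x, encoder, generator, disc_features, alpha=0.9):
    """Illustrative BiGAN-style anomaly score (assumed interfaces, not the thesis code).

    x                  : batch of X-ray images, shape (N, H, W, C), values in [0, 1]
    encoder(x)         : returns latent codes z, shape (N, D_z)
    generator(z)       : returns reconstructed images with the same shape as x
    disc_features(x, z): returns intermediate discriminator features, shape (N, D_f)
    """
    z = encoder(x)                                            # E(x): project images into latent space
    x_hat = generator(z)                                      # G(E(x)): reconstruct from the latent code
    recon_err = np.mean(np.abs(x - x_hat), axis=(1, 2, 3))    # per-image reconstruction error
    feat_err = np.mean(np.abs(disc_features(x, z) - disc_features(x_hat, z)), axis=1)
    return alpha * recon_err + (1.0 - alpha) * feat_err       # higher score = more likely abnormal

def flag_suspicious(scores, threshold):
    """Binary decision: 1 = abnormal (flag the bag for manual inspection), 0 = normal."""
    return (scores > threshold).astype(int)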
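
Likewise, here is a minimal sketch of how k-fold cross-validation and binary classification metrics could be combined to report an average accuracy of the kind quoted above. scikit-learn is assumed, and train_detector and score_images are hypothetical stand-ins for the thesis's actual training and scoring steps.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def cross_validated_accuracy(images, labels, train_detector, score_images, k=5, threshold=0.5):
    """Hypothetical k-fold evaluation loop; not the thesis's actual experiment code."""
    kfold = KFold(n_splits=k, shuffle=True, random_state=42)
    fold_acc = []
    for train_idx, test_idx in kfold.split(images):
        detector = train_detector(images[train_idx], labels[train_idx])  # fit on the training folds
        scores = score_images(detector, images[test_idx])                # anomaly score per held-out image
        preds = (scores > threshold).astype(int)                         # binarize: 1 = abnormal, 0 = normal
        fold_acc.append(accuracy_score(labels[test_idx], preds))
        print("accuracy %.4f  precision %.4f  recall %.4f  F1 %.4f" % (
            fold_acc[-1],
            precision_score(labels[test_idx], preds, zero_division=0),
            recall_score(labels[test_idx], preds, zero_division=0),
            f1_score(labels[test_idx], preds, zero_division=0)))
    return float(np.mean(fold_acc))  # average accuracy across the k folds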
Table of Contents
Chapter 1  Introduction
  Section 1  Research Background
  Section 2  Research Motivation
  Section 3  Research Objectives
  Section 4  Research Questions
  Section 5  Research Process
Chapter 2  Literature Review
  Section 1  Legal Permissibility of Artificial Intelligence
  Section 2  Automated Identification Methods for Customs Drug Interdiction
  Section 3  Research on Applications of the GAN Framework
  Section 4  Discussion of the BiGAN Model
Chapter 3  Research Design and Methods
  Section 1  Research Scope and Subjects
  Section 2  Research Framework Design
  Section 3  Experimental Model Design
  Section 4  Classification Evaluation Metrics
Chapter 4  Analysis of Experimental Results
  Section 1  Experimental Environment Setup
  Section 2  Experimental Data Description
  Section 3  Training and Validation Process
  Section 4  Classification Accuracy
Chapter 5  Conclusions
  Section 1  Research Findings
  Section 2  Research Limitations
  Section 3  Future Research Directions
References
References
[1]J. Wu, X. Xu, and J. Yang, “Object Detection and X-Ray Security Imaging: A Survey,” IEEE Access, vol. 11, pp. 45416–45441, 2023, doi: 10.1109/ACCESS.2023.3273736.
[2]J. Liu and T. H. Lin, “A Framework for the Synthesis of X-Ray Security Inspection Images Based on Generative Adversarial Networks,” IEEE Access, vol. 11, pp. 63751–63760, 2023, doi: 10.1109/ACCESS.2023.3288087.
[3]I. Stoica et al., “A Berkeley View of Systems Challenges for AI,” Dec. 2017, [Online]. Available: http://arxiv.org/abs/1712.05855.
[4]Z. C. Weng and F. C. Tsai, “A Systematic Literature Review of Law Enforcement Image Recognition Methods based on Generative Adversarial Networks Framework,” in Procedia Computer Science, 2022, vol. 207, pp. 3629–3638, doi: 10.1016/j.procs.2022.09.423.
[5]J. Yang, Z. Zhao, H. Zhang, and Y. Shi, “Data augmentation for X-ray prohibited item images using generative adversarial networks,” IEEE Access, vol. 7, pp. 28894–28902, 2019, doi: 10.1109/ACCESS.2019.2902121.
[6]G. Tully, N. Cohen, D. Compton, G. Davies, R. Isbell, and T. Watson, “Quality standards for digital forensics: Learning from experience in England & Wales,” Forensic Sci. Int. Digit. Investig., vol. 32, Mar. 2020, doi: 10.1016/j.fsidi.2020.200905.
[7]柯雨瑞, “On the Problems Facing Border Law Enforcement and Feasible Future Directions: Focusing on Law Enforcement at International Airports,” 2009.
[8]Carroll v. United States (45_S.Ct._280), vol. 267 U.S. 1. 1925, pp. 280–291.
[9]U.S v. Flores-Montano(124_S.Ct._1582), vol. 541 U.S.17. 2004, pp. 1582–1587.
[10]陳文貴, “Exploring the Boundary between Administrative Inspection and the Warrant Requirement,” 中原財經法學, no. 39, pp. 130–186, Dec. 2017.
[11]E. E. Joh, “The New Surveillance Discretion: Automated Suspicion, Big Data, and Policing,” 2016.
[12]U.S. v. Taylor (90_F.3d_903), vol. 4th Cir. 1996, pp. 903–910.
[13]U.S. v. Wallace (811_F.Supp.2d_1265), vol. S.D.W.Va. 2011, pp. 1265–1276.
[14]Kyllo v. United States (121_S.Ct._2038), vol. 533 U.S. 2. 2001, pp. 2038–2053.
[15]U.S. v. Jones (132_S.Ct._945), vol. 565 U.S. 4. 2012, pp. 945–964.
[16]U.S.C, FEDERAL RULES OF CRIMINAL PROCEDURE Rule 41 (d) (1). 2020, p. 54.
[17]Katz v. United States (88_S.Ct._507), vol. 389 U.S. 3. 1967, pp. 507–523.
[18]張陳弘, “New Developments in the Warrant Requirement under the Fourth Amendment to the U.S. Constitution: The Jones, Jardines & Grady Cases,” 歐美研究, vol. 48, no. 2, pp. 267–332, 2018.
[19]Carpenter v. U.S. (138_S.Ct._2206). 2018, pp. 2206–2272.
[20]林鈺雄, “Reservation of Intervention and the Threshold Theory: A Theoretical Review of the General Investigative Powers of Judicial Police (Officers),” 政大法學評論, no. 96, pp. 189–231, 2005.
[21]湯德宗, “A Preliminary Study on Constructing a System of Standards for Constitutional Review: The Idea of a ‘Hierarchical Proportionality Principle’,” in 憲法解釋之理論與實務, 2009.
[22]J. Krohn, G. Beyleveld, and A. Bassens, Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence. Addison-Wesley, 2019.
[23]F. Chollet, Deep Learning with Python. Manning, 2018.
[24]J. M. Kanter and K. Veeramachaneni, “Deep Feature Synthesis: Towards Automating Data Science Endeavors,” 2015.
[25]P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for Activation Functions,” Oct. 2017, [Online]. Available: http://arxiv.org/abs/1710.05941.
[26]G. Bingham and R. Miikkulainen, “Discovering Parametric Activation Functions,” Jun. 2020, doi: 10.1016/j.neunet.2022.01.001.
[27]G. Bingham, W. Macke, and R. Miikkulainen, “Evolutionary Optimization of Deep Learning Activation Functions,” 2020.
[28]T. Viriyasaranon, S. H. Chae, and J. H. Choi, “MFA-net: Object detection for complex X-ray cargo and baggage security imagery,” PLoS One, vol. 17, no. 9, Sep. 2022, doi: 10.1371/journal.pone.0272961.
[29]R. Ahmed, M. J. Altamimi, and M. Hachem, “State-of-the-Art Analytical Approaches for Illicit Drug Profiling in Forensic Investigations,” Molecules, vol. 27, no. 19. MDPI, Oct. 01, 2022, doi: 10.3390/molecules27196602.
[30]M. Alsallal, B. Al-Ghzawi, M. Saeed Sharif, and S. Mohammed Mlkat al Mutoki, “A Machine Learning Technique to Detect Counterfeit Medicine Based on X-Ray Fluorescence Analyser,” Aug. 2018, doi: 10.1109/iCCECOME.2018.8659110.
[31]Y. Erdaw and E. Tachbele, “Machine learning model applied on chest X-ray images enables automatic detection of COVID-19 cases with high accuracy,” Int. J. Gen. Med., vol. 14, pp. 4923–4931, 2021, doi: 10.2147/IJGM.S325609.
[32]E. A. Geng et al., “Development of a machine learning algorithm to identify total and reverse shoulder arthroplasty implants from X-ray images,” J. Orthop., vol. 35, pp. 74–78, Jan. 2023, doi: 10.1016/j.jor.2022.11.004.
[33]K. El Asnaoui and Y. Chawki, “Using X-ray images and deep learning for automated detection of coronavirus disease,” J. Biomol. Struct. Dyn., pp. 1–12, 2020, doi: 10.1080/07391102.2020.1767212.
[34]T. Partridge et al., “Enhanced detection of threat materials by dark-field x-ray imaging combined with deep neural networks,” Nat. Commun., vol. 13, no. 1, Dec. 2022, doi: 10.1038/s41467-022-32402-0.
[35]R. Duwairi and A. Melhem, “A deep learning-based framework for automatic detection of drug resistance in tuberculosis patients,” Egypt. Informatics J., vol. 24, no. 1, pp. 139–148, Mar. 2023, doi: 10.1016/j.eij.2023.01.002.
[36]H. Jentsch, “Automatic detection of narcotics,” Smiths Detection, Nov. 06, 2023. https://www.smithsdetection.com/insights/automatic-detection-of-narcotics/ (accessed Nov. 06, 2023).
[37]D. Vukadinovic and D. Anderson, “X-ray baggage screening and artificial intelligence (AI): a technical review of machine learning techniques for X-ray baggage screening,” European Commission, Joint Research Centre, 2022.
[38]I. J. Goodfellow et al., “Generative Adversarial Nets,” 2014. [Online]. Available: http://www.github.com/goodfeli/adversarial.
[39]D. H. Hubel and T. N. Wiesel, “Effects of Monocular Deprivation in Kittens,” 1964.
[40]K. Fukushima, “Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position,” Biol. Cybern., vol. 36, pp. 193–202, 1980.
[41]Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proc. IEEE, vol. 86, no. 11, 1998, doi: 10.1109/5.726791.
[42]M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” Jan. 2017, [Online]. Available: http://arxiv.org/abs/1701.07875.
[43]I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved Training of Wasserstein GANs,” Mar. 2017, [Online]. Available: http://arxiv.org/abs/1704.00028.
[44]X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, “Least Squares Generative Adversarial Networks,” Nov. 2016, [Online]. Available: http://arxiv.org/abs/1611.04076.
[45]N. Kodali, J. Abernethy, J. Hays, and Z. Kira, “On Convergence and Stability of GANs,” May 2017, [Online]. Available: http://arxiv.org/abs/1705.07215.
[46]R. Atienza, Advanced deep learning with TensorFlow 2 and Keras : apply DL, GANs, VAEs, deep RL, unsupervised learning, object detection and segmentation, and more. Packt Publishing Ltd., 2020.
[47]V. Kukreja, D. Kumar, A. Kaur, Geetanjali, and Sakshi, “GAN-based synthetic data augmentation for increased CNN performance in Vehicle Number Plate Recognition,” in Proceedings of the 4th International Conference on Electronics, Communication and Aerospace Technology, ICECA 2020, Nov. 2020, pp. 1190–1195, doi: 10.1109/ICECA49313.2020.9297625.
[48]H. Qin, M. A. El-Yacoubi, Y. Li, and C. Liu, “Multi-Scale and Multi-Direction GAN for CNN-Based Single Palm-Vein Identification,” IEEE Trans. Inf. Forensics Secur., vol. 16, pp. 2652–2666, 2021, doi: 10.1109/TIFS.2021.3059340.
[49]T. Hu, C. Long, and C. Xiao, “A Novel Visual Representation on Text Using Diverse Conditional GAN for Visual Recognition,” IEEE Trans. Image Process., vol. 30, pp. 3499–3512, 2021, doi: 10.1109/TIP.2021.3061927.
[50]S. Niu, B. Li, X. Wang, and H. Lin, “Defect Image Sample Generation with GAN for Improving Defect Recognition,” IEEE Trans. Autom. Sci. Eng., vol. 17, no. 3, pp. 1611–1622, Jul. 2020, doi: 10.1109/TASE.2020.2967415.
[51]Y. Gao, L. Gao, and X. Li, “A Generative Adversarial Network Based Deep Learning Method for Low-Quality Defect Image Reconstruction and Recognition,” IEEE Trans. Ind. Informatics, vol. 17, no. 5, pp. 3231–3240, May 2021, doi: 10.1109/TII.2020.3008703.
[52]C. Han et al., “Infinite Brain Tumor Images: Can GAN-based Data Augmentation Improve Tumor Detection on MR Images?,” 2018.
[53]C. Mao, L. Huang, Y. Xiao, F. He, and Y. Liu, “Target Recognition of SAR Image Based on CN-GAN and CNN in Complex Environment,” IEEE Access, vol. 9, pp. 39608–39617, 2021, doi: 10.1109/ACCESS.2021.3064362.
[54]Y. Ma, K. Liu, Z. Guan, X. Xu, X. Qian, and H. Bao, “Background augmentation generative adversarial networks (BAGANs): Effective data generation based on GAN-augmented 3D synthesizing,” in Symmetry, Dec. 2018, vol. 10, no. 12, doi: 10.3390/sym10120734.
[55]Z. Xiong, W. Li, Q. Han, and Z. Cai, “Privacy-preserving auto-driving: A GAN-Based approach to protect vehicular camera data,” in Proceedings - IEEE International Conference on Data Mining, ICDM, Nov. 2019, vol. 2019-Novem, pp. 668–677, doi: 10.1109/ICDM.2019.00077.
[56]V. A. Mizginov, V. V. Kniaz, and N. A. Fomin, “A Method for Synthesizing Thermal Images Using GAN Multi-Layered Approach,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., vol. XLIV-2/W1-2021, pp. 155–162, Apr. 2021, doi: 10.5194/isprs-archives-xliv-2-w1-2021-155-2021.
[57]C. Dewi, R. C. Chen, Y. T. Liu, X. Jiang, and K. D. Hartomo, “Yolo V4 for Advanced Traffic Sign Recognition with Synthetic Training Data Generated by Various GAN,” IEEE Access, vol. 9, pp. 97228–97242, 2021, doi: 10.1109/ACCESS.2021.3094201.
[58]J. Zhang, Z. Lu, M. Li, and H. Wu, “GAN-Based Image Augmentation for Finger-Vein Biometric Recognition,” IEEE Access, vol. 7, pp. 183118–183132, 2019, doi: 10.1109/ACCESS.2019.2960411.
[59]T. Zhang, A. Wiliem, S. Yang, and B. Lovell, “TV-GAN: Generative adversarial network based thermal to visible face recognition,” in Proceedings - 2018 International Conference on Biometrics, ICB 2018, Jul. 2018, pp. 174–181, doi: 10.1109/ICB2018.2018.00035.
[60]L. Ma, R. Shuai, X. Ran, W. Liu, and C. Ye, “Combining DC-GAN with ResNet for blood cell image classification,” Med. Biol. Eng. Comput., vol. 58, no. 6, pp. 1251–1264, Jun. 2020, doi: 10.1007/s11517-020-02163-3.
[61]J. Deng, S. Cheng, N. Xue, Y. Zhou, and S. Zafeiriou, “UV-GAN: Adversarial Facial UV Map Completion for Pose-Invariant Face Recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Dec. 2018, pp. 7093–7102, doi: 10.1109/CVPR.2018.00741.
[62]X. Zhang et al., “DE-GAN: Domain Embedded GAN for High Quality Face Image Inpainting,” Pattern Recognit., vol. 124, Apr. 2022, doi: 10.1016/j.patcog.2021.108415.
[63]J. Qiu and K. Xie, “A GAN-based Motion Blurred Image Restoration Algorithm,” 2019.
[64]V. A. Mizginov and S. Y. Danilov, “Synthetic thermal background and object texture generation using geometric information and GAN,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 2019, vol. 42, no. 2/W12, pp. 149–154, doi: 10.5194/isprs-archives-XLII-2-W12-149-2019.
[65]Y. Xi et al., “DRL-GAN: Dual-Stream Representation Learning GAN for Low-Resolution Image Classification in UAV Applications,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 14, pp. 1705–1716, 2021, doi: 10.1109/JSTARS.2020.3043109.
[66]F. Peng, L. B. Zhang, and M. Long, “FD-GAN: Face De-Morphing Generative Adversarial Network for Restoring Accomplice’s Facial Image,” IEEE Access, vol. 7, pp. 75122–75131, 2019, doi: 10.1109/ACCESS.2019.2920713.
[67]M. Zhang and Q. Ling, “Supervised Pixel-Wise GAN for Face Super-Resolution,” IEEE Trans. Multimed., vol. 23, pp. 1938–1950, 2021, doi: 10.1109/TMM.2020.3006414.
[68]M. A. Özkanoğlu and S. Ozer, “InfraGAN: A GAN architecture to transfer visible images to infrared domain,” Pattern Recognit. Lett., vol. 155, pp. 69–76, Mar. 2022, doi: 10.1016/j.patrec.2022.01.026.
[69]M. Wu et al., “Remote Sensing Image Colorization Based on Multiscale SEnet GAN,” 2019.
[70]G. Wang, W. Kang, Q. Wu, Z. Wang, and J. Gao, “Generative Adversarial Network (GAN) Based Data Augmentation for Palmprint Recognition,” 2018.
[71]W. Xiong, Y. He, Y. Zhang, W. Luo, L. Ma, and J. Luo, “Fine-grained Image-to-Image Transformation towards Visual Recognition,” 2020.
[72]V. Talreja, F. Taherkhani, M. C. Valenti, and N. M. Nasrabadi, “Attribute-Guided Coupled GAN for Cross-Resolution Face Recognition,” Aug. 2019, [Online]. Available: http://arxiv.org/abs/1908.01790.
[73]P. Zhang, Q. Wu, and J. Xu, “VT-GAN: View Transformation GAN for Gait Recognition Across Views,” 2019.
[74]J. J. Bird, C. M. Barnes, L. J. Manso, A. Ekárt, and D. R. Faria, “Fruit Quality and Defect Image Classification with Conditional GAN Data Augmentation,” Apr. 2021, [Online]. Available: http://arxiv.org/abs/2104.05647.
[75]H. Nazki, S. Yoon, and A. Fuentes, “Unsupervised Image Translation using Adversarial Networks for Improved Plant Disease Recognition A Preprint,” 2020.
[76]F. Taherkhani, V. Talreja, J. Dawson, M. C. Valenti, and N. M. Nasrabadi, “PF-cpGAN: Profile to Frontal Coupled GAN for Face Recognition in the Wild,” Apr. 2020, [Online]. Available: http://arxiv.org/abs/2005.02166.
[77]W. Sirichotedumrong and H. Kiya, “A GAN-Based Image Transformation Scheme for Privacy-Preserving Deep Neural Networks,” Jun. 2020, [Online]. Available: http://arxiv.org/abs/2006.01342.
[78]Y. Xi et al., “See Clearly in the Distance: Representation Learning GAN for Low Resolution Object Recognition,” IEEE Access, vol. 8, pp. 53203–53214, 2020, doi: 10.1109/ACCESS.2020.2978980.
[79]J. Guo and Y. Liu, “Image completion using structure and texture GAN network,” Neurocomputing, vol. 360, pp. 75–84, Sep. 2019, doi: 10.1016/j.neucom.2019.06.010.
[80]C. Hu, X. J. Wu, and Z. Q. Shu, “Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data,” KSII Trans. Internet Inf. Syst., vol. 13, no. 11, pp. 5427–5445, Nov. 2019, doi: 10.3837/tiis.2019.11.009.
[81]M. Luo, J. Cao, X. Ma, X. Zhang, and R. He, “FA-GAN: Face Augmentation GAN for Deformation-Invariant Face Recognition,” IEEE Trans. Inf. Forensics Secur., vol. 16, pp. 2341–2355, 2021, doi: 10.1109/TIFS.2021.3053460.
[82]J. Donahue, P. Krähenbühl, and T. Darrell, “Adversarial Feature Learning,” May 2017, [Online]. Available: http://arxiv.org/abs/1605.09782.
[83]S. Akcay, A. Atapour-Abarghouei, and T. P. Breckon, “GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training,” May 2018, [Online]. Available: http://arxiv.org/abs/1805.06725.
[84]P. Perera, R. Nallapati, and B. Xiang, “OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations,” Mar. 2019, [Online]. Available: http://arxiv.org/abs/1903.08550.
[85]J. K. Dumagpi, W. Y. Jung, and Y. J. Jeong, “A new GAN-based anomaly detection (GBAD) approach for multi-threat object classification on large-scale x-ray security images,” IEICE Trans. Inf. Syst., vol. E103D, no. 2, pp. 454–458, 2020, doi: 10.1587/transinf.2019EDL8154.
[86]T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, “Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery,” Mar. 2017, [Online]. Available: http://arxiv.org/abs/1703.05921.
[87]V. Dumoulin et al., “Adversarially Learned Inference,” Jun. 2017, [Online]. Available: http://arxiv.org/abs/1606.00704.
[88]H. Zenati, M. Romain, C. S. Foo, B. Lecouat, and V. R. Chandrasekhar, “Adversarially Learned Anomaly Detection,” Dec. 2018, [Online]. Available: http://arxiv.org/abs/1812.02288.
[89]F. Carrara, G. Amato, L. Brombin, F. Falchi, and C. Gennaro, “Combining GANs and AutoEncoders for Efficient Anomaly Detection,” Nov. 2020, [Online]. Available: http://arxiv.org/abs/2011.08102.
[90]S. Zhang, S. Jiang, and Y. Yan, “A Software Defect Prediction Approach Based on BiGAN Anomaly Detection,” Sci. Program., vol. 2022, 2022, doi: 10.1155/2022/5024399.
[91]D. Mery et al., “GDXray: The database of X-ray images for nondestructive testing,” 2015, [Online]. Available: http://dmery.ing.puc.cl.
[92]C. Miao et al., “SIXray: A Large-scale Security Inspection X-ray Benchmark for Prohibited Item Discovery in Overlapping Images,” Jan. 2019, [Online]. Available: http://arxiv.org/abs/1901.00303.
[93]Y. Wei, R. Tao, Z. Wu, Y. Ma, L. Zhang, and X. Liu, “Occluded Prohibited Items Detection: An X-ray Security Inspection Benchmark and De-occlusion Attention Module,” in MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia, Oct. 2020, pp. 138–146, doi: 10.1145/3394171.3413828.
[94]R. Tao et al., “Towards Real-world X-ray Security Inspection: A High-Quality Benchmark and Lateral Inhibition Module for Prohibited Items Detection,” Aug. 2021, [Online]. Available: http://arxiv.org/abs/2108.09917.
[95]L. Zhang, L. Jiang, R. Ji, and H. Fan, “PIDray: A Large-scale X-ray Benchmark for Real-World Prohibited Item Detection,” Nov. 2022, [Online]. Available: http://arxiv.org/abs/2211.10763.
[96]言有三, 深度學習之圖像識別:核心技術與案例實戰 (Deep Learning Image Recognition: Core Technology and Case Studies). 機械工業出版社, 2019.
[97]M. Skelton, “Database Lifecycle Management for ETL Systems,” Redgate, Mar. 16, 2016. https://www.red-gate.com/simple-talk/devops/database-devops/database-lifecycle-management-for-etl-systems/ (accessed Nov. 09, 2023).
[98]J. Brownlee, “What is the Difference Between Test and Validation Datasets?,” Machine Learning Mastery, Aug. 14, 2020. https://machinelearningmastery.com/difference-test-validation-datasets/ (accessed Dec. 25, 2023).
[99]S. Narkhede, “Understanding AUC - ROC Curve,” Towards Data Science, Jun. 27, 2018. https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5 (accessed Dec. 24, 2023).