References
[1] Department of Statistics, Ministry of the Interior (內政部統計處), Weekly Interior Statistics Bulletin, Week 30 of 2018 (ROC year 107). URL: https://www.moi.gov.tw/stat/node.aspx?cate_sn=-1&belong_sn=7460&sn=7712.html
[2] J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7263–7271. DOI: 10.1109/CVPR.2017.690
[3] YOLOv3-tiny source code. Available from https://github.com/AlexeyAB/darknet
[4] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv:1704.04861, 2017.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in CVPR, 2016, pp. 770–778.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Identity Mappings in Deep Residual Networks," in ECCV, 2016.
[7] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," in CVPR, 2018.
[8] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices," in CVPR, 2018.
[9] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design," in European Conference on Computer Vision (ECCV), 2018, pp. 116–131.
[10] A. Bordes, L. Bottou, and P. Gallinari, "SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent," Journal of Machine Learning Research, vol. 10, pp. 1737–1754, 2009.
[11] V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer, "Efficient Processing of Deep Neural Networks: A Tutorial and Survey," arXiv:1703.09039, 2017.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," in NIPS, 2012.
[13] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv:1409.1556, 2014.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going Deeper with Convolutions," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1–9.
[15] D. Cristinacce and T. Cootes, "Facial Feature Detection Using AdaBoost with Shape Constraints," in British Machine Vision Conference, 2003.
[16] The Star Online: "Apple proposes a smarter Siri at WWDC developer conference." URL: https://www.thestar.com.my/tech/tech-news/2018/06/09/apple-proposes-a-smarter-siri-at-wwdc-developer-conference/#qKiLzfb0ALQ9CxLM.99
[17] MakeUseOf: "What Is Google Assistant and How to Use It," by Ben Stegner, March 23, 2018. URL: https://www.makeuseof.com/tag/what-is-google-assistant/
[18] TechOrange (科技報橘): "New progress in Google's medical AI: precise detection of cancer cell metastasis with accuracy as high as 99%." URL: https://buzzorange.com/techorange/2018/12/07/google-ai-cancer-research/
[19] u-car: "Safety first: hands-on with the safety technology of the new-generation Volvo S60." URL: https://news.u-car.com.tw/article/13576/%E5%AE%89%E5%85%A8%E7%82%BA%E5%85%88%EF%BC%8CVolvo%E6%96%B0%E4%B8%96%E4%BB%A3S60%E5%AE%89%E5%85%A8%E7%A7%91%E6%8A%80%E9%AB%94%E9%A9%97
[20] Carnews (車訊網): "Toward zero casualties: pedestrian-detecting full auto brake on the Volvo New S60." URL: https://carnews.com/article/info/200a6fe5-4b03-11e8-8ee2-42010af00004/
[21] S. Ren, K. He, R. B. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," CoRR, vol. abs/1506.01497, 2015.
[22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," arXiv:1506.02640, 2015.
[23] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," CoRR, 2018.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[25] Wikipedia: "Activation function." URL: https://en.wikipedia.org/wiki/Activation_function
[26] MorvanZhou (莫煩PYTHON): "Batch Normalization." URL: https://morvanzhou.github.io/tutorials/machine-learning/ML-intro/3-08-batch-normalization/
[27] C. L. Zitnick and P. Dollár, "Edge Boxes: Locating Object Proposals from Edges," in European Conference on Computer Vision (ECCV), 2014.
[28] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results," 2007.
[29] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context," in European Conference on Computer Vision (ECCV), 2014.
[30] NEON Programmer's Guide, Arm. URL: https://static.docs.arm.com/den0018/a/DEN0018A_neon_programmers_guide_en.pdf
[31] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size," arXiv:1602.07360, 2016.
[32] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-Excitation Networks," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7132–7141.
[33] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," in CVPR, 2018.
[34] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv:2004.10934, 2020.
[35] D. P. Kingma and J. L. Ba, "Adam: A Method for Stochastic Optimization," in ICLR, 2015.
[36] S. Ruder, "An Overview of Gradient Descent Optimization Algorithms," arXiv:1609.04747v2, 2017.
[37] YOLOv4-tiny source code. Available from https://github.com/AlexeyAB/darknet
[38] darknet2ncnn (Darknet-to-ncnn model converter) source code. Available from https://github.com/xiangweizeng/darknet2ncnn
[39] Institute of Transportation, Ministry of Transportation and Communications (交通部運輸研究所), Official Letter Yun-An-Zi No. 900002569, April 24, 2001 (ROC 90.04.24). URL: https://reurl.cc/gmvN8V
[40] Renesas R-Car H3 specifications. URL: https://www.renesas.com/tw/zh/solutions/automotive/soc/r-car-h3.html