[1] 臺北市政府警察局文山第二分局 (Taipei City Police Department, Wenshan Second Precinct), "遊覽車撞自行車釀死亡車禍 警方呼籲應注意大型車視線死角" [Tour bus strikes bicycle in fatal crash; police urge attention to large vehicles' blind spots] (in Chinese), Nov. 13, 2019. Accessed on: July 20, 2022. [Online]. Available: https://police.gov.taipei/News_Content.aspx?n=471D7CA98EADC7B6&sms=72544237BBE4C5F6&s=D9C49A543F472AB8&ccms_cs=1
[2] 游鎧丞, "有畫面講話才大聲!行車記錄器購買也有眉角要注意" [Footage speaks louder: what to watch for when buying a dashcam] (in Chinese), ETtoday新聞雲, Feb. 14, 2021. Accessed on: July 20, 2022. [Online]. Available: https://speed.ettoday.net/news/1894678
[3] 黃瀞瑩 and 鍾尹倫, "6/1新規!未裝「行車視野輔助」 最高罰24000" [New rule from June 1: fines up to NT$24,000 for failing to install a vehicle vision-assist system] (in Chinese), Yahoo!新聞, June 1, 2021. Accessed on: July 20, 2022. [Online]. Available: https://tw.news.yahoo.com/6-1%E6%96%B0%E8%A6%8F-%E6%9C%AA%E8%A3%9D-%E8%A1%8C%E8%BB%8A%E8%A6%96%E9%87%8E%E8%BC%94%E5%8A%A9-%E6%9C%80%E9%AB%98%E7%BD%B024000-042903745.html
[4] M.-L. Shih, S.-Y. Su, J. Kopf and J.-B. Huang, "3D Photography Using Context-Aware Layered Depth Inpainting," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8025-8035, doi: 10.1109/CVPR42600.2020.00805.
[5] R. Tucker and N. Snavely, "Single-View View Synthesis With Multiplane Images," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 548-557, doi: 10.1109/CVPR42600.2020.00063.
[6] R. Rombach, P. Esser and B. Ommer, "Geometry-Free View Synthesis: Transformers and no 3D Priors," 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14336-14346, doi: 10.1109/ICCV48922.2021.01409.
[7] K. Nazeri, E. Ng, T. Joseph, F. Z. Qureshi and M. Ebrahimi, "EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning," 2019, arXiv:1901.00212 [cs.CV].
[8] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao and B. Catanzaro, "Image Inpainting for Irregular Holes Using Partial Convolutions," 2018, arXiv:1804.07723 [cs.CV].
[9] R. G. de Albuquerque Azevedo and G. F. Lima, "A graphics composition architecture for multimedia applications based on layered-depth-image," 2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2016, pp. 1-4, doi: 10.1109/3DTV.2016.7548882.
[10] W. E. Lorensen and H. E. Cline, "Marching cubes: A high resolution 3D surface construction algorithm," ACM SIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 163-169, 1987.
[11] Wikipedia, "Polygon mesh," July 8, 2022. Accessed on: July 20, 2022. [Online]. Available: https://en.wikipedia.org/wiki/Polygon_mesh
[12] R. Wright, "Lesson 21 - Orthographic Projections," 2017. Accessed on: July 20, 2022. [Online]. Available: https://www.geofx.com/graphics/nehe-three-js/lessons17-24/lesson21/lesson21.html
[13] M. Mehralian and B. Karasfi, "RDCGAN: Unsupervised Representation Learning With Regularized Deep Convolutional Generative Adversarial Networks," 2018 9th Conference on Artificial Intelligence and Robotics and 2nd Asia-Pacific International Symposium, 2018, pp. 31-38, doi: 10.1109/AIAR.2018.8769811.
[14] D. P. Kingma and M. Welling, "Auto-Encoding Variational Bayes," 2013, arXiv:1312.6114 [stat.ML].
[15] R. Abdal, P. Zhu, N. Mitra and P. Wonka, "StyleFlow: Attribute-conditioned Exploration of StyleGAN-Generated Images using Conditional Continuous Normalizing Flows," 2020, arXiv:2008.02401 [cs.CV].
[16] W. Grathwohl, R. T. Q. Chen, J. Bettencourt, I. Sutskever and D. Duvenaud, "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models," 2018, arXiv:1810.01367 [cs.LG].
[17] E. Richardson et al., "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 2287-2296, doi: 10.1109/CVPR46437.2021.00232.
[18] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen and T. Aila, "Analyzing and Improving the Image Quality of StyleGAN," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8107-8116, doi: 10.1109/CVPR42600.2020.00813.
[19] R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587, doi: 10.1109/CVPR.2014.81.
[20] S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," 2015, arXiv:1506.01497 [cs.CV].
[21] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788, doi: 10.1109/CVPR.2016.91.
[22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu and A. C. Berg, "SSD: Single Shot MultiBox Detector," in European Conference on Computer Vision (ECCV), 2016, pp. 21-37, Springer.
[23] A. Bochkovskiy, C.-Y. Wang and H.-Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection," 2020, arXiv:2004.10934 [cs.CV].
[24] D. Bolya, C. Zhou, F. Xiao and Y. J. Lee, "YOLACT: Real-Time Instance Segmentation," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9156-9165, doi: 10.1109/ICCV.2019.00925.
[25] O. Ronneberger, P. Fischer and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015, pp. 234-241, Springer.
[26] R. Zhang, P. Isola, A. A. Efros, E. Shechtman and O. Wang, "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 586-595, doi: 10.1109/CVPR.2018.00068.
[27] M. Cordts et al., "The Cityscapes Dataset for Semantic Urban Scene Understanding," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3213-3223, doi: 10.1109/CVPR.2016.350.
[28] C.-Y. Wang, A. Bochkovskiy and H.-Y. M. Liao, "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," 2022, arXiv:2207.02696 [cs.CV].