臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Researcher: 劉昱鑫
Researcher (English): Liu, Yu-Hsin
Thesis Title (Chinese): 非監督式學習單眼視覺深度、光流場及相機運動之估測
Thesis Title (English): Joint Unsupervised Learning of Multi-frame Depth, Optical Flow and Ego-motion by Watching Videos
Advisors: 莊仁輝、陳華總
Advisors (English): Chuang, Jen-Hui; Chen, Hua-Tsung
Oral Examination Committee: 莊仁輝、陳華總、李東霖
Oral Examination Committee (English): Chuang, Jen-Hui; Chen, Hua-Tsung
Date of Oral Examination: 2019-07-19
Degree: Master's
University: 國立交通大學 (National Chiao Tung University)
Department: 資訊科學與工程研究所 (Institute of Computer Science and Engineering)
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis Type: Academic thesis
Year of Publication: 2019
Graduation Academic Year: 107
Language: English
Number of Pages: 55
Chinese Keywords: 非監督式學習、深度預測、光流預測、KITTI資料集、AirSim
Keywords (English): unsupervised learning, depth prediction, optical flow, KITTI, AirSim
Usage Statistics:
  • Cited by: 0
  • Views: 272
  • Rating: (none)
  • Downloads: 10
  • Bookmarked: 0
Abstract (Chinese, translated):
Learning the 3D geometric information of a scene, such as scene depth and optical flow, can help robots perceive the environment and avoid obstacles. In recent years, many studies have developed supervised neural network models that use large amounts of data to learn the 3D geometric information in images. However, these methods rely heavily on the collection of training data and on its correctness to achieve accurate estimation; moreover, they can only be trained on existing datasets and cannot accurately estimate scenes outside the training data. In view of this, this thesis focuses on developing an unsupervised neural network model that learns to predict optical flow, scene depth, and camera motion from consecutive monocular images alone.
We synthesize the target image by inverse warping the source image with the depth estimated by our model, and use the difference between the synthesized and the real target image to train the model. Building on this approach, we further use all pairwise combinations of any two frames among three consecutive monocular images as training signals; experiments show that this effectively improves the accuracy of scene depth prediction. In addition, moving objects in the scene cause occlusions that severely degrade prediction accuracy, so we use optical flow to predict the occluded regions, which helps train our depth model and substantially alleviates the occlusion problem.
Trained on the KITTI dataset, our monocular depth prediction outperforms other unsupervised scene depth methods. We also use AirSim to collect drone-view images to train the proposed model, and show that it achieves good depth and optical flow prediction from a drone's viewpoint as well.
Abstract (English):
Learning the 3D geometry of a scene, such as depth and optical flow, can help robots perceive the environment and avoid obstacles. In recent years, many researchers have developed deep neural networks that learn 3D scene geometry with supervised learning. However, to achieve high performance, those methods require plenty of well-labelled training data, which is a major limitation of supervised learning, since such models may not generalize to scenes outside the dataset. Consequently, we focus on developing an unsupervised learning system that trains deep neural networks to estimate optical flow, depth and ego-motion with only single-view image sequences as inputs.
Accordingly, we exploit the inverse warping technique to synthesize the target image from the predicted depth map and the source image, and use the difference between the true target image and the synthesized one to guide training. Based on this idea, we further use all permutations of image pairs within a three-frame sequence to train our model. In addition, we introduce soft occlusion maps estimated from optical flow into our networks to tackle the occlusion problem in the estimation of optical flow, depth and camera ego-motion. Experimental results show that our approach surpasses previous works in monocular depth prediction on the KITTI dataset.
Also, to verify the generalizability of our model, we train it on a drone-view dataset collected with AirSim, and demonstrate that it performs reasonably well across various camera poses and altitudes.
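
The view-synthesis supervision described in the abstract follows the pinhole-camera formulation that is standard in unsupervised monocular depth learning. As a hedged sketch (the thesis's exact notation and loss weights may differ), the warped pixel correspondence and the occlusion-weighted photometric loss can be written as

$$
p_s \sim K\,\hat{T}_{t\to s}\,\hat{D}_t(p_t)\,K^{-1}p_t,
\qquad
\mathcal{L}_{\mathrm{photo}} = \sum_{p_t} M(p_t)\,\bigl|\,I_t(p_t) - \hat{I}_s(p_t)\,\bigr|,
$$

where $\hat{D}_t$ is the predicted depth of the target frame, $\hat{T}_{t\to s}$ the predicted relative camera pose, $K$ the camera intrinsics, $\hat{I}_s$ the source frame inverse-warped to the target view by bilinear sampling at $p_s$, and $M$ a (soft) occlusion mask that down-weights pixels without a valid correspondence.

A minimal NumPy sketch of one common way to derive such a soft occlusion map from forward/backward flow consistency is given below; the function names and the Gaussian weighting are illustrative assumptions, not necessarily the formulation used in the thesis.

import numpy as np

def backward_warp(field, flow):
    # Sample `field` (H, W, C) at positions displaced by `flow` (H, W, 2).
    # Nearest-neighbour lookup for brevity; bilinear sampling is used in practice.
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return field[ys2, xs2]

def soft_occlusion_map(flow_fw, flow_bw, sigma=1.0):
    # Soft occlusion map in [0, 1]: values near 0 mark pixels whose forward flow
    # disagrees with the backward flow warped into the same frame, i.e. pixels
    # that are likely occluded in the other view.
    flow_bw_warped = backward_warp(flow_bw, flow_fw)
    fb_error = np.sum((flow_fw + flow_bw_warped) ** 2, axis=-1)
    return np.exp(-fb_error / (2.0 * sigma ** 2))

# Usage sketch: down-weight the photometric loss at likely-occluded pixels.
# occ = soft_occlusion_map(flow_fw, flow_bw)
# photo_loss = np.mean(occ * np.abs(target_img - warped_source_img).mean(-1))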
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Contents
List of Figures
List of Tables
Chapter 1. Introduction
1.1 Motivation and background
1.2 Literature Review
1.2.1 Depth Estimation
1.2.2 Optical Flow Estimation
1.3 Contributions
1.4 Thesis Organization
Chapter 2. Related Work
2.1 Unsupervised Learning of Optical Flow
2.1.1 Network Structure
2.1.2 Loss Function
2.2 CNNs for Optical Flow
2.3 Unsupervised Learning of Depth and Camera Pose
2.3.1 View Synthesis
2.3.2 Network Structure
2.3.3 Loss Function
Chapter 3. The Proposed Method
3.1 Train Soft-UnFlowNet and Occlusion Maps
3.1.1 Network Architecture
3.1.2 Occlusion Detection and Handling
3.1.3 Loss Function
3.2 Train D-Net and P-Net with occlusion maps
3.2.1 Network Architecture of D-Net
3.2.2 Network Architecture of P-Net
3.2.3 Loss Function
Chapter 4. Experiments and Results
4.1 Datasets and Data Pre-processing
4.1.1 KITTI Dataset
4.1.2 AirSim Dataset
4.2 Implementation Details
4.3 Results
4.3.1 Evaluation Metrics
4.3.2 Depth Evaluation on KITTI Dataset
4.3.3 Depth Evaluation on AirSim Dataset
4.4 Ablation Study
Chapter 5. Conclusions
References