
National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: Cheng-Kuan Wei (魏誠寬)
Title: Speaker Adaptation by Joint Learning the HMM States of Phoneme Models and Acoustic Tokens Discovered without Annotations (同時學習音素模型及無標註聲學組型之HMM狀態之語者調適)
Advisor: Lin-shan Lee (李琳山)
Committee members: Hung-yi Lee, Tian-Li Yu
Oral defense date: 2015-07-29
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2015
Academic year of graduation: 103
Language: Chinese
Pages: 86
Keywords (Chinese): 非監督式聲學組型; 多目標學習; 類神經網路聲學模型; 語者調適; 個人化語音辨識
Keywords (English): unsupervised acoustic token; multi-task learning; neural network-based acoustic model; speaker adaptation; personalized speech recognition
Statistics:
  • Cited by: 0
  • Views: 130
  • Rating:
  • Downloads: 0
  • Bookmarked: 0
Deep neural networks (DNNs) have become the mainstream approach to building acoustic models (AMs) for speech recognition, but when training a DNN, tuning the learning rate is usually a necessary and highly time-consuming step. This thesis uses an English benchmark corpus to evaluate in detail two methods that automatically adapt the learning rate to the changing error surface during training: the adaptive subgradient method (AdaGrad) and its improved, sliding-window variant (AdaDelta). The experimental results show that both methods indeed reduce the dependence on the learning rate and speed up training, with AdaGrad being the better fit for rapid-experimentation settings.
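The per-parameter AdaGrad update evaluated here can be sketched as follows. This is a minimal illustrative version with an invented quadratic toy objective, not the thesis's actual training code: each coordinate's step is the base learning rate divided by the root of that coordinate's accumulated squared gradients.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: scale the base learning rate per parameter
    by the square root of the accumulated squared gradients."""
    accum += grad ** 2
    w -= lr * grad / (np.sqrt(accum) + eps)
    return w, accum

# Toy demo: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
accum = np.zeros_like(w)
for _ in range(2000):
    grad = 2 * w
    w, accum = adagrad_step(w, grad, accum)
print(w)  # both components have decayed close to 0
```

Coordinates that keep receiving large gradients accumulate a large denominator and take ever smaller steps, which is why a single global learning rate needs far less hand-tuning.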

On the other hand, in the personalized speech recognition scenario, personalized speech data is now abundant, but most of it lacks manually annotated transcriptions. This thesis therefore also explores, within the DNN acoustic model architecture, letting manually annotated phonemes and acoustic tokens generated automatically in an unsupervised manner share the network's hidden layers: the Hidden Markov Model (HMM) states of the unsupervised acoustic tokens serve as a second set of training targets, which lets us exploit large amounts of untranscribed speech for speaker adaptation. In experiments on a Mandarin-English bilingual corpus recorded from Facebook status posts, we confirm that this approach is effective, and that the benefit grows as the amount of transcribed data shrinks.
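The shared-hidden-layer idea can be sketched as below. This is a hypothetical minimal NumPy forward pass with made-up layer sizes, not the thesis's actual network: one shared hidden stack feeds two softmax heads, one over phoneme HMM states (used when transcriptions exist) and one over acoustic-token HMM states (usable on untranscribed speech), so gradients from either head update the shared layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 39-dim acoustic features, one shared hidden layer,
# 50 phoneme HMM states and 30 acoustic-token HMM states as the two target sets.
D_IN, D_HID, N_PHONE, N_TOKEN = 39, 128, 50, 30

W_shared = rng.standard_normal((D_IN, D_HID)) * 0.1    # shared hidden layer
W_phone = rng.standard_normal((D_HID, N_PHONE)) * 0.1  # phoneme output head
W_token = rng.standard_normal((D_HID, N_TOKEN)) * 0.1  # acoustic-token output head

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, head):
    """Shared hidden layer, then the output head matching the available labels."""
    h = np.maximum(0.0, x @ W_shared)  # ReLU hidden activations, shared by both tasks
    return softmax(h @ (W_phone if head == "phone" else W_token))

x = rng.standard_normal((4, D_IN))   # a mini-batch of 4 acoustic frames
p_phone = forward(x, "phone")        # posteriors over phoneme HMM states
p_token = forward(x, "token")        # posteriors over acoustic-token HMM states
print(p_phone.shape, p_token.shape)  # (4, 50) (4, 30)
```

Because the token targets need no human annotation, untranscribed utterances can still improve the shared representation that the phoneme head relies on.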

In addition, we implemented a deep neural network library and tools, accelerated via graphics processing units (GPUs), that support networks with arbitrary directed-acyclic-graph (DAG) structures as well as recurrent networks.
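At its core, a library supporting arbitrary DAG-structured networks evaluates nodes in topological order. The sketch below illustrates that idea on the CPU with a toy computation graph; the node/edge/ops representation is invented for illustration and is not the thesis library's actual API:

```python
from collections import deque

def topo_forward(nodes, edges, inputs, ops):
    """Evaluate a DAG of named nodes: `edges` maps node -> list of parent nodes,
    `ops` maps node -> function applied to its parents' outputs."""
    indeg = {n: len(edges.get(n, [])) for n in nodes}
    children = {n: [] for n in nodes}
    for n, parents in edges.items():
        for p in parents:
            children[p].append(n)
    ready = deque(n for n in nodes if indeg[n] == 0)  # start from source nodes
    out = dict(inputs)
    while ready:
        n = ready.popleft()
        if n not in out:  # input nodes already carry their values
            out[n] = ops[n](*[out[p] for p in edges[n]])
        for c in children[n]:  # a child runs once all its parents are done
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return out

# Toy DAG: two inputs feed an add node, whose output feeds a square node.
nodes = ["a", "b", "add", "sq"]
edges = {"add": ["a", "b"], "sq": ["add"]}
ops = {"add": lambda x, y: x + y, "sq": lambda x: x * x}
result = topo_forward(nodes, edges, {"a": 2.0, "b": 3.0}, ops)
print(result["sq"])  # 25.0
```

Backpropagation then visits the same nodes in reverse topological order, which is what makes the DAG formulation general enough to cover both feed-forward and (after unrolling) recurrent networks.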

Abstract (in Chinese)
1. Introduction
1.1 Motivation
1.2 Research Direction
1.3 Organization of the Thesis
2. Background
2.1 Automatic Speech Recognition Based on Deep Neural Networks
2.1.1 Automatic Speech Recognition
2.1.2 Acoustic Models
2.1.3 Neural Networks
2.1.4 Deep Neural Networks as Acoustic Models
2.1.5 Lexicon
2.1.6 Language Models
2.2 Acoustic Tokens Discovered Automatically in an Unsupervised Manner
2.2.1 Algorithm for Discovering Acoustic Tokens
2.2.2 Model Granularity Space
2.3 Personalized Speech Recognition Scenario
2.4 Chapter Summary
3. Dynamically Adapting the Learning Rate of Stochastic Gradient Descent
3.1 Adaptive Subgradient Method (AdaGrad)
3.1.1 Overview
3.1.2 Derivation of the Algorithm
3.1.3 Upper Bound on the Regret
3.2 Adaptive Subgradient Method with a Sliding Window (AdaDelta)
3.2.1 Overview
3.2.2 Derivation of the Algorithm
3.3 Experimental Results and Analysis
3.3.1 Experimental Setup
3.3.2 Results and Analysis
3.4 Chapter Summary
4. Enhancing Personalized Neural-Network Acoustic Models with Automatically Discovered Acoustic Tokens
4.1 Phoneme and Acoustic-Token Neural-Network Acoustic Models with Shared Hidden Layers
4.1.1 Overview
4.1.2 Procedure for Separate Training
4.2 Separate Training Combined with Acoustic Tokens of Different Granularities
4.2.1 Overview
4.2.2 Training Procedure
4.3 Experimental Results and Analysis
4.3.1 Experimental Setup
4.3.2 Results and Analysis
4.4 Chapter Summary
5. Further Improving Personalized Neural-Network Acoustic Models Enhanced with Unsupervised Acoustic Tokens
5.1 Jointly Training All Targets on the Same Data
5.1.1 Overview
5.1.2 Procedure for Joint Training
5.2 Joint Training Combined with Acoustic Tokens of Different Granularities
5.2.1 Overview
5.2.2 Training Procedure
5.3 Experimental Results and Analysis
5.3.1 Experimental Setup
5.3.2 Results and Analysis
5.4 Chapter Summary
6. Implementation of a Deep Neural Network Library and Tools Supporting DAG Structures and Recurrent Networks
6.1 Inference and Training of Neural Networks with Arbitrary DAG Structures
6.2 Inference and Training of Recurrent Neural Networks
6.3 Usage
6.4 Code Architecture
6.5 Chapter Summary
7. Conclusion and Future Work
7.1 Conclusion
7.2 Future Work
References
Appendix

