
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: LIN, HAN-LONG (林翰隆)
Title: Building Graduate Salary Grading Prediction Model Based on Deep Learning (基於深度學習建構畢業生薪資級距預測模型)
Advisor: KUO, JONG-YIH (郭忠義)
Committee: LEE, JONATHAN; HSUEH, NIEN-LIN; MA, SHANG-PIN
Oral defense date: 2019-07-24
Degree: Master's
Institution: National Taipei University of Technology
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (2018–2019)
Language: Chinese
Pages: 55
Keywords (Chinese): 類神經網路, 深度學習, 次序迴歸, 堆疊式降噪自動編碼器
Keywords (English): Neural Network, Deep Learning, Ordinal Regression, Stacked Denoising Autoencoder
Statistics:
  • Citations: 1
  • Views: 464
  • Downloads: 5
  • Bookmarks: 2
This thesis uses deep learning to build a salary grading prediction model. Because the salary brackets are ordered, the task is treated as an ordinal regression problem: a deep neural network with multiple binary outputs is trained so that the model accounts for the ordering among the brackets. The network is pre-trained with a Stacked Denoising Autoencoder, and the pre-trained weights serve as the initial weights of the prediction model. During training, Dropout and Bootstrap Aggregating are applied to improve performance. The model takes recent graduates' enrollment records, grades, and family background as input features and predicts the salary bracket of graduating or graduated students, providing the school with data that helps researchers track salary trends.
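The abstract describes casting salary-bracket prediction as ordinal regression with a multiple-output network, in the style of Frank and Hall [5] and Niu et al. [6]: an ordered class k among K brackets becomes K-1 binary targets, each asking "is the salary above threshold i?", and the predicted rank is recovered by counting positive outputs. As an illustration only (not code from the thesis; the function names are hypothetical), a minimal NumPy sketch of that encoding and decoding:

```python
import numpy as np

def encode_ordinal(labels, num_classes):
    """Encode ordinal class k as K-1 binary targets: t_i = 1 iff k > i.

    E.g. with 5 salary brackets, class 2 becomes [1, 1, 0, 0].
    """
    labels = np.asarray(labels)
    thresholds = np.arange(num_classes - 1)
    # Broadcasting compares every label against every threshold.
    return (labels[:, None] > thresholds).astype(np.float32)

def decode_ordinal(probs, threshold=0.5):
    """Recover the predicted bracket by counting outputs above threshold."""
    return (np.asarray(probs) > threshold).sum(axis=1)
```

With this scheme each of the K-1 network outputs is an ordinary sigmoid binary classifier, so training errors on adjacent brackets cost less than errors on distant ones, which is exactly the ordering the abstract says the model should respect.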
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Contributions 1
1.3 Thesis Organization 2
Chapter 2 Literature Review 3
2.1 Deep Learning 3
2.1.1 Deep Neural Networks 3
2.1.2 Optimization Methods 5
2.2 Ordinal Regression Problem 5
2.2.1 Solving Ordinal Regression with Binary Classifiers 5
2.2.2 Multiple-Output Convolutional Neural Network Framework 7
2.3 Stacked Denoising Autoencoder 9
2.3.1 Autoencoder 9
2.3.2 Denoising Autoencoder 10
2.3.3 Stacked Denoising Autoencoder 11
2.4 Methods for Avoiding Overfitting 12
2.4.1 Dropout 12
2.4.2 Bootstrap Aggregating 13
2.5 Related Prediction Work on Educational Data 14
Chapter 3 Design and Implementation of the Salary Grading Prediction Model 16
3.1 System Information 16
3.2 Model Design Workflow 16
3.3 Data Preparation 18
3.3.1 Data Fields and Composition 18
3.3.2 Data Preprocessing and Cleaning 22
3.3.3 Data Splitting 24
3.4 Deep Neural Network Design 24
3.4.1 Network Architecture 24
3.4.2 Pre-training the Neural Network 25
3.4.3 Multiple-Output Deep Neural Network Model 28
3.4.4 Bootstrap Aggregating Training Algorithm 31
3.5 Model Prediction 31
Chapter 4 Experiments 33
4.1 Performance Metrics 33
4.2 Model Performance 33
4.2.1 Observation of Pre-training Reconstruction 34
4.2.2 Pre-training Noise Rate Experiment 35
4.2.3 Performance Comparison with and without Pre-training 37
4.2.4 Experiments with Different Numbers of Hidden Layers and Neurons 38
4.2.5 Comparison of Optimization Methods and Learning Rates 38
4.3 Comparison with Machine Learning Methods 41
4.4 Comparison with Related Educational Data Mining Methods 50
Chapter 5 Conclusions and Future Work 52
5.1 Conclusions 52
5.2 Future Work 52
References 53
[1] Department of Statistics, Ministry of Education, "Big Data Analysis of the Employment Salaries of College Graduates, Academic Years 99–101 (2010–2012)."
[2] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[3] J. Duchi, E. Hazan, and Y. Singer, “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” Journal of Machine Learning Research, vol. 12, no. Jul, pp. 2121–2159, 2011.
[4] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” Proceedings of the 3rd International Conference on Learning Representations, 2015.
[5] E. Frank and M. Hall, “A simple approach to ordinal classification,” Proceedings of European Conference on Machine Learning, pp. 145–156, 2001.
[6] Z. Niu, M. Zhou, L. Wang, X. Gao and G. Hua, “Ordinal Regression with Multiple Output CNN for Age Estimation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4920–4928, 2016.
[7] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, issue 5786, pp. 504-507, 2006.
[8] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol, “Extracting and Composing Robust Features with Denoising Autoencoders,” Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103, 2008.
[9] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010.
[10] A. Y. Ng, “Preventing ‘Overfitting’ of Cross-Validation Data,” Proceedings of the Fourteenth International Conference on Machine Learning, pp. 245–253, 1997.
[11] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929-1958, 2014.
[12] L. Breiman, “Bagging Predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
[13] J. Y. Kuo, H. T. Chung, P. F. Wang, and B. Lei, “Building Student Course Performance Prediction Model Based on Deep Learning,” Journal of Information Science and Engineering, 2019.
[14] J. Y. Kuo, C. W. Pan and B. Lei, “Using Stacked Denoising Autoencoder for the Student Dropout Prediction,” Proceedings of the 2017 IEEE International Symposium on Multimedia, pp. 483–488, 2017.
[15] P. Prabu and Bendangnuksung, “Students’ Performance Prediction using Deep Neural Networks,” International Journal of Applied Engineering Research, vol. 13, pp. 1171–1176, 2018.
[16] E. A. Amrieh, T. Hamtini, and I. Aljarah, “Preprocessing and analyzing educational data set using X-API for improving student's performance,” Proceedings of the 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies, pp. 1-5, 2015.
[17] Students' Academic Performance Dataset, Available: https://www.kaggle.com/aljarah/xAPI-Edu-Data.
[18] Kaggle: Your Home for Data Science, Available: https://www.kaggle.com/.
[19] B.-H. Kim, E. Vizitei, and V. Ganapathi, “GritNet: Student performance prediction with deep learning,” Proceedings of the 11th International Conference on Educational Data Mining, pp. 625–629, 2018.
[20] Udacity: Learn the Latest Tech Skills; Advance Your Career, Available: https://www.udacity.com/.
[21] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM networks,” Proceedings of International Joint Conference on Neural Networks, pp. 23–43, 2005.
[22] J. D. Keeler, D. E. Rumelhart, and W. K. Leow, “Integrated segmentation and recognition of hand-printed numerals,” Proceedings of Advances in Neural Information Processing Systems, pp. 557–563, 1991.
[23] J. Han, M. Kamber, and J. Pei, “Data Mining: Concepts and Techniques,” Morgan Kaufmann, 2000.
[24] M. Abadi et al., “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems,” Mar. 2016.
[25] F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[26] S. Van der Walt, S. Chris Colbert, and G. Varoquaux, “The NumPy array: A structure for efficient numerical computation,” Computing in Science and Engineering, 2011.
[27] W. McKinney, pandas: a python data analysis library, Available: http://pandas.sourceforge.net
[28] L. Gaudette and N. Japkowicz, “Evaluation methods for ordinal classification,” Proceedings of the 22nd Canadian Conference on Artificial Intelligence, Kelowna, CA, pp. 207–210, 2009.
[29] S. Baccianella, A. Esuli, and F. Sebastiani, “Evaluation measures for ordinal regression,” Proceedings of the Ninth International Conference on Intelligent Systems Design and Applications, IEEE, pp. 283–287, 2009.
[30] J. Shlens, “A Tutorial on Principal Component Analysis,” Available: http://www.snl.salk.edu/∼shlens/pca.pdf, 2009.
[31] Amjad Abu Saa, “Educational Data Mining & Students’ Performance Prediction,” International Journal of Advanced Computer Science and Applications, vol. 7, pp. 212-220, 2016.