Author: 楊章豪
Author (English): Zhang-Hao Yang
Title: 用於社群網路壓縮的階層式複數區塊自動編碼器
Title (English): A Hierarchical Multi-Block Autoencoder on Social Network Compression
Advisor: 施國琛
Degree: Ph.D.
Institution: National Central University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Year of publication: 2020
Graduation academic year: 108 (2019–2020)
Language: English
Pages: 73
Keywords (Chinese): machine learning; deep learning; autoencoder; community detection; cluster analysis; dynamic social network analysis
Metrics:
  • Cited by: 0
  • Views: 39
  • Downloads: 0
  • Bookmarked: 0
Chinese abstract (translated): With machine learning's growing popularity, more and more industries have introduced it to aid their development, integrating it further into our lives, and the technologies it requires keep emerging. However, as machine learning expands into more fields, it inevitably runs into a bottleneck: the limitation of hardware resources. For convolutional neural networks, commonly used in image recognition tasks, the input images can be freely rescaled to the size required for training. For social networks, however, the rendered size of a social network graph far exceeds that of ordinary image data, and its content cannot be discarded, so ordinary scaling techniques cannot be used and standard machine learning training is infeasible. We propose a system for dynamic social network analysis: a composite compression technique that combines our hierarchical clustering algorithm, a multi-block partitioning method, and an autoencoder, achieving the best balance between keeping the data undistorted and compression efficiency. Experiments show that the proposed method greatly increases the amount of social network data a neural network model can handle, reduces the computational burden of the prediction model, and lowers dependence on hardware.
With the increasing popularity of machine learning, more and more industries have adopted it to support their development, integrating it further into our lives, and the technologies it requires continue to emerge. However, as machine learning expands into more fields, it inevitably encounters a bottleneck: the limitation of hardware resources. Convolutional neural networks, commonly used in image recognition tasks, take images as input, and these can be freely rescaled to the size required for training. For social networks, by contrast, the rendered size of a social network graph far exceeds that of ordinary image data, and its content cannot be discarded, so general scaling techniques do not apply and ordinary training procedures are infeasible. We propose a system for dynamic social network analysis: a composite compression technique that combines our proposed hierarchical clustering algorithm with a multi-block partitioning method and an autoencoder, achieving the best balance between avoiding data distortion and compression efficiency. Experiments show that the proposed method greatly increases the amount of social network data a neural network model can process, reduces the computational burden of the prediction model, and lowers dependence on hardware.
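The pipeline the abstract describes (partition the network into blocks, then compress each block with an autoencoder) can be sketched minimally as follows. This is an illustrative assumption, not the thesis's actual HM-AE design: the block size, code dimension, linear activations, and training loop are all placeholders chosen for brevity.

```python
import numpy as np

# Sketch: compress one block of a social-network adjacency matrix with a
# single-layer linear autoencoder (hypothetical sizes, not the HM-AE setup).
rng = np.random.default_rng(0)

n, k = 16, 4                                   # 16-node block, 4-dim code
A = (rng.random((n, n)) < 0.2).astype(float)   # toy adjacency block

W_enc = rng.normal(0.0, 0.1, (n, k))           # encoder weights
W_dec = rng.normal(0.0, 0.1, (k, n))           # decoder weights
lr = 0.1

def forward(A):
    Z = A @ W_enc        # encode: each node's adjacency row -> k-dim code
    R = Z @ W_dec        # decode: reconstruct the adjacency rows
    return Z, R

losses = []
for _ in range(2000):
    Z, R = forward(A)
    err = R - A                                # reconstruction error
    losses.append(float((err ** 2).mean()))
    # plain gradient descent on the mean-squared reconstruction error
    g_dec = (Z.T @ err) / err.size
    g_enc = (A.T @ (err @ W_dec.T)) / err.size
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

Z, _ = forward(A)
print(Z.shape)           # compressed code: (16, 4)
```

In this toy form, each 16-dimensional adjacency row is stored as a 4-dimensional code, and training drives the reconstruction error down; the thesis's contribution lies in how hierarchical clustering and multi-block partitioning decide which blocks to compress this way.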
中文摘要 (Chinese abstract)
Abstract
Contents
List of figures
List of tables
1 Introduction
2 Related work
2.1 Dimensionality reduction and machine learning feature extraction methods
2.2 Autoencoder
2.3 Multi-level / hierarchical autoencoder application
3 Preliminary
4 Proposed Model: HM-AE
4.1 Network transformation
4.2 Hierarchical clustering
4.3 HM-AE learning
5 Experiment
5.1 Accuracy discussion
5.2 Hyper parameter setting discussion
5.3 Threshold influence
6 Conclusion
7 Reference
Electronic full text (publicly available online from 2023-08-01).