
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 楊子萲
Author (English): Yang, Tzu-Hsuan
Title: 應用深度學習架構於社群網路資料分析:以Twitter圖文資料為例
Title (English): Analyzing Social Network Data Using Deep Neural Networks: A Case Study Using Twitter Posts
Advisor: 廖文宏
Advisor (English): Liao, Wen-Hung
Committee Members: 鄭宇君、紀明德
Committee Members (English): Cheng, Yu-Chung; Chi, Ming-Te
Date of Oral Defense: 2018-10-26
Degree: Master's
Institution: National Chengchi University
Department: Computer Science
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document Type: Academic thesis
Year of Publication: 2018
Graduation Academic Year: 107
Language: Chinese
Pages: 73
Keywords (Chinese): 推特 (Twitter); 圖文分析 (graphical and text analysis); Word2Vec; 深度學習 (deep learning); 社群網路 (social networks)
Keywords (English): Twitter; social networks; graphical and text analysis; Word2Vec; deep learning
Metrics:
  • Cited by: 0
  • Views: 496
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Abstract (translated from Chinese):
Social media platforms continue to flourish, and users share updates not only through text; attaching images to posts is also a common form of interaction. However, text or images alone sometimes fail to convey the message a user truly intends. Building on image and text analysis techniques, this study aims to exploit the diverse information available on social platforms to analyze the relationship between images and text.
Because Twitter's character limit makes users more likely to state their main point explicitly in a post, this study collected tweets from 2017 containing Taiwan-related keywords. After data cleaning, we determine which tweets are tourism-related and which are not, integrate the image and text information using deep learning model frameworks, and finally cluster the results to examine the characteristics of each category.
Through this research, we can explore the complementary relationship between images and text, understand the distribution of post types on social platforms, deepen our understanding of these platforms, and, via the proposed framework, provide qualitative researchers with the information they need.
Interaction on various social networking platforms has become an important part of our daily life. Apart from text messages, images are also a popular media format for online communication. Text or images alone, however, cannot fully convey the ideas that users wish to express. In this thesis, we employ computer vision and word embedding techniques to analyze the relationship between image content and text messages and explore the rich information entangled within.
The limitation on the total number of characters compels Twitter users to compose their messages more succinctly, suggesting a stronger association between text and image. In this study, we collected all tweets that include keywords related to Taiwan during 2017. After data cleaning, we apply machine learning techniques to classify tweets into 'travel' and 'non-travel' types. This is achieved by employing deep neural networks to process and integrate text and image information. Within each class, we use hierarchical clustering to further partition the data into different clusters and investigate their characteristics.
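The late-fusion and clustering steps described above can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: random arrays stand in for the CNN image embeddings and averaged Word2Vec text vectors (the dimensions here are arbitrary stand-ins), and the cluster count is a hypothetical choice.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Stand-ins for real features: in the thesis these would be
# CNN image embeddings and averaged Word2Vec vectors per tweet.
n = 20
img_feats = rng.normal(size=(n, 16))   # hypothetical image feature dim
txt_feats = rng.normal(size=(n, 8))    # hypothetical text feature dim

def fuse(img, txt):
    """L2-normalize each modality, then concatenate into one joint vector,
    so neither modality dominates purely by scale."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    return np.concatenate([img, txt], axis=1)

joint = fuse(img_feats, txt_feats)

# Agglomerative (hierarchical) clustering with average linkage on
# cosine distance, cut into at most 4 clusters (an arbitrary choice).
Z = linkage(joint, method="average", metric="cosine")
labels = fcluster(Z, t=4, criterion="maxclust")
```

Normalizing before concatenation is one common way to balance modalities in late fusion; a learned fusion layer inside the network, as the abstract's "deep neural networks to process and integrate" suggests, is the heavier-weight alternative.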
Through this research, we expect to identify the relationship between text and images in a tweet and gain more understanding of the properties of tweets on social networking platforms. The proposed framework and corresponding analytical results should also prove useful for qualitative research.
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Objectives and Methods 3
1.3 Contributions 5
1.4 Thesis Organization 5
Chapter 2 Technical Background and Related Work 7
2.1 Evolution of Deep Learning 7
2.2 Related Work 10
2.2.1 Convolutional Neural Networks and Related Models 11
2.2.2 Bag of Words 15
2.2.3 Word2Vec 15
2.2.4 Hierarchical Clustering Algorithms 16
2.2.5 t-SNE 19
Chapter 3 Dataset 21
3.1 Tourism Categories 25
3.1.1 Food 25
3.1.2 Animals 26
3.1.4 Accommodation 28
3.1.5 Transportation 28
3.1.6 Scenery 29
3.1.7 Street Views 30
3.1.8 Aerial Views 31
3.1.9 Fireworks 32
3.2 Non-Tourism Categories 33
3.2.1 Idols 33
3.2.2 Political News 34
3.2.3 Portraits 35
3.2.4 Text Images 36
3.2.5 Non-Photorealistic Images 37
3.2.6 Pornographic Content 38
Chapter 4 Methodology 39
4.1 Tools 39
4.1.1 AllDup 39
4.1.2 Google Cloud Vision API [1] 40
4.1.3 Open NSFW [31] 42
4.2 Experimental Procedure 42
4.2.1 Duplicate Image Removal 43
4.2.2 Pornographic Image Filtering 44
4.2.3 Defining Tourism and Non-Tourism Samples 46
4.2.4 Deep Learning Model Training 49
Chapter 5 Experimental Results and Discussion 51
5.1 Duplicate Image Removal 51
5.2 Pornographic Image Filtering 55
5.2.1 Tool Testing and Comparison 55
5.2.2 Image Detection and Filtering 58
5.3 Model Training 60
5.4 Hierarchical Clustering and t-SNE Visualization 63
Chapter 6 Conclusion and Future Work 70
References 71
[1] Google Cloud Vision API Documentation. https://cloud.google.com/vision/docs/.
[2] Amazon Rekognition. https://aws.amazon.com/rekognition/?nc1=h_ls.
[3] Tourism Bureau, Ministry of Transportation and Communications, R.O.C. Tourism statistics. https://admin.taiwan.net.tw/public/public.aspx?no=315
[4] GU, Chunhui, et al. AVA: A video dataset of spatio-temporally localized atomic visual actions. arXiv preprint arXiv:1705.08421, 2017, 3.4: 6.
[5] HUBEL, David H.; WIESEL, Torsten N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of physiology, 1962, 160.1: 106-154.
[6] HINTON, Geoffrey E.; OSINDERO, Simon; TEH, Yee-Whye. A fast learning algorithm for deep belief nets. Neural computation, 2006, 18.7: 1527-1554.
[7] RANJAN, Rajeev; PATEL, Vishal M.; CHELLAPPA, Rama. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41.1: 121-135.
[8] KRIZHEVSKY, Alex; SUTSKEVER, Ilya; HINTON, Geoffrey E. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012. p. 1097-1105.
[9] HE, Kaiming, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 770-778.
[10] HU, Jie; SHEN, Li; SUN, Gang. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017, 7.
[11] 林之昫 (Hubert Lin) (2017). The final ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2017) has concluded; will the WebVision image challenge be the next ImageNet? https://goo.gl/5rHG1y.
[12] LI, Wen, et al. Webvision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862, 2017.
[13] WebVision. https://www.vision.ee.ethz.ch/webvision/2017/index.html.
[14] WebVision Challenge Results. https://www.vision.ee.ethz.ch/webvision/2017/challenge_results.html.
[15] HU, Yuheng, et al. What We Instagram: A First Analysis of Instagram Photo Content and User Types. In: Icwsm. 2014.
[16] SZEGEDY, Christian, et al. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. p. 1-9.
[17] IOFFE, Sergey; SZEGEDY, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[18] SZEGEDY, Christian, et al. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 2818-2826.
[19] CHOLLET, François. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2017.
[20] HOWARD, Andrew G., et al. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[21] Keras Documentation. https://keras.io/applications/.
[22] HARRIS, Zellig S. Distributional structure. Word, 1954, 10.2-3: 146-162.
[23] Vector Representations of Words. https://www.tensorflow.org/tutorials/word2vec.
[24] MIKOLOV, Tomas, et al. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[25] 李維平、張加憲 (2013). Automatic hierarchical clustering using N-group average linkage. Journal of E-Business, 15(1), 35-56.
[26] MAATEN, Laurens van der; HINTON, Geoffrey. Visualizing data using t-SNE. Journal of machine learning research, 2008, 9.Nov: 2579-2605.
[27] HINTON, Geoffrey E.; ROWEIS, Sam T. Stochastic neighbor embedding. In: Advances in neural information processing systems. 2003. p. 857-864.
[28] Flood and Fire Twitter Capture and Analysis Toolset, ff-tcat. https://github.com/Sparklet73/ff-tcat.git
[29] Sara Robinson(2016), Google Cloud Vision – Safe Search Detection API. https://cloud.google.com/blog/big-data/2016/08/filtering-inappropriate-content-with-the-cloud-vision-api
[30] Caffe. http://caffe.berkeleyvision.org/
[31] Open NSFW Model, yahoo. https://github.com/yahoo/open_nsfw.git
[32] GODIN, Fréderic, et al. Multimedia Lab @ ACL WNUT NER Shared Task: Named Entity Recognition for Twitter Microposts using Distributed Word Representations. In: Proceedings of the Workshop on Noisy User-generated Text. 2015. p. 146-153.