[1] Google Cloud Vision API Documentation. https://cloud.google.com/vision/docs/.
[2] Amazon Rekognition. https://aws.amazon.com/rekognition/?nc1=h_ls.
[3] Tourism Bureau, Ministry of Transportation and Communications, R.O.C. (Taiwan). Tourism statistics charts. https://admin.taiwan.net.tw/public/public.aspx?no=315.
[4] GU, Chunhui, et al. AVA: A video dataset of spatio-temporally localized atomic visual actions. arXiv preprint arXiv:1705.08421, 2017.
[5] HUBEL, David H.; WIESEL, Torsten N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 1962, 160.1: 106-154.
[6] HINTON, Geoffrey E.; OSINDERO, Simon; TEH, Yee-Whye. A fast learning algorithm for deep belief nets. Neural Computation, 2006, 18.7: 1527-1554.
[7] RANJAN, Rajeev; PATEL, Vishal M.; CHELLAPPA, Rama. HyperFace: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41.1: 121-135.
[8] KRIZHEVSKY, Alex; SUTSKEVER, Ilya; HINTON, Geoffrey E. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. 2012. p. 1097-1105.
[9] HE, Kaiming, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. p. 770-778.
[10] HU, Jie; SHEN, Li; SUN, Gang. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017.
[11] LIN, Hubert (林之昫) (2017). The final ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2017) has concluded; will the WebVision image challenge be the next ImageNet challenge? https://goo.gl/5rHG1y.
[12] LI, Wen, et al. WebVision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862, 2017.
[13] WebVision. https://www.vision.ee.ethz.ch/webvision/2017/index.html.
[14] WebVision Challenge Results. https://www.vision.ee.ethz.ch/webvision/2017/challenge_results.html.
[15] HU, Yuheng, et al. What we Instagram: A first analysis of Instagram photo content and user types. In: ICWSM. 2014.
[16] SZEGEDY, Christian, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. p. 1-9.
[17] IOFFE, Sergey; SZEGEDY, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[18] SZEGEDY, Christian, et al. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. p. 2818-2826.
[19] CHOLLET, François. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2017.
[20] HOWARD, Andrew G., et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[21] Keras Documentation. https://keras.io/applications/.
[22] HARRIS, Zellig S. Distributional structure. Word, 1954, 10.2-3: 146-162.
[23] Vector Representations of Words. https://www.tensorflow.org/tutorials/word2vec.
[24] MIKOLOV, Tomas, et al. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[25] 李維平; 張加憲 (2013). Hierarchical automatic clustering using the N-group average-linkage method. Journal of e-Business (電子商務學報), 15.1: 35-56.
[26] VAN DER MAATEN, Laurens; HINTON, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9.Nov: 2579-2605.
[27] HINTON, Geoffrey E.; ROWEIS, Sam T. Stochastic neighbor embedding. In: Advances in Neural Information Processing Systems. 2003. p. 857-864.
[28] Flood and Fire Twitter Capture and Analysis Toolset (ff-tcat). https://github.com/Sparklet73/ff-tcat.git.
[29] ROBINSON, Sara (2016). Google Cloud Vision: Safe Search Detection API. https://cloud.google.com/blog/big-data/2016/08/filtering-inappropriate-content-with-the-cloud-vision-api.
[30] Caffe. http://caffe.berkeleyvision.org/.
[31] Open NSFW Model, Yahoo. https://github.com/yahoo/open_nsfw.git.
[32] GODIN, Fréderic, et al. Multimedia Lab @ ACL WNUT NER Shared Task: Named entity recognition for Twitter microposts using distributed word representations. In: Proceedings of the Workshop on Noisy User-generated Text. 2015. p. 146-153.