[1] Ankerst, M., Breunig, M. M., Kriegel, H. P., & Sander, J. (1999, June). OPTICS: Ordering points to identify the clustering structure. In ACM SIGMOD Record (Vol. 28, No. 2, pp. 49-60). ACM.
[2] Ball, G. H., & Hall, D. J. (1967). A clustering technique for summarizing multivariate data. Behavioral Science, 12(2), 153-155.
[3] Bengio, Y., Lamblin, P., Popovici, D., & Larochelle, H. (2007). Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems (pp. 153-160).
[4] Berry, M. J., & Linoff, G. (1997). Data mining techniques: For marketing, sales, and customer support. New York: John Wiley & Sons.
[5] Berson, A., Smith, S., & Thearling, K. (2000). Building data mining applications for CRM (pp. 4-14). New York: McGraw-Hill.
[6] Bezdek, J. C. (1980). A convergence theorem for the fuzzy ISODATA clustering algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, (1), 1-8.
[7] Blashfield, R. K. (1976). Mixture model tests of cluster analysis: Accuracy of four agglomerative hierarchical methods. Psychological Bulletin, 83(3), 377.
[8] Bodnar, C. (2018). Text to image synthesis using generative adversarial networks. arXiv preprint arXiv:1805.00676.
[9] Bose, T., Majumdar, A., & Chattopadhyay, T. (2018, April). Machine load estimation via stacked autoencoder regression. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2126-2130). IEEE.
[10] Byvatov, E., Fechner, U., Sadowski, J., & Schneider, G. (2003). Comparison of support vector machine and artificial neural network systems for drug/nondrug classification. Journal of Chemical Information and Computer Sciences, 43(6), 1882-1889.
[11] Chang, J., Wang, L., Meng, G., Xiang, S., & Pan, C. (2017, October). Deep adaptive image clustering. In Proceedings of the IEEE International Conference on Computer Vision (pp. 5879-5887). IEEE.
[12] Chen, C. Y., & Huang, J. J. (2019). Double deep autoencoder for heterogeneous distributed clustering. Information, 10(4), 144.
[13] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., & Abbeel, P. (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2172-2180).
[14] Chu, S. C., Roddick, J. F., Chen, T. Y., & Pan, J. S. (2002, October). Efficient search approaches for K-medoids-based algorithms. In 2002 IEEE Region 10 Conference on Computers, Communications, Control and Power Engineering (TENCON'02) (Vol. 1, pp. 712a-715a). IEEE.
[15] Chu, W., & Cai, D. (2017, August). Stacked similarity-aware autoencoders. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1561-1567).
[16] Dai, D., & Van Gool, L. (2016). Unsupervised high-level feature learning by ensemble projection for semi-supervised image classification and image clustering. arXiv preprint arXiv:1602.00955.
[17] Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 1, pp. 886-893). IEEE.
[18] Danthala, M. K. (2015). Tweet analysis: Twitter data processing using Apache Hadoop. International Journal of Core Engineering & Management (IJCEM), 1(11), 94-102.
[19] Dilokthanakul, N., Mediano, P. A., Garnelo, M., Lee, M. C., Salimbeni, H., Arulkumaran, K., & Shanahan, M. (2016). Deep unsupervised clustering with Gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648.
[20] Dong, C., Loy, C. C., He, K., & Tang, X. (2016). Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2), 295-307.
[21] Ester, M., Kriegel, H. P., Sander, J., & Xu, X. (1996, August). A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD (Vol. 96, No. 34, pp. 226-231).
[22] Farfade, S. S., Saberian, M. J., & Li, L. J. (2015, June). Multi-view face detection using deep convolutional neural networks. In Proceedings of the 5th ACM International Conference on Multimedia Retrieval (pp. 643-650). ACM.
[23] Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI Magazine, 17(3), 37-54.
[24] Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). The KDD process for extracting useful knowledge from volumes of data. Communications of the ACM, 39(11), 27-34.
[25] Fisher, D. H. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2(2), 139-172.
[26] Fukunaga, K. (2013). Introduction to statistical pattern recognition. San Diego: Elsevier.
[27] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680).
[28] Guo, X., Liu, X., Zhu, E., & Yin, J. (2017, November). Deep clustering with convolutional autoencoders. In International Conference on Neural Information Processing (pp. 373-382). Springer, Cham.
[29] Hand, D. J. (2007). Principles of data mining. Drug Safety, 30(7), 621-622.
[30] Hartigan, J. A., & Wong, M. A. (1979). Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1), 100-108.
[31] Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
[32] Huang, J. J. (2018). Heterogeneous distributed clustering by the fuzzy membership and hierarchical structure. Journal of Industrial and Production Engineering, 35(3), 189-198.
[33] Huang, K. Y. (2002). A synergistic automatic clustering technique (SYNERACT) for multispectral image analysis. Photogrammetric Engineering and Remote Sensing, 68(1), 33-40.
[34] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017, July). Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5967-5976). IEEE.
[35] Karazeev, A. (2017). Generative adversarial networks (GANs): Engine and applications.
[36] Ke, S., Zhao, Y., Li, B., Wu, Z., & Liu, X. (2016, August). Fast image clustering based on convolutional neural network and binary k-means. In Eighth International Conference on Digital Image Processing (ICDIP 2016) (Vol. 10033, p. 100332E). International Society for Optics and Photonics.
[37] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[38] Kingma, D. P., Mohamed, S., Rezende, D. J., & Welling, M. (2014). Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems (pp. 3581-3589).
[39] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., ... & Shi, W. (2017, July). Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 105-114). IEEE.
[40] Li, W., Fu, H., Yu, L., Gong, P., Feng, D., Li, C., & Clinton, N. (2016). Stacked autoencoder-based deep learning for remote-sensing image classification: A case study of African land-cover mapping. International Journal of Remote Sensing, 37(23), 5632-5646.
[41] Li, Z., Dey, N., Ashour, A. S., Cao, L., Wang, Y., Wang, D., McCauley, P., Balas, V. E., Shi, K., & Shi, F. (2017). Convolutional neural network based clustering and manifold learning method for diabetic plantar pressure imaging dataset. Journal of Medical Imaging and Health Informatics, 7(3), 639-652.
[42] Liou, C. Y., Cheng, W. C., Liou, J. W., & Liou, D. R. (2014). Autoencoder for words. Neurocomputing, 139, 84-96.
[43] Liu, H., Shao, M., Li, S., & Fu, Y. (2016, August). Infinite ensemble for image clustering. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1745-1754). ACM.
[44] Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91-110.
[45] Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2015). Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
[46] Mukherjee, S., Asnani, H., Lin, E., & Kannan, S. (2018). ClusterGAN: Latent space clustering in generative adversarial networks. arXiv preprint arXiv:1809.03627.
[47] Ng, R. T., & Han, J. (2002). CLARANS: A method for clustering objects for spatial data mining. IEEE Transactions on Knowledge and Data Engineering, 14(5), 1003-1016.
[48] Ngiam, J., Coates, A., Lahiri, A., Prochnow, B., Le, Q. V., & Ng, A. Y. (2011). On optimization methods for deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11) (pp. 265-272).
[49] Nicolau, M., & McDermott, J. (2016, September). A hybrid autoencoder and density estimation model for anomaly detection. In International Conference on Parallel Problem Solving from Nature (pp. 717-726). Springer, Cham.
[50] Noda, K., Yamaguchi, Y., Nakadai, K., Okuno, H. G., & Ogata, T. (2015). Audio-visual speech recognition using deep learning. Applied Intelligence, 42(4), 722.
[51] Omran, M., Engelbrecht, A. P., & Salman, A. (2005). Particle swarm optimization method for image clustering. International Journal of Pattern Recognition and Artificial Intelligence, 19(3), 297-321.
[52] Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11), 559-572.
[53] Peng, X., Zhou, J. T., & Zhu, H. (2018). k-meansNet: When k-means meets differentiable programming. arXiv preprint arXiv:1808.07292.
[54] Rosenberger, C., & Chehdi, K. (2000). Unsupervised clustering method with optimal estimation of the number of clusters: Application to image segmentation. In Proceedings of the 15th International Conference on Pattern Recognition (ICPR-2000) (Vol. 1, pp. 656-659). IEEE.
[55] Roy, M., Bose, S. K., Kar, B., Gopalakrishnan, P. K., & Basu, A. (2018, November). A stacked autoencoder neural network based automated feature extraction method for anomaly detection in on-line condition monitoring. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 1501-1507). IEEE.
[56] Sakurada, M., & Yairi, T. (2014, December). Anomaly detection using autoencoders with nonlinear dimensionality reduction. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis (p. 4). ACM.
[57] Savaresi, S. M., Boley, D. L., Bittanti, S., & Gazzaniga, G. (2002, April). Cluster selection in divisive clustering algorithms. In Proceedings of the 2002 SIAM International Conference on Data Mining (pp. 299-314). Society for Industrial and Applied Mathematics.
[58] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.
[59] Srinivas, K., Rani, B. K., & Govrdhan, A. (2010). Applications of data mining techniques in healthcare and prediction of heart attacks. International Journal on Computer Science and Engineering (IJCSE), 2(2), 250-255.
[60] Steuer, R., Kurths, J., Daub, C. O., Weise, J., & Selbig, J. (2002). The mutual information: Detecting and evaluating dependencies between variables. Bioinformatics, 18(suppl_2), S231-S240.
[61] Sun, W., Shao, S., Zhao, R., Yan, R., Zhang, X., & Chen, X. (2016). A sparse auto-encoder-based deep neural network approach for induction motor faults classification. Measurement, 89, 171-178.
[62] Tajbakhsh, N., Shin, J. Y., Gurudu, S. R., Hurst, R. T., Kendall, C. B., Gotway, M. B., & Liang, J. (2016). Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Transactions on Medical Imaging, 35(5), 1299-1312.
[63] Tan, C. C., & Eswaran, C. (2010). Reconstruction and recognition of face and digit images using autoencoders. Neural Computing & Applications, 19(7), 1069-1079.
[64] Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., & Manzagol, P. A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec), 3371-3408.
[65] Wang, C., Pan, S., Long, G., Zhu, X., & Jiang, J. (2017, November). MGAE: Marginalized graph autoencoder for graph clustering. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management (pp. 889-898). ACM.
[66] Wang, J., Tang, J., Xu, Z., Wang, Y., Xue, G., Zhang, X., & Yang, D. (2017, May). Spatiotemporal modeling and prediction in cellular networks: A big data enabled deep learning approach. In IEEE INFOCOM 2017 - IEEE Conference on Computer Communications (pp. 1-9). IEEE.
[67] Wang, Y., Yao, H., & Zhao, S. (2016). Auto-encoder based dimensionality reduction. Neurocomputing, 184, 232-242.
[68] Wu, X., Kumar, V., Quinlan, J. R., Ghosh, J., Yang, Q., Motoda, H., ... & Zhou, Z. H. (2008). Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1), 1-37.
[69] Xie, J., Girshick, R., & Farhadi, A. (2016, June). Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning (pp. 478-487).
[70] Xu, J., Xiang, L., Liu, Q., Gilmore, H., Wu, J., Tang, J., & Madabhushi, A. (2016). Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Transactions on Medical Imaging, 35(1), 119-130.
[71] Zhang, J., Hou, Z., Wu, Z., Chen, Y., & Li, W. (2016, June). Research of 3D face recognition algorithm based on deep learning stacked denoising autoencoder theory. In 2016 8th IEEE International Conference on Communication Software and Networks (ICCSN) (pp. 663-667). IEEE.
[72] Zhang, T., Ramakrishnan, R., & Livny, M. (1996, June). BIRCH: An efficient data clustering method for very large databases. In ACM SIGMOD Record (Vol. 25, No. 2, pp. 103-114). ACM.
[73] Zhu, Z., Wang, X., Bai, S., Yao, C., & Bai, X. (2016). Deep learning representation using autoencoder for 3D shape retrieval. Neurocomputing, 204, 41-50.