[1] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2001.
[2] J. M. Guo, S. H. Tseng, and K. Wong, "Accurate Facial Landmark Extraction," in IEEE Signal Processing Letters, vol. 23, no. 5, pp. 605-609, 2016.
[3] P. Ekman and W. Friesen, Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press, 1978.
[4] S. Porter, L. ten Brinke, and B. Wallace, "Secrets and lies: involuntary leakage in deceptive facial expressions as a function of emotional intensity," in Journal of Nonverbal Behavior, vol. 36, no. 1, pp. 23-27, 2012.
[5] L. ten Brinke, S. Porter, and A. Baker, "Darwin the detective: observable facial muscle contractions reveal emotional high-stakes lies," in Evolution and Human Behavior, vol. 33, no. 4, pp. 411-416, 2012.
[6] V. Perez-Rosas, M. Abouelenien, R. Mihalcea, and M. Burzo, "Deception Detection using Real-life Trial Data," in Proc. 2015 ACM International Conference on Multimodal Interaction (ICMI), pp. 59-66, 2015.
[7] M. Jaiswal, S. Tabibu, and R. Bajpai, "The Truth and Nothing but the Truth: Multimodal Analysis for Deception Detection," in Proc. IEEE International Conference on Data Mining Workshops (ICDMW), pp. 938-943, 2017.
[8] C. F. Bond, Jr. and B. M. DePaulo, "Accuracy of Deception Judgments," in Personality and Social Psychology Review, vol. 10, no. 3, pp. 214-234, 2006.
[9] U.S. Congress, Office of Technology Assessment, Scientific Validity of Polygraph Testing: A Research Review and Evaluation—A Technical Memorandum. Washington, DC: OTA-TM-H-15, Nov. 1983.
[10] R. Adelson, "Detecting Deception," in Monitor on Psychology, vol. 37, no. 7, p. 70, 2004.
[11] F. A. Kozel et al., "A pilot study of functional magnetic resonance imaging brain correlates of deception in healthy young men," in Journal of Neuropsychiatry and Clinical Neurosciences, vol. 16, no. 3, pp. 295-305, Aug. 2004.
[12] "Educational psychologists use eye-tracking method for detecting lies," psychologicalscience.org, retrieved Apr. 26, 2012.
[13] F. Horvath, J. McCloughan, D. Weatherman, and S. Slowik, "The Accuracy of Auditors' and Layered Voice Analysis (LVA) Operators' Judgments of Truth and Deception During Police Questioning," in Journal of Forensic Sciences, vol. 58, no. 2, pp. 385-392, 2013.
[14] K. R. Damphousse, "Voice stress analysis: Only 15 percent of lies about drug use detected in field test," in NIJ Journal, no. 259, pp. 8-12, 2008.
[15] J. D. Harnsberger, H. Hollien, C. A. Martin, and K. A. Hollien, "Stress and Deception in Speech: Evaluating Layered Voice Analysis," in Journal of Forensic Sciences, vol. 54, no. 3, pp. 642-650, 2009.
[16] H. Hollien, J. D. Harnsberger, C. A. Martin, and K. A. Hollien, "Evaluation of the NITV CVSA," in Journal of Forensic Sciences, vol. 53, no. 1, pp. 183-193, 2008.
[17] M. Owayjan et al., "The design and development of a lie detection system using facial micro-expressions," in Proc. IEEE 2nd International Conference on Advances in Computational Tools for Engineering Applications (ACTEA), pp. 33-38, 2012.
[18] L. Su and M. D. Levine, "High-stakes deception detection based on facial expressions," in Proc. IEEE 22nd International Conference on Pattern Recognition (ICPR), pp. 2519-2524, 2014.
[19] M. F. Valstar et al., "FERA 2015 - Second Facial Expression Recognition and Analysis Challenge," in Proc. 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 6, pp. 1-8, 2015.
[20] T. Baltrušaitis, M. Mahmoud, and P. Robinson, "Cross-dataset learning and person-specific normalisation for automatic action unit detection," in Proc. 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 6, pp. 1-6, 2015.
[21] Y.-L. Tian, T. Kanade, and J. F. Cohn, "Recognizing action units for facial expression analysis," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 97-115, 2001.
[22] T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Proc. Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46-53, 2000.
[23] E. Wood, T. Baltrušaitis, X. Zhang, Y. Sugano, P. Robinson, and A. Bulling, "Rendering of eyes for eye-shape registration and gaze estimation," in Proc. IEEE International Conference on Computer Vision (ICCV), pp. 3756-3764, 2015.
[24] T. Baltrušaitis, L.-P. Morency, and P. Robinson, "Constrained local neural fields for robust facial landmark detection in the wild," in Proc. IEEE International Conference on Computer Vision Workshops, pp. 354-361, 2013.
[25] D. Cristinacce and T. F. Cootes, "Feature detection and tracking with constrained local models," in Proc. British Machine Vision Conference (BMVC), vol. 1, 2006.
[26] L. Świrski, A. Bulling, and N. Dodgson, "Robust real-time pupil tracking in highly off-axis images," in Proc. Symposium on Eye Tracking Research and Applications (ETRA), pp. 173-176, 2012.
[27] T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," in Pattern Recognition, vol. 29, no. 1, pp. 51-59, 1996.
[28] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, Jul. 2002.
[29] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," in International Journal of Computer Vision, vol. 107, no. 2, pp. 177-190, 2014.
[30] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar, "Localizing parts of faces using a consensus of exemplars," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 2930-2940, 2013.
[31] X. Xiong and F. De la Torre, "Supervised Descent Method and its Applications to Face Alignment," in Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 532-539, Jun. 2013.
[32] D. G. Lowe, "Object Recognition from Local Scale-Invariant Features," in Proc. Seventh IEEE International Conference on Computer Vision (ICCV), vol. 2, pp. 1150-1157, 1999.
[33] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar, "Localizing parts of faces using a consensus of exemplars," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 2930-2940, 2013.
[34] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," in Proc. IEEE International Conference on Computer Vision (ICCV), pp. 1513-1520, 2013.
[35] S. Zhu, C. Li, C. C. Loy, and X. Tang, "Face Alignment by Coarse-to-Fine Shape Searching," in Proc. IEEE Computer Vision and Pattern Recognition (CVPR), 2015.
[36] P. Pudil, J. Novovičová, and J. Kittler, "Floating search methods in feature selection," in Pattern Recognition Letters, vol. 15, no. 11, pp. 1119-1125, Nov. 1994.
[37] R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin, "LIBLINEAR: A Library for Large Linear Classification," in Journal of Machine Learning Research, vol. 9, pp. 1871-1874, Aug. 2008.
[38] C. Cortes and V. Vapnik, "Support-vector networks," in Machine Learning, vol. 20, no. 3, pp. 273-297, Sep. 1995.
[39] N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," in Proc. IEEE Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 886-893, Jun. 2005.
[40] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Image and Vision Computing, vol. 28, no. 5, pp. 807-813, 2010.
[41] P. Liu, J. M. Guo, et al., "Ocular Recognition for Blinking Eyes," in IEEE Transactions on Image Processing, 2017.
[42] S. A. Huettel, A. W. Song, and G. McCarthy, Functional Magnetic Resonance Imaging, vol. 1. Sunderland, MA: Sinauer Associates, 2004.
[43] J. W. Pennebaker, M. E. Francis, and R. J. Booth, Linguistic Inquiry and Word Count: LIWC 2001. Mahwah, NJ: Lawrence Erlbaum Associates, 2001.
[44] J. Allwood et al., "The MUMIN coding scheme for the annotation of feedback, turn management and sequencing phenomena," in Language Resources and Evaluation, vol. 41, no. 3, pp. 273-287, 2007.
[45] E. Cambria et al., "SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives," in Proc. COLING, pp. 2666-2677, 2016.
[46] S. Poria et al., "Merging SenticNet and WordNet-Affect emotion lists for sentiment analysis," in Proc. IEEE 11th International Conference on Signal Processing (ICSP), vol. 2, pp. 1251-1255, 2012.
[47] S. Poria et al., "Enriching SenticNet polarity scores through semi-supervised fuzzy clustering," in Proc. 12th IEEE International Conference on Data Mining Workshops (ICDMW), pp. 709-716, 2012.
[48] F. Eyben et al., "Recent developments in openSMILE, the Munich open-source multimedia feature extractor," in Proc. 21st ACM International Conference on Multimedia, pp. 835-838, 2013.
[49] Z. Zhang et al., "Real-time automatic deceit detection from involuntary facial expressions," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-6, 2007.
[50] R. P. Fisher and R. E. Geiselman, "The cognitive interview method of conducting police interviews: Eliciting extensive information and promoting therapeutic jurisprudence," in International Journal of Law and Psychiatry, vol. 33, no. 5, pp. 321-328, 2010.
[51] S. Ren, X. Cao, Y. Wei, and J. Sun, "Face Alignment at 3000 FPS via Regressing Local Binary Features," in Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 1685-1692, Jun. 2014.
[52] L. Breiman, "Random Forests," in Machine Learning, vol. 45, no. 1, pp. 5-32, Oct. 2001.
[53] V. Kazemi and J. Sullivan, "One Millisecond Face Alignment with an Ensemble of Regression Trees," in Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 1867-1874, Jun. 2014.