References

[1] World Health Organization, "Deafness and hearing loss," https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss, 2024.
[2] K. Kudrinko et al., "Wearable sensor-based sign language recognition: A comprehensive review," IEEE Rev. Biomed. Eng., vol. 14, pp. 82–97, 2020.
[3] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," Ann. Math. Statist., vol. 37, no. 6, pp. 1554–1563, 1966.
[4] T. Starner, J. Weaver, and A. Pentland, "Real-time American sign language recognition using desk and wearable computer based video," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1371–1375, 1998.
[5] H.-L. Lou, "Implementing the Viterbi algorithm," IEEE Signal Process. Mag., pp. 42–52, 1995.
[6] X. Liu et al., "3D skeletal gesture recognition via hidden states exploration," IEEE Trans. Image Process., vol. 29, pp. 4583–4597, 2020.
[7] G. Fang, W. Gao, X. Chen, C. Wang, and J. Ma, "Signer-independent continuous sign language recognition based on SRN/HMM," Proc. Int. Gesture Workshop, pp. 76–85, 2001.
[8] R.-H. Liang and M. Ouhyoung, "A real-time continuous gesture recognition system for sign language," Proc. 3rd IEEE Int. Conf. Automatic Face and Gesture Recognition, Nara, Japan, pp. 558–567, 1998.
[9] N. Tubaiz, T. Shanableh, and K. Assaleh, "Glove-based continuous Arabic sign language recognition in user-dependent mode," IEEE Trans. Human-Mach. Syst., vol. 45, no. 4, pp. 526–533, 2015.
[10] E. Alpaydin, Introduction to Machine Learning, MIT Press, 2010.
[11] J. Wu, L. Sun, and R. Jafari, "A wearable system for recognizing American sign language in real-time using IMU and surface EMG sensors," IEEE J. Biomed. Health Informat., vol. 20, no. 5, pp. 1281–1290, 2016.
[12] W. Aly, S. Aly, and S. Almotairi, "User-independent American Sign Language alphabet recognition based on depth image and PCANet features," IEEE Access, vol. 7, pp. 123138–123150, 2019.
[13] H. Luqman, "An efficient two-stream network for isolated sign language recognition using accumulative video motion," IEEE Access, vol. 10, pp. 93785–93798, 2022.
[14] O. Koller, N. C. Camgoz, H. Ney, and R. Bowden, "Weakly supervised learning with multi-stream CNN-LSTM-HMMs to discover sequential parallelism in sign language videos," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 9, pp. 2306–2320, 2020.
[15] J. Forster, C. Schmidt, T. Hoyoux, O. Koller, U. Zelle, J. Piater, and H. Ney, "RWTH-PHOENIX-Weather: A large vocabulary sign language recognition and translation corpus," Proc. Int. Conf. Language Resources Eval., pp. 3785–3789, 2012.
[16] K. Lin, X. Wang, L. Zhu, B. Zhang, and Y. Yang, "SKIM: Skeleton-based isolated sign language recognition with part mixing," IEEE Trans. Multimedia, vol. 26, pp. 4271–4280, 2024.
[17] J. Huang, W. Zhou, H. Li, and W. Li, "Attention-based 3D-CNNs for large-vocabulary sign language recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 29, no. 9, pp. 2822–2832, 2019.
[18] M. Al-Hammadi, G. Muhammad, W. Abdul, M. Alsulaiman, M. A. Bencherif, and M. A. Mekhtiche, "Hand gesture recognition for sign language using 3DCNN," IEEE Access, vol. 8, pp. 79491–79509, 2020.
[19] Z. Wang et al., "Hear sign language: A real-time end-to-end sign language recognition system," IEEE Trans. Mobile Comput., vol. 21, no. 7, pp. 2398–2410, 2022.
[20] M. A. Bencherif et al., "Arabic sign language recognition system using 2D hands and body skeleton data," IEEE Access, vol. 9, pp. 59612–59627, 2021.
[21] H. Zhou, W. Zhou, Y. Zhou, and H. Li, "Spatial-temporal multi-cue network for sign language recognition and translation," IEEE Trans. Multimedia, vol. 24, pp. 768–779, 2021.
[22] A. Graves and J. Schmidhuber, "Framewise phoneme classification with bidirectional LSTM and other neural network architectures," Neural Netw., vol. 18, no. 5–6, pp. 602–610, 2005.
[23] K. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," Proc. Conf. Empirical Methods Nat. Lang. Process., pp. 1724–1734, 2014.
[24] B. Fang, J. Co, and M. Zhang, "DeepASL: Enabling ubiquitous and non-intrusive word and sentence-level sign language translation," Proc. 15th ACM Conf. Embedded Netw. Sensor Syst., pp. 1–13, 2017.
[25] E. Rakun, A. M. Arymurthy, L. Y. Stefanus, A. F. Wicaksono, and I. W. W. Wisesa, "Recognition of sign language system for Indonesian language using long short-term memory neural networks," Adv. Sci. Lett., vol. 24, no. 2, pp. 999–1004, 2018.
[26] N. Heidari and A. Iosifidis, "Temporal attention-augmented graph convolutional network for efficient skeleton-based human action recognition," Proc. 25th IEEE Int. Conf. Pattern Recognition, Milan, Italy, pp. 7907–7914, 2021.
[27] G. A. Prasath and K. Annapurani, "Prediction of sign language recognition based on multi layered CNN," Multimedia Tools Appl., vol. 82, no. 19, pp. 29649–29669, 2023.
[28] Y. Yang and D. Ramanan, "Articulated pose estimation with flexible mixtures-of-parts," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1385–1392, 2011.
[29] M. Basavarajaiah, "6 basic things to know about Convolution," Medium, https://medium.com/@bdhuma/6-basic-things-to-know-about-convolution-daef5e1bc411, 2019.
[30] I. Pisa et al., "Denoising autoencoders and LSTM-based artificial neural networks data processing for its application to internal model control in industrial environments—The wastewater treatment plant control case," Sensors, vol. 20, no. 13, p. 3743, 2020.
[31] Google AI, "MediaPipe Solutions Guide: Hand Landmarker," Google AI for Developers, https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker, 2024.
[32] Google AI, "MediaPipe Solutions Guide: Pose Landmarker," Google AI for Developers, https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker, 2024.