Bibliography

[1] Anderson, W.N. Jr., Kleindorfer, G.B., Kleindorfer, P.R., and Woodroofe, M.B., "Consistent estimates of the parameters of a linear system," Ann. Math. Stat., vol. 40, pp. 2064-2075, 1969.
[2] Kim, Adlar Jeewook, "Input/Output Hidden Markov Models for Modeling Stock Order Flows," M.I.T. A.I. Lab Tech. Rep. No. 1370, Jan. 2001.
[3] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K., "Phoneme recognition using time-delay neural networks," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 37, pp. 328-339, 1989.
[4] Poritz, A., "Hidden Markov models: a guided tour," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, pp. 7-13, 1988.
[5] Acero, A. and Stern, R.M., "Environmental robustness in automatic speech recognition," in Proc. ICASSP-90, pp. 849-852, Albuquerque, NM, 1990.
[6] Admati, A.R. and Pfleiderer, P., "A Theory of Intraday Patterns: Volume and Price Variability," Review of Financial Studies, vol. 1, pp. 3-40, 1988.
[7] Bahl, L.R. and Jelinek, F., "A maximum likelihood approach to continuous speech recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 5, 1983.
[8] Bahl, L.R., Brown, P.F., de Souza, P.V., and Mercer, R.L., "A tree-based statistical language model for natural language speech recognition," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 37, no. 7, pp. 1001-1008, 1989.
[9] Bollerslev, T., "On the Correlation Structure for the Generalised Autoregressive Conditional Heteroskedastic Process," Journal of Time Series Analysis, vol. 9, pp. 121-132, 1988.
[10] Balasubramanian, V., "Equivalence and Reduction of Hidden Markov Models," M.I.T. A.I. Lab Tech. Rep. No. 1370, Jan. 1993.
[11] Kim, Bo-Sung, Park, Bok-Gue, Cho, Jun-Dong, and Chang, Young-Hoon, "Low Power Viterbi Search Architecture Using Inverse Hidden Markov Model," pp. 724-732, 2000.
[12] Becchetti, C. and Ricotti, L.P., Speech Recognition: Theory and C++ Implementation, John Wiley & Sons Ltd, 1999.
[13] Roberts, H., "Statistical versus Clinical Prediction of the Stock Market," unpublished manuscript, Center for Research in Security Prices, University of Chicago, May 1967.
[14] Robinson, P., "Time Series with Strong Dependence," in C.A. Sims (ed.), Advances in Econometrics: Sixth World Congress, vol. I, Cambridge University Press, pp. 47-95, 1994.
[15] Turner, C.M., Startz, R., and Nelson, C.R., "A Markov model of heteroscedasticity, risk and learning in the stock market," Journal of Financial Economics, vol. 25, pp. 3-22, 1989.
[16] Steinhaus, S., "Comparison of mathematical programs for data analysis," University of Frankfurt, Germany, http://www.informatik.uni-frankfurt.de/, 1999.
[17] Starck, J.L., Murtagh, F., and Bijaoui, A., Image and Data Analysis: The Multiscale Approach, Cambridge University Press, 1998.
[18] Chen, S.H. and Tan, C.W., "Estimating the Complexity Function of Financial Time Series," Journal of Management and Economics, vol. 3, Oct.-Nov. 1999.
[19] Hellström, T. and Holmström, K., "Predictable Patterns in Stock Returns," Department of Mathematics and Physics, Mälardalen University, 1998.
[20] Bagella, M., Becchetti, L., and Carpentieri, A., "The first shall be last. Do contrarian strategy premia violate financial market efficiency?," mimeo, 1988.
[21] Easley, D., Kiefer, N., and O'Hara, M., "The information content of the trading process," Journal of Empirical Finance, vol. 4, pp. 159-186, 1997.
[22] Rabiner, L.R., "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[23] Rabiner, L.R., "A tutorial on hidden Markov models and selected applications in speech recognition," AT&T Bell System Technical Journal, vol. 62, no. 4, April 1983.
[24] Jackson, L.B., Digital Filters and Signal Processing with MATLAB Exercises, Kluwer Academic Publishers, 1995.
[25] Bengio, Y., "Markovian models for sequential data," Neural Computing Surveys, vol. 2, pp. 129-162, 1999.
[26] Bengio, Y. and Frasconi, P., "Input/output HMMs for sequence processing," IEEE Transactions on Neural Networks, vol. 7, no. 5, pp. 1231-1249, 1996.
[27] Wilpon, J.G., et al., "Automatic Recognition of Keywords in Unconstrained Speech Using Hidden Markov Models," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 38, pp. 1870-1878, 1990.
[28] Deller, J.R., Discrete-Time Processing of Speech Signals, Macmillan, 1993.
[29] Bergh, J.G., Soong, F.K., and Rabiner, L.R., "Incorporation of Temporal Structure into a Vector-Quantization-Based Preprocessor for Speaker-Independent, Isolated-Word Recognition," AT&T Technical Journal, vol. 64, no. 5, May-June 1985.
[30] Wilpon, J.G. and Rabiner, L.R., "A Modified K-Means Clustering Algorithm for Use in Isolated Word Recognition," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 33, no. 3, pp. 587-594, June 1985.
[31] Lütkepohl, H., Introduction to Multiple Time Series Analysis, Berlin, Heidelberg, New York: Springer, 1991.
[32] Liptser, R.S. and Shiryayev, A.N., Statistics of Random Processes, vols. I and II, Berlin, Heidelberg, New York: Springer, 1977.
[33] Kass, R.E. and Raftery, A.E., "Bayes Factors," Technical Report 254, Department of Statistics, University of Washington, 1995.
[34] Lin, T. and Horne, B.G., "Learning Long-Term Dependencies in NARX Recurrent Neural Networks," 1996 IEEE Conference on Neural Networks, pp. 1329-1338, 1996.
[35] Neal, R.M., "Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification," Technical Report No. 9702, Dept. of Statistics, University of Toronto, ftp://ftp.cs.toronto.edu/pub/radford, 1997.
[36] Valtchev, V., Kapadia, S., and Young, S.J., "Recurrent Input Transformation for Hidden Markov Models," Cambridge University Engineering Department Technical Report, 1995.