[1] M. Adya and F. Collopy. How effective are neural networks at forecasting and prediction? A review and evaluation. Journal of Forecasting, 17(5-6):481–495, 1998.
[2] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Proceedings of the 20th International Conference on Neural Information Processing Systems (NIPS), pages 161–168, Vancouver, Canada, 2007.
[4] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303–314, Dec. 1989.
[5] J. Donahue, L. A. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):677–691, 2017.
[6] R. Engle. GARCH 101: the use of ARCH/GARCH models in applied econometrics. Journal of Economic Perspectives, 15(4):157–168, 2001.
[7] E. F. Fama. Efficient capital markets: a review of theory and empirical work. Journal of Finance, 25(2):383–417, 1970.
[8] C. L. Giles, S. Lawrence, and A. C. Tsoi. Noisy time series prediction using recurrent neural networks and grammatical inference. Machine Learning, 44(1):161–183, 2001.
[9] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[10] T. H. Hann and E. Steurer. Much ado about nothing? Exchange rate forecasting: neural networks vs. linear models using monthly and weekly data. Neurocomputing, 10(4):323–339, 1996.
[11] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, New York, 2001.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] W. Huang, K. K. Lai, Y. Nakamori, and S. Wang. Forecasting foreign exchange rates with artificial neural networks: a review. International Journal of Information Technology and Decision Making, 3(1):145–165, 2004.
[14] L. Ingber. Statistical mechanics of nonlinear nonequilibrium financial markets. Mathematical and Computer Modelling, 23(7):101–121, 1996.
[15] D. P. Kingma and J. Ba. Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, May 2015.
[16] A. Lapedes and R. Farber. Nonlinear signal processing using neural networks: prediction and system modeling. Technical Report LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, NM, 1987.
[17] A. Lo and A. MacKinlay. A non-random walk down Wall Street. Princeton University Press, Princeton, NJ, 1999.
[18] B. Malkiel. A random walk down Wall Street. Norton, New York, 1996.
[19] P.-F. Pai and C.-S. Lin. A hybrid ARIMA and support vector machines model in stock price forecasting. Omega, 33(6):497–505, 2005.
[20] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 1310–1318, Atlanta, GA, June 2013.
[21] N. Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.
[22] J. Snyman. Practical mathematical optimization: an introduction to basic optimization theory and classical and new gradient-based algorithms. Springer, Berlin, 2005.
[23] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
[24] I. Sutskever, J. Martens, and G. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024, Bellevue, WA, June 2011.
[25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS-14), pages 3104–3112, Montreal, Canada, December 2014.
[26] R. S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks. In Proceedings of the 8th Annual Conference of the Cognitive Science Society (CogSci), pages 823–831, Amherst, MA, 1986.
[27] S. Taylor. Modelling financial time series. John Wiley & Sons, Chichester, UK, 1986.
[28] F.-M. Tseng, G.-H. Tzeng, H.-C. Yu, and B. J. C. Yuan. Fuzzy ARIMA model for forecasting the foreign exchange market. Fuzzy Sets and Systems, 118(1):9–19, 2001.
[29] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: a neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3156–3164, Boston, MA, June 2015.
[30] D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
[31] J. Yao and C. L. Tan. A case study on using neural networks to perform technical forecasting of forex. Neurocomputing, 34(1-4):79–98, 2000.
[32] G. Zhang, B. Eddy Patuwo, and M. Y. Hu. Forecasting with artificial neural networks: the state of the art. International Journal of Forecasting, 14(1):35–62, 1998.