References
Bellman, Richard. 1957. Dynamic Programming. Princeton University Press, Princeton, NJ.
Bergstra, James, and Yoshua Bengio. 2012. “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research 13: 281–305. http://dl.acm.org/citation.cfm?id=2188395.
Bishop, C. M. 1995. “Regularization and Complexity Control in Feed-Forward Networks.” In Proceedings of the International Conference on Artificial Neural Networks (ICANN’95) 1: 141–48.
Black, Fischer, and Myron Scholes. 1973. “The Pricing of Options and Corporate Liabilities.” Journal of Political Economy 81 (3): 637–54. https://EconPapers.repec.org/RePEc:ucp:jpolec:v:81:y:1973:i:3:p:637-54.
Corazza, Marco, and Francesco Bertoluzzo. 2014. “Q-Learning-Based Financial Trading Systems with Applications.” University Ca’ Foscari of Venice, Dept. of Economics Working Paper Series No. 15/WP/2014. http://dx.doi.org/10.2139/ssrn.2507826.
Culkin, Robert, and Sanjiv R. Das. 2017. “Machine Learning in Finance: The Case of Deep Learning for Option Pricing.” Journal of Investment Management 15 (4): 1–9.
Dempster, M. A. H., and V. Leemans. 2006. “An Automated FX Trading System Using Adaptive Reinforcement Learning.” Expert Systems with Applications 30 (3): 543–52. doi:10.1016/j.eswa.2005.10.012.
Duarte, Victor. 2017. “Macro, Finance, and Macro Finance: Solving Nonlinear Models in Continuous Time with Machine Learning.” Working Paper, MIT.
Fukushima, Kunihiko. 1975. “Cognitron: A Self-Organizing Multilayered Neural Network.” Biological Cybernetics 20 (3): 121–36. doi:10.1007/BF00342633.
———. 1980. “Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position.” Biological Cybernetics 36 (4): 193–202. doi:10.1007/BF00344251.
Hahnloser, R., R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. 2000. “Digital Selection and Analogue Amplification Coexist in a Cortex-Inspired Silicon Circuit.” Nature 405: 947–51.
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015a. “Deep Residual Learning for Image Recognition.” CoRR abs/1512.03385. http://arxiv.org/abs/1512.03385.
———. 2015b. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” CoRR abs/1502.01852. http://arxiv.org/abs/1502.01852.
Hinton, G.E., S. Osindero, and Y. Teh. 2006. “A Fast Learning Algorithm for Deep Belief Nets.” Neural Computation 18: 1527–54.
Howard, Ronald A. 1966. “Dynamic Programming.” Management Science 12 (5): 317–48. doi:10.1287/mnsc.12.5.317.
Hubel, D. H., and T. N. Wiesel. 1959. “Receptive Fields of Single Neurones in the Cat’s Striate Cortex.” The Journal of Physiology 148 (3): 574–91. doi:10.1113/jphysiol.1959.sp006308.
———. 1962. “Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex.” The Journal of Physiology 160 (1): 106–54. doi:10.1113/jphysiol.1962.sp006837.
Hubel, David H., and Torsten N. Wiesel. 1968. “Receptive Fields and Functional Architecture of Monkey Striate Cortex.” Journal of Physiology (London) 195: 215–43.
Ioffe, Sergey, and Christian Szegedy. 2015. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” CoRR abs/1502.03167. http://arxiv.org/abs/1502.03167.
Jiang, Z., D. Xu, and J. Liang. 2017. “A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem.” ArXiv E-Prints, June. http://adsabs.harvard.edu/abs/2017arXiv170610059J.
Kotsiantis, S. B., et al. 2006. “Data Preprocessing for Supervised Learning.”
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, 1097–1105. NIPS’12. Lake Tahoe, Nevada. http://dl.acm.org/citation.cfm?id=2999134.2999257.
LeCun, Yann, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. “Gradient-Based Learning Applied to Document Recognition.” Proceedings of the IEEE 86 (11): 2278–2324.
McCulloch, Warren, and Walter Pitts. 1943. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5: 115–33.
Minsky, Marvin, and Seymour Papert. 1969. Perceptrons. Cambridge, MA: MIT Press.
Mnih, V., K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. “Playing Atari with Deep Reinforcement Learning.” ArXiv E-Prints, December. http://adsabs.harvard.edu/abs/2013arXiv1312.5602M.
Montavon, Grégoire, Genevieve B. Orr, and Klaus-Robert Müller, eds. 2012. Neural Networks: Tricks of the Trade - Second Edition. Vol. 7700. Lecture Notes in Computer Science. Springer. doi:10.1007/978-3-642-35289-8.
Moody, John E., and Matthew Saffell. 2001. “Learning to Trade via Direct Reinforcement.” IEEE Transactions on Neural Networks 12 (4): 875–89.
Nair, Vinod, and Geoffrey Hinton. 2010. “Rectified Linear Units Improve Restricted Boltzmann Machines.” In Proceedings of the 27th International Conference on Machine Learning (ICML).
Rosenblatt, F. 1958. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review 65: 386–408.
———. 1962. Principles of Neurodynamics. New York, NY: Spartan.
Roux, N. Le, and Y. Bengio. 2008. “Representational Power of Restricted Boltzmann Machines and Deep Belief Networks.” Neural Computation 20 (6): 1631–49.
———. 2010. “Deep Belief Networks Are Compact Universal Approximators.” Neural Computation 22 (8): 2192–2207.
Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. 2015. “ImageNet Large Scale Visual Recognition Challenge.” International Journal of Computer Vision (IJCV) 115 (3): 211–52. doi:10.1007/s11263-015-0816-y.
Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, et al. 2016. “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature 529 (7587). Nature Publishing Group: 484–89. doi:10.1038/nature16961.
Simonyan, Karen, and Andrew Zisserman. 2014. “Very Deep Convolutional Networks for Large-Scale Image Recognition.” CoRR abs/1409.1556. http://arxiv.org/abs/1409.1556.
Srivastava, N. 2013. “Improving Neural Networks with Dropout.” Master’s Thesis, University of Toronto.
Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” J. Mach. Learn. Res. 15 (1). JMLR.org: 1929–58. http://dl.acm.org/citation.cfm?id=2627435.2670313.
Sugiyama, Masashi, Hirotaka Hachiya, and Tetsuro Morimura. 2013. Statistical Reinforcement Learning: Modern Machine Learning Approaches. 1st ed. Chapman & Hall/CRC.
Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. 2014. “Sequence to Sequence Learning with Neural Networks.” In Advances in Neural Information Processing Systems (NIPS). arXiv:1409.3215.
Sutton, Richard S., and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press. http://www.cs.ualberta.ca/~sutton/book/the-book.html.
Szegedy, Christian, Sergey Ioffe, and Vincent Vanhoucke. 2016. “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning.” CoRR abs/1602.07261. http://arxiv.org/abs/1602.07261.
Zeiler, Matthew D., and Rob Fergus. 2013. “Visualizing and Understanding Convolutional Networks.” CoRR abs/1311.2901. http://arxiv.org/abs/1311.2901.