Machine Learning for Physicists
ver. 20171218
■ Lecture I: Prelude 조정효 Junghyo Jo (KIAS)
1. Perceptron
2. Recurrent neural network
3. Boltzmann machine
4. Causal inference
■ Lecture II: Machine Learning 101 노영균 Yung-Kyun Noh (SNU)
1. Probability theory
2. Bayes classifier and regression
3. Directed and undirected graphical models
4. Inference using the Kalman filter
5. Latent variable models
■ Lecture III: Machine Learning in Practice 안강헌 Kang-Hun Ahn (CNU)
1. Convolutional neural network
2. Generative adversarial network
3. Python and TensorFlow
■ Special Lecture: Information Dynamics Chaoming Song (Miami)
1. Least action principle vs. maximum log-likelihood
2. Stochastic processes: a mini primer
3. Reinforced Poisson process and citation dynamics
4. Substitutional systems
* General References
- David Rumelhart, Geoffrey Hinton and Ronald Williams (1986),
Learning representations by back-propagating errors, Nature 323: 533-536.
- Shun-ichi Amari, Koji Kurata and Hiroshi Nagaoka (1992), Information geometry
of Boltzmann machines, IEEE Transactions on Neural Networks 3: 260-271.
- Herbert Jaeger (2002), A tutorial on training recurrent neural networks, covering
BPTT, RTRL, EKF and the “echo state network” approach, GMD Report 159,
Fraunhofer Institute AIS.
- Geoffrey Hinton (2012), A practical guide to training restricted Boltzmann machines.
In: Grégoire Montavon, Geneviève B. Orr and Klaus-Robert Müller (eds), Neural Networks:
Tricks of the Trade, Lecture Notes in Computer Science, vol 7700. Springer, Berlin, Heidelberg.
- Geoffrey Hinton and Ruslan Salakhutdinov (2006), Reducing the dimensionality of
data with neural networks, Science 313: 504-507.
- Ilya Sutskever, Geoffrey Hinton and Graham Taylor (2009), The recurrent temporal
restricted Boltzmann machine. Advances in Neural Information Processing
Systems 21, MIT Press, Cambridge, MA.