Weight Uncertainty in Neural Network

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1613-1622, 2015.

Abstract

We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
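For concreteness, the variational free energy that Bayes by Backprop minimises is usually written (up to notation) as

F(D, θ) = KL[ q(w | θ) || P(w) ] − E_{q(w | θ)}[ log P(D | w) ],

that is, a complexity cost keeping the learnt weight distribution close to the prior, plus an expected data-fit (negative log-likelihood) cost. The sketch below is an illustrative PyTorch implementation of the idea, not the authors' code: it assumes a single standard-Gaussian prior with a closed-form KL term (the paper uses a scale-mixture prior estimated by Monte Carlo), and the layer sizes, learning rate, and dataset size are placeholder assumptions.

# Illustrative sketch of Bayes by Backprop; assumes PyTorch is available.
# A single standard-Gaussian prior replaces the paper's scale-mixture prior
# so that the KL complexity cost has a closed form.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a factorised Gaussian posterior q(w | mu, rho)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        # Variational parameters theta = (mu, rho); sigma = softplus(rho) > 0.
        self.w_mu  = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.b_mu  = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))

    @staticmethod
    def _kl_to_std_normal(mu, sigma):
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights.
        return (0.5 * (sigma.pow(2) + mu.pow(2) - 1.0) - sigma.log()).sum()

    def forward(self, x):
        # Reparameterisation: w = mu + sigma * eps is differentiable
        # w.r.t. (mu, rho), so ordinary backpropagation applies.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        self.kl = (self._kl_to_std_normal(self.w_mu, w_sigma)
                   + self._kl_to_std_normal(self.b_mu, b_sigma))
        return F.linear(x, w, b)

# One training step: free energy = complexity cost + expected NLL,
# with the KL spread across minibatches (60000 is a placeholder dataset size).
layer = BayesianLinear(784, 10)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))  # dummy minibatch
logits = layer(x)
loss = layer.kl / 60000.0 + F.cross_entropy(logits, y)
loss.backward()
opt.step()

At test time, one would average predictions over several weight samples drawn from q(w | θ); the per-weight sigma values then quantify the weight uncertainty used for regression and exploration in the paper.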

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-blundell15,
  title     = {Weight Uncertainty in Neural Network},
  author    = {Blundell, Charles and Cornebise, Julien and Kavukcuoglu, Koray and Wierstra, Daan},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1613--1622},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/blundell15.pdf},
  url       = {https://proceedings.mlr.press/v37/blundell15.html},
  abstract  = {We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.}
}
APA
Blundell, C., Cornebise, J., Kavukcuoglu, K. & Wierstra, D. (2015). Weight Uncertainty in Neural Network. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1613-1622. Available from https://proceedings.mlr.press/v37/blundell15.html.
