Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Christian Szegedy
Feb 10, 2015
e-Print: arXiv:1502.03167 [cs.LG]

Abstract:
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
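
The transform the abstract describes is compact enough to sketch. Below is a minimal NumPy version of the training-time forward pass for a fully connected layer: each feature is normalized by its mini-batch mean and variance, then scaled and shifted by learned parameters gamma and beta so the layer retains its representational power. The function name, the (N, D) batch layout, and the eps constant are illustrative assumptions; the paper additionally maintains population statistics for use at inference, which this sketch omits.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: mini-batch of shape (N, D); gamma, beta: learned scale/shift of shape (D,).
    # Names and eps value are illustrative, not from the paper's reference code.
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # scale and shift: identity is recoverable

# Toy usage on a batch of 4 examples with 3 features.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=5.0, size=(4, 3))
y = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0))  # approximately 0 per feature
print(y.std(axis=0))   # approximately 1 per feature
```

Because every step of the transform is differentiable, gradients flow through the mini-batch mean and variance as well, which is what lets normalization sit inside the architecture and be trained jointly with the rest of the model.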