[Learning] (Code + Paper) ICLR 2016: "All you need is a good init"


Source: 我爱机器学习

This ICLR 2016 paper, "All you need is a good init", examines how weight initialization affects the final learning performance of CNNs and proposes an initialization scheme that achieves an effect similar to Batch Normalization and is well suited to ReLU, showing good results on several datasets.
Abstract:
Layer-sequential unit-variance (LSUV) initialization - a simple method for weight initialization for deep net learning - is proposed. The method consists of two steps. First, pre-initialize the weights of each convolution or inner-product layer with orthonormal matrices. Second, proceed from the first to the final layer, normalizing the variance of the output of each layer to be equal to one.
Experiments with different activation functions (maxout, ReLU-family, tanh) show that the proposed initialization leads to learning of very deep nets that (i) produces networks with test accuracy better than or equal to standard methods and (ii) is at least as fast as the complex schemes proposed specifically for very deep nets such as FitNets (Romero et al. (2015)) and Highway (Srivastava et al. (2015)).
Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets and the state-of-the-art, or very close to it, is achieved on the MNIST, CIFAR-10/100 and ImageNet datasets.
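For intuition, here is a minimal NumPy sketch of the two LSUV steps applied to a plain fully-connected ReLU network. This is an illustrative assumption, not the released implementation (the code linked below is Caffe-based); the names `orthonormal` and `lsuv_init`, the layer widths, and the random batch are all hypothetical.

```python
import numpy as np

def orthonormal(rows, cols, rng):
    # LSUV step 1: pre-initialize with an orthonormal matrix obtained via QR
    # decomposition of a random Gaussian matrix.
    a = rng.standard_normal((rows, cols))
    if rows < cols:
        q, _ = np.linalg.qr(a.T)
        return q.T
    q, _ = np.linalg.qr(a)
    return q

def lsuv_init(weights, x, tol=0.1, max_iters=10):
    # LSUV step 2: walk the layers from first to last and rescale each weight
    # matrix until the layer's output over the batch x has roughly unit variance.
    relu = lambda z: np.maximum(z, 0.0)
    h = x
    for W in weights:
        for _ in range(max_iters):
            std = (h @ W).std()
            if abs(std - 1.0) < tol:
                break
            W /= std              # rescaling keeps the orthonormal directions
        h = relu(h @ W)           # propagate the batch to feed the next layer
    return weights

# Hypothetical usage on a small MLP; in practice x would be a batch of real data.
rng = np.random.default_rng(0)
widths = [64, 128, 128, 10]
weights = [orthonormal(m, n, rng) for m, n in zip(widths[:-1], widths[1:])]
batch = rng.standard_normal((256, widths[0]))
weights = lsuv_init(weights, batch)
```

The same per-layer rescaling loop would apply to convolution layers once their kernels are reshaped into matrices.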

Paper:
http://arxiv.org/abs/1511.06422

Code:
https://github.com/ducha-aiki/LSUVinit

Original post:
http://weibo.com/5066241201/E326CFVNX?ref=collection&type=comment#_rnd1470914248977



