Some Interesting Papers from NIPS 2012

W. Koolen, D. Adamskiy, M. Warmuth
Putting Bayes to sleep
Some signals look sort of jump-Markov — the distribution of the data changes over time, so there are segments with distribution A, then it switches to B, then perhaps back to A, and so on. A prediction procedure which “mixes past posteriors” works well in this setting, but it was not clear why. This paper gives the predictor a Bayesian interpretation as inference in a “sleeping experts” setting.
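
To make the “mixing past posteriors” idea concrete, here is a minimal Python sketch under my own assumptions (the learning rate, mixing rate, and function names are illustrative, not taken from the paper): each round does the usual Bayesian/multiplicative update, then mixes in the average of all past posteriors, which is what lets an expert that was good in an earlier segment “wake up” again.

```python
import numpy as np

def mix_past_posteriors(loss_matrix, alpha=0.05, eta=1.0):
    """Sketch of a 'mixing past posteriors' predictor.

    loss_matrix: (T, K) array of per-round losses for K experts.
    alpha: mixing rate toward the pool of past posteriors (assumed).
    eta:   learning rate of the multiplicative (Bayesian) update (assumed).
    Returns the sequence of weight vectors used for prediction.
    """
    T, K = loss_matrix.shape
    w = np.full(K, 1.0 / K)        # current prediction weights
    past = [w.copy()]              # pool of past posteriors
    weights_used = []
    for t in range(T):
        weights_used.append(w.copy())
        # Bayesian / multiplicative-weights posterior update
        v = w * np.exp(-eta * loss_matrix[t])
        v /= v.sum()
        past.append(v.copy())
        # mix the posterior with the average of all past posteriors, so an
        # expert that did well in an earlier segment can recover quickly
        w = (1 - alpha) * v + alpha * np.mean(past, axis=0)
    return np.array(weights_used)
```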

 

J. Duchi, M. Jordan, M. Wainwright, A. Wibisono
Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods
This paper looked at stochastic gradient descent when function evaluations are cheap but gradient evaluations are expensive. The idea is to compute a (nearly) unbiased approximation to the gradient by evaluating the function at a point $x$ and at a randomly perturbed point $x + \delta u$, and then forming the finite-difference approximation to the gradient along the direction $u$. Some of the attendees claimed this is similar to an approach proposed by Nesterov, but the distinction was unclear to me.
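
Here is a rough Python sketch of that two-point, function-evaluation-only gradient estimate (the perturbation size, step size, and helper names are my own illustrative choices, not the paper's exact estimator or rates):

```python
import numpy as np

def two_point_gradient(f, x, delta=1e-3, rng=None):
    """Two-point zero-order gradient estimate: uses only function values.

    For a Gaussian direction u, (f(x + delta*u) - f(x)) / delta * u is an
    (approximately) unbiased estimate of the gradient of a smoothed f.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)            # random search direction
    return (f(x + delta * u) - f(x)) / delta * u

def zero_order_sgd(f, x0, steps=1000, step_size=0.01, delta=1e-3, seed=0):
    """Stochastic gradient descent driven by the two-point estimator."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= step_size * two_point_gradient(f, x, delta=delta, rng=rng)
    return x

# toy usage: minimize a quadratic using only function evaluations
if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)
    print(zero_order_sgd(f, np.zeros(5)))
```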

 

J. Lloyd, D. Roy, P. Orbanz, Z. Ghahramani
Random function priors for exchangeable graphs and arrays
This paper looked at Bayesian modeling for structures like undirected graphs, which may represent interactions such as protein-protein interactions. Infinite random graphs whose distributions are invariant under permutations of the vertex set can be associated with a structure called a graphon. Here they put a prior on graphons, namely a Gaussian process prior, and then try to do inference on real graphs, for example to estimate the kernel function of the process.
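
As a rough illustration of the generative view (not the paper's exact model), one can draw latent uniform positions for the vertices, evaluate a function drawn from a GP at those pairs, squash it through a logistic link to get edge probabilities, and sample edges. The kernel and link below are my own illustrative choices:

```python
import numpy as np

def sample_graph_from_gp_graphon(n, length_scale=0.2, seed=0):
    """Sketch: draw a random graphon from a (symmetrized) GP prior, then
    sample an exchangeable graph from it. Kernel and link are assumptions."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                       # latent vertex positions in [0, 1]
    # RBF kernel between all pairs (u_i, u_j); the GP is only evaluated
    # at the n*n pairs we actually need
    pairs = np.array([(a, b) for a in u for b in u])
    d2 = ((pairs[:, None, :] - pairs[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * length_scale ** 2)) + 1e-6 * np.eye(n * n)
    f = rng.multivariate_normal(np.zeros(n * n), K).reshape(n, n)
    f = 0.5 * (f + f.T)                           # symmetrize: undirected graph
    p = 1.0 / (1.0 + np.exp(-f))                  # logistic link -> edge probabilities
    upper = np.triu(rng.uniform(size=(n, n)) < p, k=1)
    return (upper | upper.T).astype(int)          # symmetric adjacency matrix

print(sample_graph_from_gp_graphon(8))
```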

 

N. Le Roux, M. Schmidt, F. Bach
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
This was a paper marked for oral presentation — the idea is that in gradient descent it is expensive to evaluate gradients if your objective function looks like $g(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$, where the terms $f_i$ correspond to your data points and $n$ is huge. This is because you have to evaluate $n$ gradients at every step. On the other hand, stochastic gradient descent can be slow because at each iteration it picks a single index $i$ and does a gradient step using $\nabla f_i$ alone. Here what they do at step $k$ is pick a random index $i_k$ and evaluate its gradient, but then take a gradient step that uses all $n$ points. For points $j \ne i_k$ they just use the gradient from the last time $j$ was picked. Let $\tau(j, k)$ be the last time $j$ was picked at or before time $k$ (so $\tau(i_k, k) = k$). Then they take a gradient step like $x^{k+1} = x^k - \frac{\alpha_k}{n} \sum_{j=1}^{n} \nabla f_j\big(x^{\tau(j,k)}\big)$. This works surprisingly well.
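
A minimal Python sketch of this "remember the last gradient of every term" update, with my own choices for names, constants, and the toy problem (the paper's actual step-size rules differ):

```python
import numpy as np

def stochastic_average_gradient(grad_fi, x0, n, steps=10000, step_size=0.01, seed=0):
    """grad_fi(x, i) returns the gradient of the i-th term f_i at x.

    Each step refreshes the gradient of one random term, but the update
    uses the stored (possibly stale) gradients of all n terms.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    memory = np.zeros((n, x.size))     # last seen gradient of each f_i
    running_sum = np.zeros(x.size)     # sum of the stored gradients
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_fi(x, i)
        running_sum += g - memory[i]   # swap in the fresh gradient for term i
        memory[i] = g
        x -= step_size * running_sum / n
    return x

# toy usage: least squares, f_i(x) = 0.5 * (a_i . x - b_i)^2
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
    grad_fi = lambda x, i: (A[i] @ x - b[i]) * A[i]
    print(stochastic_average_gradient(grad_fi, np.zeros(5), n=100))
```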

 

Stephane Mallat
Classification with Deep Invariant Scattering Networks
This was an invited talk — Mallat was trying to explain why deep networks seem to do learning well (it all seems a bit like black magic), but his explanation felt a bit heuristic to me in the end. His first main point was that wavelets are good at capturing geometric structure like translation and rotation, and appear to have favorable properties with respect to “distortions” in the signal. The notion of distortion is a little vague, but the idea is that if two signals (say images) are similar but one is slightly distorted, they should map to representations which are close to each other. The mathematics behind his analysis framework was group-theoretic — he wants to estimate the group of actions which manipulate images. In a sense, this is a control-theory view of the problem (at least it seemed that way to me). The second point that I understood was that sparsity in representation has a big role to play in building efficient, layered representations. I think I’d have to see the talk again to understand it better; in the end I wasn’t sure that I understood why deep networks are good, but I did learn some more interesting things about wavelet representations, which is cool.

From: http://ergodicity.net/2012/12/05/nips-2012-day-two/
