<!DOCTYPE html>
<html lang="en">
<head>
<title>Index – Page 2 – B.log</title>
<meta charset="utf-8" />
<meta property="twitter:card" content="summary" />
<meta name="twitter:site" content="@art_sobolev" />
<meta property="og:title" content="Index – Page 2 – B.log" />
<meta property="og:description" content="Personal blog of Artem Sobolev, a Machine Learning professional with particular interest in Probabilistic Modeling, Bayesian Inference, Deep Learning, and beyond" />
<link rel="shortcut icon" href="/favicon.ico"/>
<link rel="stylesheet" type="text/css" href="/theme/css/default.css" />
<link rel="stylesheet" type="text/css" href="/theme/css/syntax.css" />
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Lato:,b" />
<script type="text/javascript">
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
macros: {
E: '\\mathop{\\mathbb{E}}'
}
},
svg: {
fontCache: 'global'
}
};
</script>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
</head>
<body>
<header>
<hgroup>
<h1><a href="/">B.log</a></h1>
<h2>Random notes mostly on Machine Learning</h2>
</hgroup>
</header>
<nav>
<menu>
<a href="/">Home</a>
<a href="/pages/about.html">About me</a>
<a href="http://feeds.feedburner.com/barmaley-exe-blog-feed">RSS feed</a>
</menu>
</nav>
<section>
<article>
<header>
<h3><a href="/posts/2017-09-10-stochastic-computation-graphs-continuous-case.html">Stochastic Computation Graphs: Continuous Case</a></h3>
<time>September 10, 2017</time>
</header>
<section><p>Last year I covered <a href="/tags/modern-variational-inference-series.html">some modern Variational Inference theory</a>. These methods are often used in conjunction with Deep Neural Networks to form deep generative models (VAE, for example) or to enrich deterministic models with stochastic control, which leads to better exploration. Or you might be interested in amortized inference.</p>
<p>All these cases turn your computation graph into a stochastic one – previously deterministic nodes now become random. And it's not obvious how to do backpropagation through these nodes. In <a href="/tags/stochastic-computation-graphs-series.html">this series</a> I'd like to outline possible approaches. This time we're going to see why the general approach works poorly, and what we can do in the continuous case.</p>
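<p>As a rough preview (the notation here is mine, just to sketch the issue): for a stochastic node $z \sim q_\theta(z)$ feeding into a loss $f(z)$, the general-purpose estimator relies on the identity $\nabla_\theta \E_{z \sim q_\theta}[f(z)] = \E_{z \sim q_\theta}[f(z) \nabla_\theta \log q_\theta(z)]$, which is unbiased but tends to have high variance, whereas in the continuous case one can sometimes reparametrize $z = g_\theta(\varepsilon)$ with parameter-free noise $\varepsilon$ and backpropagate through $f(g_\theta(\varepsilon))$ directly.</p>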
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2017-08-14-icml-2017.html">ICML 2017 Summaries</a></h3>
<time>August 14, 2017</time>
</header>
<section><p>Just like with <a href="/posts/2016-12-31-nips-2016-summaries.html">NIPS last year</a>, here's a list of ICML'17 summaries (updated as I stumble upon new ones).</p>
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2017-07-23-no-free-lunch-theorem.html">On No Free Lunch Theorem and some other impossibility results</a></h3>
<time>July 23, 2017</time>
</header>
<section><p>The more I talk to people online, the more I hear about the famous No Free Lunch Theorem (NFL theorem). Unfortunately, quite often people don't really understand what the theorem is about, and what its implications are. In this post I'd like to share my view on the NFL theorem, and some other impossibility results.</p>
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2017-01-29-matrix-and-vector-calculus-via-differentials.html">Matrix and Vector Calculus via Differentials</a></h3>
<time>January 29, 2017</time>
</header>
<section><p>Many tasks of machine learning can be posed as optimization problems. One comes up with a parametric model, defines a loss function, and then minimizes it in order to learn the optimal parameters. One very powerful tool of optimization theory is the use of smooth (differentiable) functions: those that can be locally approximated with linear functions.
We all surely know how to differentiate a function, but it's often more convenient to perform all the derivations in matrix form, since many computational packages like numpy or matlab are optimized for vectorized expressions.</p>
<p>In this post I want to outline the general idea of how one can calculate derivatives in vector and matrix spaces (but the idea is general enough to be applied to other algebraic structures).</p>
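<p>A minimal worked example of the approach (with an illustrative function, not one taken from the post): for $f(x) = x^\top A x$ the differential is $df = (dx)^\top A x + x^\top A \, dx = x^\top (A + A^\top) \, dx$, and reading off the linear map acting on $dx$ immediately gives the gradient $\nabla f(x) = (A + A^\top) x$, with no index bookkeeping.</p>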
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2016-12-31-nips-2016-summaries.html">NIPS 2016 Summaries</a></h3>
<time>December 31, 2016</time>
</header>
<section><p>I did not attend this year's NIPS, but I've gathered many summaries published online by those who did attend the conference.</p>
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2016-07-14-neural-variational-importance-weighted-autoencoders.html">Neural Variational Inference: Importance Weighted Autoencoders</a></h3>
<time>July 14, 2016</time>
</header>
<section><p>Previously we covered <a href="/posts/2016-07-11-neural-variational-inference-variational-autoencoders-and-Helmholtz-machines.html">Variational Autoencoders</a> (VAE) — a popular inference tool based on neural networks. In this post we'll consider a follow-up work from Toronto by Y. Burda, R. Grosse and R. Salakhutdinov, <a href="https://arxiv.org/abs/1509.00519">Importance Weighted Autoencoders</a> (IWAE). The crucial contribution of this work is the introduction of a new lower bound on the marginal log-likelihood $\log p(x)$ which generalizes the ELBO, but also allows one to use less accurate approximate posteriors $q(z \mid x, \Lambda)$.</p>
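<p>For reference, the bound in question has the form (sketched in my notation, with $z_1, \dots, z_K$ drawn independently from $q(z \mid x, \Lambda)$)
$$ \mathcal{L}_K = \E_{z_{1:K} \sim q} \left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k \mid x, \Lambda)} \right], $$
which recovers the usual ELBO at $K = 1$ and approaches $\log p(x)$ as $K$ grows.</p>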
<p>For dessert we'll discuss another paper, <a href="https://arxiv.org/abs/1602.06725">Variational inference for Monte Carlo objectives</a> by A. Mnih and D. Rezende, which aims to broaden the applicability of this approach to models where the reparametrization trick cannot be used (e.g. for discrete variables).</p>
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2016-07-11-neural-variational-inference-variational-autoencoders-and-helmholtz-machines.html">Neural Variational Inference: Variational Autoencoders and Helmholtz machines</a></h3>
<time>July 11, 2016</time>
</header>
<section><p>So far we've had little of the "neural" in our VI methods. Now it's time to fix that, as we're going to consider <a href="https://arxiv.org/abs/1312.6114">Variational Autoencoders</a> (VAE), a paper by D. Kingma and M. Welling, which made a lot of buzz in the ML community. It has two main contributions: a new approach (AEVB) to large-scale inference in non-conjugate models with continuous latent variables, and a probabilistic model of autoencoders as an example of this approach. We then discuss connections to <a href="https://en.wikipedia.org/wiki/Helmholtz_machine">Helmholtz machines</a> — a predecessor of VAEs.</p>
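<p>As a one-line reminder (in sketch notation rather than the paper's): AEVB maximizes the ELBO $\E_{q(z \mid x)}[\log p(x \mid z)] - \mathrm{KL}(q(z \mid x) \,\|\, p(z))$, and makes its gradient tractable by reparametrizing the latent variable, e.g. $z = \mu(x) + \sigma(x) \odot \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$ for a Gaussian $q$.</p>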
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2016-07-05-neural-variational-inference-blackbox.html">Neural Variational Inference: Blackbox Mode</a></h3>
<time>July 5, 2016</time>
</header>
<section><p>In the <a href="/posts/2016-07-04-neural-variational-inference-stochastic-variational-inference.html">previous post</a> we covered Stochastic VI: an efficient and scalable variational inference method for exponential family models. However, there are many more distributions than those in the exponential family, and inference in these cases requires a significant amount of model analysis. In this post we consider <a href="https://arxiv.org/abs/1401.0118">Black Box Variational Inference</a> by Ranganath et al. This work, just like the previous one, comes from the lab of <a href="http://www.cs.columbia.edu/~blei/">David Blei</a> — one of the leading researchers in VI. And, just for dessert, we'll touch upon another paper, which will finally introduce some neural networks into VI.</p>
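<p>The trick that makes the approach "black box" fits in one line (illustrative notation): for a variational family $q_\lambda(z)$,
$$ \nabla_\lambda \E_{q_\lambda}\left[\log p(x, z) - \log q_\lambda(z)\right] = \E_{q_\lambda}\left[\left(\log p(x, z) - \log q_\lambda(z)\right) \nabla_\lambda \log q_\lambda(z)\right], $$
so the gradient only requires samples from $q_\lambda$ and pointwise evaluations of the log-densities, with no model-specific derivations.</p>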
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2016-07-04-neural-variational-inference-stochastic-variational-inference.html">Neural Variational Inference: Scaling Up</a></h3>
<time>July 4, 2016</time>
</header>
<section><p>In the <a href="/posts/2016-07-01-neural-variational-inference-classical-theory.html">previous post</a> I covered the well-established classical theory developed in the early 2000s. Since then technology has made huge progress: now we have much more data and a great need to process it fast. In the big data era we have huge datasets and cannot afford too many full passes over them, which might render classical VI methods impractical. Recently M. Hoffman et al. dissected classical Mean-Field VI to introduce stochasticity right into its heart, which resulted in <a href="https://arxiv.org/abs/1206.7051">Stochastic Variational Inference</a>.</p>
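<p>A compressed sketch of the resulting update (notation mine): sample a single data point, compute its local variational parameters as usual, form the global parameter $\hat{\lambda}$ that a full coordinate-ascent step would give if the dataset consisted of $N$ copies of that point, and blend it in as $\lambda \leftarrow (1 - \rho_t) \lambda + \rho_t \hat{\lambda}$ with a decaying step size $\rho_t$.</p>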
</section>
</article>
<hr/>
<article>
<header>
<h3><a href="/posts/2016-07-01-neural-variational-inference-classical-theory.html">Neural Variational Inference: Classical Theory</a></h3>
<time>July 1, 2016</time>
</header>
<section><p>As a member of the <a href="http://bayesgroup.ru/">Bayesian methods research group</a> I'm heavily interested in the Bayesian approach to machine learning. One of the strengths of this approach is the ability to work with hidden (unobserved) variables that are interpretable. This power, however, comes at the cost of generally intractable exact inference, which limits the scope of solvable problems.</p>
<p>Another topic that has gained a lot of momentum in Machine Learning recently is, of course, Deep Learning. With Deep Learning we can now build big and complex models that outperform most hand-engineered approaches, given lots of data and computational power. The fact that Deep Learning needs a considerable amount of data also requires these methods to be scalable — a really nice property for any algorithm to have, especially in the Big Data era.</p>
<p>Given how appealing both topics are, it's no surprise that there has been some work recently to marry the two. In this <a href="/tags/modern-variational-inference-series.html">series</a> of blog posts I'd like to summarize recent advances, particularly in variational inference. This is not meant to be an introductory discussion, as prior familiarity with classical topics (latent variable models, <a href="https://en.wikipedia.org/wiki/Variational_Bayesian_methods">Variational Inference, Mean-field approximation</a>) is required, though I'll introduce these ideas anyway, just to refresh them and set up the notation.</p>
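<p>To fix one piece of notation early (a sketch, with $z$ denoting the hidden variables and $q(z)$ the approximate posterior): variational inference rests on the decomposition
$$ \log p(x) = \E_{q(z)}\left[\log \frac{p(x, z)}{q(z)}\right] + \mathrm{KL}\left(q(z) \,\|\, p(z \mid x)\right), $$
so maximizing the first term (the ELBO) over $q$ is equivalent to minimizing the KL divergence to the intractable posterior.</p>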
</section>
</article>
<hr/>
<!-- /#posts-list -->
<div class="paginator">
<div class="paginator-newer">
<a href="./index.html">← Newer Entries</a>
</div>
<div class="paginator-older">
<a href="./index3.html">Older Entries →</a>
</div>
</div>
</section>
<footer>
Generated with Pelican
</footer>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-38530232-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>