\documentclass[12pt]{article}
\usepackage[a4paper,margin=2cm]{geometry}
\usepackage{amsmath, amssymb, amsthm, amsfonts, tikz, algpseudocode}
\usepackage[plain]{algorithm}
\usepackage[framemethod=default]{mdframed}
\theoremstyle{plain}
\newtheorem*{theorem}{Theorem}
\newtheorem*{lemma}{Lemma}
\newtheorem*{claim}{Claim}
\newtheorem*{definition}{Definition}
\newtheorem*{corollary}{Corollary}
\DeclareMathOperator*{\argmin}{arg\,min}
\title{Math 690: Topics in Data Analysis and Computation\\ \large Lecture notes for Fall 2017}
\date{}
\author{\small Scribed by Yixin Lin, Shen Yan}
\begin{document}
\maketitle
\part*{August 31}
We ended last time on a few basics of machine learning.
An important point is to be careful not to ``cheat'' when measuring the performance of your models! Of course you want to know how well your model works on your current dataset, but it can still fail on datasets that you haven't seen before. This is the concept of \textbf{generalization}: we want our models to generalize to unseen samples as well.
\section*{Machine learning basics}
Goal: $y = f^* (x)$.
$$x_i \sim P_x, \text{ iid}, \quad y_i = f^*(x_i) $$
Training set $D_{Tr} = \{x_i, y_i \}_{i=1}^{n_{Tr}}$
Testing set (independent of training set, but from the same distribution) $D_{Te} = \{x_j, y_j\}_{j=1}^{n_{Te}}$
Training error: error on the training dataset.
Test error: error using trained model on test set. (also known as generalization error)
Recall that test error is $$\varepsilon_{Te} (\theta) = \frac{1}{n_{Te}} \sum_{j=1}^{n_{Te}} (y_j - f_\theta (x_j))^2 \rightarrow^{n_{Te} \rightarrow \infty} \int (f^*(x)-f_\theta(x))^2 p(x) dx$$
We are trying to search for parameters $\theta \in \Theta$ such that $ f_\theta(x) \approx f^*(x) $.
The issues of overfitting and underfitting:
$f^*$ may not lie in the family we are considering.
When the family is too small, the trained model underfits.
Thus we want to enlarge our family of models; however, a larger family may require more samples to estimate $f^*$.
If we don't have enough samples, the trained model overfits and generalizes poorly to unobserved samples.
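As a quick numerical illustration of under- and over-fitting (not from the lecture; the target function, sample sizes, and noise level below are arbitrary choices for the demonstration), the following Python sketch fits polynomials of increasing degree by least squares and compares training and test error:
\begin{verbatim}
# Illustrative sketch: under- vs. over-fitting with polynomial regression.
# The target f*, sample sizes, and noise level are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)
f_star = lambda x: np.sin(2 * np.pi * x)   # "true" function (illustrative choice)

n_tr, n_te, sigma = 20, 1000, 0.2
x_tr = rng.uniform(0, 1, n_tr); y_tr = f_star(x_tr) + sigma * rng.normal(size=n_tr)
x_te = rng.uniform(0, 1, n_te); y_te = f_star(x_te) + sigma * rng.normal(size=n_te)

for degree in [1, 3, 15]:
    coef = np.polyfit(x_tr, y_tr, degree)  # least-squares fit within the family
    err_tr = np.mean((y_tr - np.polyval(coef, x_tr)) ** 2)
    err_te = np.mean((y_te - np.polyval(coef, x_te)) ** 2)
    print(f"degree {degree:2d}: train MSE {err_tr:.3f}, test MSE {err_te:.3f}")
# Low degree: both errors large (underfitting).
# High degree: train error small, test error large (overfitting).
\end{verbatim}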
\subsection*{Breakdown of error}
We can break down the error into three terms: the approximation error, the estimation error, and the optimization error. ([ref] This is Part I of ``The Tradeoffs of Large-Scale Learning'' by L\'eon Bottou, 2007.)
Our true objective is minimizing the error
$$ \varepsilon(\theta) = \int (f^*(x) - f_\theta(x))^2 d p(x) = \mathbb{E}_{x \sim p} |y-f_\theta(x)|^2$$
but we know that our family of models may not be big enough. As long as $f^*$ is not in the family of functions we are considering, then this error will never be zero:
$$ \min_{\theta \in \Theta} \varepsilon(\theta) = \varepsilon(\theta^*) > 0 $$
Even if you have infinite training samples, if your family is small, then the error \textit{will never go to zero}. We call this the \textbf{bias} term.
We don't know $p$, so we don't know how to compute the integral. So what we really do in practice is minimize the training error. Instead of $\min_\theta \varepsilon(\theta)$, we minimize the following:
$$ \hat{\theta} = \argmin_{\theta \in \Theta} \hat{\varepsilon} (\theta ), \qquad \hat{\varepsilon}(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - f_\theta(x_i))^2, $$
which is a finite sample approximation of the integral, and in general $\hat{\theta} \neq \theta^*$. This means that $\varepsilon(\hat{\theta}) > \varepsilon(\theta^*)$.
Let us write $\varepsilon(\hat{\theta}) = \varepsilon(\theta^*) + (\varepsilon(\hat{\theta}) - \varepsilon(\theta^*))$. $(\varepsilon(\hat{\theta}) - \varepsilon(\theta^*))$ is the \textbf{estimation error}.
The third term arises because we do not minimize the training error exactly either: a computer runs an iterative optimization routine that only approximately solves
$$ \argmin_{\theta \in \Theta} \hat{\varepsilon}(\theta). $$
This is the \textbf{optimization error}.
This imperfection causes there to be an additional term:
$$ \Rightarrow \varepsilon(\theta_{\text{sol}}) = \varepsilon(\hat{\theta}) + \varepsilon_\text{opt} $$
$$\varepsilon = \varepsilon_\text{approx} + \varepsilon_\text{est} + \varepsilon_\text{opt} $$
In this class we will mainly focus on the first two terms.
\subsection*{No free lunch theorem}
[ref] ``The Lack of A Priori Distinctions Between Learning Algorithms'' Wolpert '96.
The question is: is one model better than another? No machine learning model is uniformly better than another.
What do we mean by \textit{better}?
{\bf No free lunch theorem for optimization}
Our goal is to minimize some function $f : V \rightarrow S $. For any two algorithms $A$ and $B$, the \textit{averaged} performance of $A$ and $B$ is identical.
What do we mean by averaged performance? The average is taken over the objective $f$, with $f$ distributed uniformly over $S^V$.
{\bf No free lunch theorem for machine learning}
For any two models $A$ and $B$, there are ``as many'' targets for which model $A$ has lower generalization error than model $B$, and vice versa. In other words, I can always find some dataset for $A$ to work worse than $B$, and vice versa. So we need more assumptions about the distribution of the data.
The take-home message is that for the question of ``which algorithm is better'', the answer is \textit{it depends on the data}.
%%
\section*{Topic 1: Principal Component Analysis in High Dimension}
Two perspectives: the linear algebra perspective, and the probability perspective
\subsection*{Linear algebra perspective}
Suppose I have data vectors
$$ x_1, \cdots, x_n \in \mathbb{R}^D $$
with center of mass at the origin: $\sum_{i=1}^n x_i = 0$. (Here, $D = p$, the dimensionality of the data.)
Our goal is to find the ``optimal'' $d$-subspace to maximize the projected variation of data. (Maximizing variation is minimizing residual; verifying this is a homework problem.)
Let $d=1$, $w \in \mathbb{R}^p$, and constrain $\|w \| = 1$.
$$ \max_w \sum_{i=1}^n (w^T x_i)^2 = \sum_{i=1}^n w^T x_i x_i^T w $$
$$ = w^T (\sum_{i=1}^n x_i x_i^T) w $$
$$ S \triangleq \frac{1}{n} \sum_{i=1}^n x_i x_i^T$$
So our goal is
$$ \max_w w^T S w \text{ s.t. } \| w \| = 1 $$
For general $d$ we seek directions $w_1, \cdots, w_d$ and also add the constraints $w_k^T w_k = 1$ and $w_k^T w_l = 0$ for all $k \neq l$; equivalently, $w_k^T w_l = \delta_{kl}$. In other words, the vectors should be orthonormal.
Objective will be $$\max_{w_1, \cdots, w_d} \sum_{k=1}^d w_k^T S w_k$$
with the constraint of orthogonality.
We know the solution to this problem from linear algebra: the first $d$ eigenvectors of $S$.
$S$ has $p$ non-negative eigenvalues $\lambda_1, \cdots, \lambda_p$ (sorted by magnitude $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p$) associated with $p$ eigenvectors $v_1, \cdots, v_p$.
To solve this problem, we compute the eigendecomposition of the covariance matrix $S$, which can be obtained from the singular value decomposition (SVD) of the data matrix $X$.
So far there is no probability.
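As a small numerical sketch of this linear-algebra view (the data below are synthetic and only illustrative), one can check that the top principal directions obtained from the eigendecomposition of $S$ and from the SVD of the centered data matrix coincide up to sign:
\begin{verbatim}
# Sketch: PCA via eigendecomposition of S vs. SVD of the centered data matrix.
# Here rows of X are the samples x_i (transpose of the p-by-n convention in the notes).
import numpy as np

rng = np.random.default_rng(1)
n, p, d = 500, 10, 2
X = rng.normal(size=(n, p)) @ np.diag(np.linspace(3.0, 0.5, p))
X = X - X.mean(axis=0)                 # center so that sum_i x_i = 0

S = X.T @ X / n                        # sample covariance, p x p
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
W_eig = eigvecs[:, ::-1][:, :d]        # top-d eigenvectors of S

U, sing, Vt = np.linalg.svd(X / np.sqrt(n), full_matrices=False)
W_svd = Vt[:d].T                       # right singular vectors span the same directions

print(np.allclose(np.abs(np.sum(W_eig * W_svd, axis=0)), 1.0))  # agree up to sign
\end{verbatim}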
\subsection*{Probability perspective}
Suppose $\{x_i \}_{i=1}^n \sim^{\text{i.i.d.}} P$, in $\mathbb{R}^p$.
$$ \mathbb{E} x_i = 0, \mathbb{E} x_i x_i^T = \Sigma_{p \times p} $$
Our goal:
$$ \max_{w_1, \cdots, w_d \mid w_k^T w_l = \delta_{kl}} \sum_{k=1}^d \mathbb{E}_{x \sim P} (x^T w_k)^2 $$
where $ \mathbb{E}_{x \sim P} f(x) = \int f(x)\, dP(x) $; when $P$ has a density $p(x)$ we may interchange $dP(x)$ with $p(x)\,dx$. The objective is thus
$$ \sum_{k=1}^d w_k^T (\mathbb{E}_{x \sim p} x x^T) w_k = \sum_{k=1}^d w_k^T \Sigma w_k.$$
The solution is obtained by the eigendecomposition of $\Sigma$.
However, the population covariance matrix $\Sigma$ is usually not available, and we approximate it by the sample covariance $S$, which goes back to the PCA of finite samples (the 1st perspective).
\newpage
\part*{September 5}
Last time we saw PCA from two perspectives: linear algebra, and probability (population covariance matrix).
Why can PCA be solved by an eigenproblem? Recall that we want to
$$ \max_{w_1, \cdots, w_d, w_k^T w_l = \delta_{kl}} \sum_{k} w_k^T S w_k $$
Solved by the eigendecomposition of the matrix $S$.
\textbf{Courant-Fischer Minimax Theorem} Any Hermitian (in particular, real symmetric) matrix has $n$ real eigenvalues, with eigenvectors forming an orthonormal basis.
Let $A$ be an $n \times n$ Hermitian matrix with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. Then
$$ \lambda_k(A) = \sup_{\dim(V) = k} \inf_{\| v \| = 1, v \in V} v^*A v, $$
$$ \lambda_k(A) = \inf_{\dim(V) = n - k + 1} \sup_{\| v \| = 1, v \in V} v^*Av. $$
Proof sketch: write $A = U \Lambda U^*$ and verify for each $k$: taking $V = \mathrm{span}(u_1, \cdots, u_k)$ shows the sup-inf is at least $\lambda_k$, and any $V$ with $\dim(V) = k$ intersects $\mathrm{span}(u_k, \cdots, u_n)$ nontrivially, which gives the reverse inequality.
(Ex) We mentioned that PCA can be viewed as maximizing the projected variation, which is equivalent to minimizing the residual after projection.
$$ P_{w_k} = w_k w_k^T \text{ is the projection matrix} $$
$$ \min_{w_1, \cdots, w_d,\ w_k^T w_l = \delta_{kl} } \sum_{k=1}^d \sum_{i=1}^n \| x_i - P_{w_k} x_i \|^2 $$
Hint for exercise: $\| (I-ww^T) x_i \|^2$
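A quick numerical check of the $d=1$ case (synthetic data, only for illustration): for any unit vector $w$, $\|x_i\|^2 = \|(I - ww^T)x_i\|^2 + (w^T x_i)^2$, so maximizing the projected variation is the same as minimizing the residual.
\begin{verbatim}
# Sketch (d = 1): projected variation + residual = total, for any unit vector w.
import numpy as np

rng = np.random.default_rng(2)
p, n = 5, 100
X = rng.normal(size=(p, n))                 # columns are the data vectors x_i
w = rng.normal(size=p)
w /= np.linalg.norm(w)                      # unit vector

proj_var = np.sum((w @ X) ** 2)                    # sum_i (w^T x_i)^2
residual = np.sum((X - np.outer(w, w) @ X) ** 2)   # sum_i ||(I - w w^T) x_i||^2
total    = np.sum(X ** 2)                          # sum_i ||x_i||^2
print(np.isclose(proj_var + residual, total))      # True
\end{verbatim}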
The population covariance matrix $\Sigma = \mathbb{E} x_i x_i^T$ cannot be computed directly (we cannot evaluate the integral), so we approximate it with $S = \frac{1}{n} \sum_{i=1}^n x_i x_i^T$.
\textbf{Covariance estimation} Given $\{x_i\}_{i=1}^n \sim^\text{iid} P_x$ in $\mathbb{R}^p$. Our goal is to estimate $$\mu = \mathbb{E}x_i, \Sigma = \mathbb{E}(x_i-\mu)(x_i-\mu)^T$$
as the sample mean and covariance
$$\hat{\mu} = \frac{1}{n} \sum_i x_i, \hat{\Sigma} = \frac{1}{n-1} \sum_{i=1}^n (x_i-\hat{\mu})(x_i-\hat{\mu})^T$$
Why do we use these as estimators? They are unbiased, but other statistics may be unbiased as well. The reason is that these are maximum-likelihood estimators (MLE) when the data are Gaussian. Note that $\frac{1}{n-1} \sum_{i=1}^n (x_i-\hat{\mu})(x_i-\hat{\mu})^T$ is the unbiased estimator, while $\frac{1}{n} \sum_{i=1}^n (x_i-\hat{\mu})(x_i-\hat{\mu})^T$ is the MLE.
Proof sketch: suppose $x_i \sim N(\mu, \Sigma)$. Then
$$ p(x_i) = \frac{\exp[-(x_i - \mu)^T \Sigma^{-1} (x_i - \mu) / 2]}{(2\pi)^{p/2} |\Sigma|^{1/2}} $$
$$ \log p(x_i | \mu, \Sigma) = - \frac{(x_i - \mu)^T \Sigma^{-1} (x_i - \mu)}{2} - \frac{1}{2} \log |\Sigma| + c $$
$$ \log p( \{x_i\}_{i=1}^n | \mu, \Sigma) = \log \prod_{i=1}^n p(x_i | \mu, \Sigma) $$
$$ = \sum_{i=1}^n \Big\{ -\frac{1}{2} (x_i - \mu)^T \Sigma^{-1} (x_i - \mu) - \frac{1}{2} \log |\Sigma| + c \Big\} $$
$$ = nc - \frac{n}{2} \log |\Sigma | - \frac{1}{2} \sum_{i=1}^n (x_i - \mu)^T \Sigma^{-1} (x_i - \mu) $$
$\max_\mu \Rightarrow \hat{\mu}^\text{MLE} = \text{ sample mean}$.
$$ = nc - \frac{n}{2} \log |\Sigma | - \frac{n}{2} \text{Tr} (\Sigma^{-1} S) \text{ where } S = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{\mu}) (x_i -\hat{ \mu})^T $$
Dropping the constant and dividing by $n$, maximizing over $\Sigma$ is equivalent to
$$ \max_{\Sigma} -\frac{1}{2} \log |\Sigma| - \frac{1}{2} \text{Tr} ( \Sigma^{-1} S) $$
$$ \max_{\Sigma} \frac{1}{2} \log |\Sigma^{-1}| - \frac{1}{2} \text{Tr} ( \Sigma^{-1} S) $$
$$ \max_{\Sigma} c + \frac{1}{2} \log |\Sigma^{-1}S| - \frac{1}{2} \text{Tr} ( \Sigma^{-1} S) $$
Let $B = \Sigma^{-1}S$. Then $\log|B| - \text{Tr}(B) = \sum_{i=1}^p (\log \lambda_i - \lambda_i)$, where $\lambda_1, \cdots, \lambda_p$ are the eigenvalues of $B$, so the objective depends only on those eigenvalues. Each term is maximized at $\lambda_i = 1$, which means the maximizer is $B = I$ (the identity), i.e. $\Sigma = S$. Exercise.
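A small numerical sanity check of this claim (synthetic Gaussian data; the particular covariance used to generate it is an arbitrary choice): the log-likelihood, viewed as a function of $\Sigma$, is largest at the $1/n$ sample covariance.
\begin{verbatim}
# Sketch: the Gaussian log-likelihood (up to constants)
#   L(Sigma) = -(n/2) log|Sigma| - (n/2) Tr(Sigma^{-1} S)
# is maximized at Sigma = S, the 1/n estimator (the MLE).
import numpy as np

rng = np.random.default_rng(3)
p, n = 4, 200
A = rng.normal(size=(p, p))
X = rng.normal(size=(n, p)) @ A              # rows ~ N(0, A^T A), arbitrary covariance
mu_hat = X.mean(axis=0)
Xc = X - mu_hat
S_mle = Xc.T @ Xc / n                        # 1/n estimator (MLE)
S_unbiased = Xc.T @ Xc / (n - 1)             # 1/(n-1) unbiased estimator

def loglik(Sigma):
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * n * logdet - 0.5 * n * np.trace(np.linalg.solve(Sigma, S_mle))

# The likelihood at S_mle dominates scaled alternatives, including S_unbiased.
print(loglik(S_mle) >= loglik(1.3 * S_mle), loglik(S_mle) >= loglik(S_unbiased))
\end{verbatim}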
\textbf{Covariance estimation: asymptotic consistency} How well does $S$ approximate $\Sigma$ as $n \rightarrow \infty$? (Assume $\mu=0$.)
$$ S = \frac{1}{n} \sum_{i=1}^n x_i x_i^T $$
$$ \text{1. Law of large numbers: }S \rightarrow^{n\rightarrow + \infty} \mathbb{E}S = \Sigma$$
$$ \text{2. The rate is } \sim n^{-1/2} \text{ entrywise, for the } p \times p \text{ matrix } S: $$
$$ \mathbb{E} | S_{kl} - \Sigma_{kl}|^2 \leq \frac{c}{n} $$
This comes from an entrywise central limit theorem:
$$ S_{kl} = \frac{1}{n} \sum_{i=1}^n x_i(k) x_i(l), \qquad S_{kl} - \Sigma_{kl} \approx N\Big(0, \frac{c}{n}\Big) $$
Using Frobenius norm: $\| A \|_\text{Fro} = (\sum_{i,j} A_{ij}^2)^{1/2}$
$$ \mathbb{E} \| S-\Sigma \|_\text{Fro}^2 = \mathbb{E} \sum_{k,l} |S_{kl} - \Sigma_{kl} |^2 \leq \frac{c}{n} \cdot p^2$$
What if $p$ is large (e.g. $p \approx n$ or $p \gg n$)? This may be large...
However, if $p$ is large but the true covariance matrix $\Sigma$ has low rank, then there may be no curse of dimensionality. For example, consider
$\Sigma = uu^T$, where $u = (1,0,0,\cdots,0)$, $x_i = \alpha_i u$, $\alpha_i \sim N(0,1)$. In this case, $\mathbb{E}x_i x_i^T = uu^T$ and $S = (\frac{1}{n}\sum_i \alpha_i^2)uu^T$, so by the LLN we get the same convergence rate regardless of $p$.\\
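The following sketch (synthetic data; the choices of $n$ and $p$ are arbitrary) compares the Frobenius error of $S$ in the rank-one example above, which stays $O(n^{-1/2})$ as $p$ grows, with the full-rank case $\Sigma = I_p$, where the error grows with $p$ in line with the $c\,p^2/n$ bound.
\begin{verbatim}
# Sketch: Frobenius error of the sample covariance, low-rank vs. full-rank Sigma.
import numpy as np

rng = np.random.default_rng(4)
n = 500
for p in [10, 100, 500]:
    # Rank-one signal: x_i = alpha_i * u, Sigma = u u^T.
    u = np.zeros(p); u[0] = 1.0
    alpha = rng.normal(size=n)
    X1 = np.outer(alpha, u)                   # rows x_i = alpha_i * u
    S1 = X1.T @ X1 / n
    err_low = np.linalg.norm(S1 - np.outer(u, u))   # Frobenius norm

    # Full-rank isotropic case: Sigma = I_p.
    X2 = rng.normal(size=(n, p))
    S2 = X2.T @ X2 / n
    err_full = np.linalg.norm(S2 - np.eye(p))

    print(f"p={p:4d}: rank-one error {err_low:.3f}, full-rank error {err_full:.3f}")
\end{verbatim}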
Noisy PCA: we observe noisy samples (e.g.\ image patches) $y_i = x_i + z_i$, where $x_i \sim P_x$ are the clean samples and $z_i \sim N(0, \sigma^2I)$.
Goal: estimate $\Sigma_x = \mathbb{E}x_i x_i^T$, or the principal components of $\Sigma_x$.
$$ S_y = \frac{1}{n} \sum_{i=1}^n y_i y_i^T$$
$$ \mathbb{E}S_y = \Sigma_y = \Sigma_x + \sigma^2 I_p \text{, exercise} $$
In the classical case, where $p$ is fixed and $n \rightarrow \infty$, we can take $\hat{\Sigma}_x = S_y - \hat{\sigma}^2 I$, given an estimate $\hat{\sigma}$ of the noise level. This estimator becomes inconsistent when $p \approx n$ or $p \gg n$.
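A sketch of this classical-regime debiasing on synthetic data (the rank-one $\Sigma_x$, noise level, and sample sizes are illustrative assumptions; the noise level is treated as known here):
\begin{verbatim}
# Sketch (classical regime: p fixed, n large): estimate Sigma_x by subtracting
# the noise term, Sigma_x_hat = S_y - sigma^2 * I, and recover its top PC.
import numpy as np

rng = np.random.default_rng(5)
p, n, sigma = 10, 20000, 0.5
u = np.zeros(p); u[0] = 1.0                    # true principal direction, Sigma_x = u u^T
X = np.outer(rng.normal(size=n), u)            # clean signal, rows x_i = alpha_i * u
Y = X + sigma * rng.normal(size=(n, p))        # noisy observations y_i = x_i + z_i

S_y = Y.T @ Y / n
Sigma_x_hat = S_y - sigma ** 2 * np.eye(p)     # debiased estimate of Sigma_x

vals, vecs = np.linalg.eigh(Sigma_x_hat)
top_pc = vecs[:, -1]                           # eigenvector of the largest eigenvalue
print(abs(top_pc @ u))                         # close to 1 when p is fixed and n is large
\end{verbatim}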
\part*{September 7}
Setting up:
$$ y_i = x_i + z_i \text{ (i.e. info + noise)} $$
$$ z_i \sim N(0, \sigma^2I), x_i \sim P_x $$
$$ \Sigma_y = \Sigma_x + \sigma^2 I, $$
and we want to recover $\Sigma_x$ from $\Sigma_y$, or rather from its sample version:
$$ S_y = \frac{1}{n} \sum_i (x_i + z_i ) (x_i + z_i)^T = S_x \text{``+''} S_z $$
Think of the clean data matrix $X \in \mathbb{R}^{p \times n}$ as rank one, $X = u \pmb{1}^T = [u \cdots u]$.
\textbf{Null case}
$$ S = \frac{1}{n} \sum_i z_i z_i^T , z_i \sim N(0, I_p) $$
Question: what is eig($S$) like?
\textbf{Marchenko-Pastur Law '67}: as $p, n \rightarrow + \infty$ with $p/n \rightarrow \gamma > 0$, the distribution of the eigenvalues of $S$ converges to the limiting density
$$p_{MP} (t) = \frac{\sqrt{(t-a)(b-t)}}{2\pi \gamma t}, a < t < b, a = (1-\sqrt{\gamma})^2 , b = (1+\sqrt{\gamma})^2$$
The mathematical formulation:
Definition: The empirical spectral density (ESD) =
$$ \frac{1}{p} \sum_{i=1}^p \delta_{\lambda_i} $$
where $\lambda_1 , \cdots, \lambda_p$ are the eigenvalues of $S$.
The theorem states that $\mathrm{ESD}_S \rightarrow^{w} p_{MP}$ (weak convergence) almost surely.
Remark: when $\gamma \rightarrow 0^+$, this density converges to $\delta_1$ (the population case). When $\gamma \rightarrow +\infty$, the density is roughly a semicircle centered at $\gamma$.
Remark: convergence to the limiting density is \textit{fast}: already for $p \sim 10^2$ the approximation is good.
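The following sketch (an illustrative simulation, not from the lecture) samples the null case, compares the range of the eigenvalues of $S$ with the MP support $[a,b]$, and checks that a histogram of the ESD roughly matches $p_{MP}$:
\begin{verbatim}
# Sketch of the null case: eigenvalues of S = (1/n) sum_i z_i z_i^T, z_i ~ N(0, I_p),
# compared with the Marchenko-Pastur support and density.
import numpy as np

rng = np.random.default_rng(6)
p, n = 1000, 2000                     # gamma = p/n = 0.5
gamma = p / n
Z = rng.normal(size=(p, n))
S = Z @ Z.T / n
lam = np.linalg.eigvalsh(S)

a, b = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
print(f"eigenvalue range [{lam.min():.3f}, {lam.max():.3f}] vs MP support [{a:.3f}, {b:.3f}]")

# Compare the histogram of the ESD with the MP density at a few points in (a, b).
t = np.linspace(a + 0.05, b - 0.05, 5)
p_mp = np.sqrt((t - a) * (b - t)) / (2 * np.pi * gamma * t)
hist, edges = np.histogram(lam, bins=20, range=(a, b), density=True)
idx = np.clip(np.searchsorted(edges, t) - 1, 0, len(hist) - 1)
print(np.round(p_mp, 2))          # MP density
print(np.round(hist[idx], 2))     # empirical histogram: roughly matches
\end{verbatim}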
Remark: the case
$$ z_i \sim N(0, I_p) $$
is called the ``white'' MP law. When $z_i \sim N(0, \Sigma)$ with $\Sigma \neq I$, one gets a ``colored'' MP law, whose limit depends on the limiting distribution of the eigenvalues of $\Sigma$ (for example, a two-point spectrum $\frac{1}{2}\delta_{d_1} + \frac{1}{2} \delta_{d_2}$).
\section*{Proof of the Marchenko-Pastur Density}
The strategy mirrors the characteristic-function proof of the CLT: with $\phi(\xi) = \mathbb{E}_{x \sim p} e^{i \xi x}$,
$$ \mathbb{E} \exp\Big[i \xi \frac{1}{\sqrt{n}} (x_1 + \cdots + x_n)\Big] = \prod_{i=1}^n \mathbb{E} \exp\Big[i \xi \frac{x_i}{\sqrt{n}}\Big] \rightarrow \phi_{\text{Gaussian}}(\xi) $$
by a Taylor series approximation. For spectral distributions, the role of the characteristic function is played by the Stieltjes transform.
Stieltjes transform of $\mu$: if $\mu$ has density $p(t)$, i.e. $d \mu(t) = p(t) dt$, then
$$ m_\mu(\xi) = \int_{-\infty}^{+\infty} \frac{1}{t-\xi} d \mu(t), \qquad \mathrm{Im}(\xi) > 0. $$
Lemma 1: A sequence of probability measures $\mu_n$ converges weakly to $\mu$ if and only if $m_n(\xi) \rightarrow m(\xi)$ for all $\xi$ with $\mathrm{Im}(\xi) > 0$, where $m_n$ is the Stieltjes transform of $\mu_n$.
Lemma 2 (MP equation): let $m(\xi)$ be the Stieltjes transform of the MP density. Then $ \xi + \frac{1}{m(\xi)} = \frac{1}{1+\gamma m(\xi)}$ (*)
Proof of Lemma 2:
Inversion formula for Stieltjes Transform
$$ \lim_{b \rightarrow 0+} Im ( m_\mu (t + ib)) = \pi \cdot p(t) $$
The solution of (*)
$$ m(\xi) = \frac{-(\xi + \gamma -1) \pm \sqrt{ \cdots}}{2 \xi \gamma} $$
Verify that $\mathrm{Im}(m(t+ib)) \rightarrow \pi \cdot p_{MP}(t)$ as $b \rightarrow 0^+$.
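A numerical sketch of this inversion step (the quadrature and the choices of $\gamma$, $t$, $b$ are illustrative): compute $m(t+ib)$ for the MP density by a Riemann sum and check that $\mathrm{Im}(m(t+ib))$ approaches $\pi\, p_{MP}(t)$ as $b \rightarrow 0^+$.
\begin{verbatim}
# Sketch: Stieltjes inversion for the MP density,
#   lim_{b -> 0+} Im( m(t + i b) ) = pi * p_MP(t).
import numpy as np

gamma = 0.5
a, b_edge = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
p_mp = lambda t: np.sqrt(np.maximum((t - a) * (b_edge - t), 0)) / (2 * np.pi * gamma * t)

def stieltjes(xi, m=200000):
    # m_mu(xi) = int p_MP(t) / (t - xi) dt, by a midpoint Riemann sum
    grid = np.linspace(a, b_edge, m + 1)
    mid = 0.5 * (grid[:-1] + grid[1:])
    dt = grid[1] - grid[0]
    return np.sum(p_mp(mid) / (mid - xi)) * dt

for t0 in [0.5, 1.0, 1.5]:
    for b in [1e-1, 1e-2, 1e-3]:
        im = stieltjes(t0 + 1j * b).imag
        print(f"t={t0}, b={b}: Im m = {im:.4f},  pi*p_MP(t) = {np.pi * p_mp(t0):.4f}")
\end{verbatim}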
Back to the main theorem:
It suffices to show that $m_n (\xi)$, in the limit $n, p \rightarrow \infty$ with $p/n \rightarrow \gamma$, satisfies (*):
$$ m_n(\xi) = \int_\mathbb{R} \frac{1}{t-\xi}\, d\,\mathrm{ESD}_S(t) = \frac{1}{p} \sum_{i=1}^p \frac{1}{\lambda_i - \xi} = \frac{1}{p} \mathrm{Tr}(S - \xi I)^{-1} $$
Recall that $S = \frac{1}{n} \sum_{i} z_i z_i^T = \sum_{i=1}^n x_i x_i^T$, where $x_i = \frac{z_i}{\sqrt{n}}$.
Identity:
$$ I + \xi(B-\xi I)^{-1} = B(B- \xi I)^{-1} $$
(the matrix version of ``$1 + \frac{\xi}{B-\xi} = \frac{B}{B-\xi}$'').
Take $\frac{1}{p} \mathrm{Tr}(\cdot)$ on both sides, with $B = S$:
$$ 1 + \xi\, \frac{1}{p} \mathrm{Tr}(B - \xi I)^{-1} = \frac{1}{p} \mathrm{Tr}\big(B (B- \xi I)^{-1}\big) $$
$$ = \frac{1}{p} \sum_{i=1}^n \mathrm{Tr}\big(x_i x_i^T (B- \xi I)^{-1}\big) = \frac{1}{p} \sum_{i=1}^n x_i^T (B - \xi I)^{-1} x_i. $$
Note that the left-hand side equals $1 + \xi\, m_n(\xi)$.
Write $B = \sum_{j \neq i} x_j x_j^T + x_i x_i^T = B_{(i)} + x_i x_i^T$, so that
$$ x_i^T(B - \xi I)^{-1} x_i = x_i^T\big(x_ix_i^T + B_{(i)} - \xi I\big)^{-1} x_i. $$
Using Sherman-Morrison, each such term is approximately
$$ \frac{\gamma\, m_n(\xi)}{1+\gamma\, m_n(\xi)}, $$
and summing over $i$ and dividing by $p$ gives $1 + \xi\, m_n(\xi) \approx \frac{m_n(\xi)}{1 + \gamma\, m_n(\xi)}$, which is (*) in the limit.
See Terence Tao's book \textit{Topics in Random Matrix Theory} for an introduction to random matrix theory.
\end{document}