From fb8ffbe6f8825dbb8710b9bd196e138f472aa5e3 Mon Sep 17 00:00:00 2001
From: Shengjia Zhao
Date: Fri, 5 Mar 2021 00:43:11 -0800
Subject: [PATCH] Fix typos

---
 learning/bayesian/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/learning/bayesian/index.md b/learning/bayesian/index.md
index 156e510..808b8c9 100644
--- a/learning/bayesian/index.md
+++ b/learning/bayesian/index.md
@@ -55,7 +55,7 @@ $$
 This might cause us trouble, since integration is usually difficult. For this very simple example, we might be able to compute this integral, but as you may have seen many times in this class, if $$\theta$$ is high dimensional then computing integrals could be quite challenging.
 
-To tackle this issue, people have observed that for some choices of prior $p(\theta)$, the posterior distribution $$p(\theta \mid \mathcal{D})$$ can be directly computed in closed form. Going back to our coin toss example, where we are given a sequence of $$N$$ coin tosses, $$\mathcal{D} = \{X_{1},\ldots,X_{N}\}$$ and we want to infer the probability of getting heads $$\theta$$ using Bayes rule. Suppose we choose the prior $$p(\theta)$$ as the Beta distribution defined by
+To tackle this issue, people have observed that for some choices of prior $$p(\theta)$$, the posterior distribution $$p(\theta \mid \mathcal{D})$$ can be directly computed in closed form. Going back to our coin toss example, where we are given a sequence of $$N$$ coin tosses, $$\mathcal{D} = \{X_{1},\ldots,X_{N}\}$$ and we want to infer the probability of getting heads $$\theta$$ using Bayes rule. Suppose we choose the prior $$p(\theta)$$ as the Beta distribution defined by
 
 $$
 P(\theta) = Beta(\theta \mid \alpha_H, \alpha_T) = \frac{\theta^{\alpha_H -1 }(1-\theta)^{\alpha_T -1 }}{B(\alpha_H,\alpha_T)}
 $$
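
For reference, the closed form that the patched paragraph alludes to follows from conjugacy: with $$N_H$$ heads and $$N_T$$ tails observed, a $$Beta(\alpha_H, \alpha_T)$$ prior updates to a $$Beta(\alpha_H + N_H, \alpha_T + N_T)$$ posterior. A minimal sketch of this update using scipy.stats; the prior hyperparameters and counts below are illustrative, not values from the patch:

```python
from scipy.stats import beta

# Illustrative values (not from the patch): a Beta(2, 2) prior
# and a dataset of 8 heads out of 10 tosses.
alpha_H, alpha_T = 2.0, 2.0
N_H, N_T = 8, 2

# Conjugacy: a Beta prior with a Bernoulli likelihood yields a Beta
# posterior, with observed counts added to the prior pseudo-counts.
posterior = beta(alpha_H + N_H, alpha_T + N_T)

print(posterior.mean())          # posterior mean of theta: 10/14 ~ 0.714
print(posterior.interval(0.95))  # central 95% credible interval for theta
```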