import numpy
step_size = 0.0001
x = numpy.arange(0, 1, step_size)
len(x)
# likelihood of observing 2 heads and 1 tail in 3 flips (up to the binomial coefficient)
likelihood = [p * p * (1 - p) for p in x]
from matplotlib import pyplot
%matplotlib inline
pyplot.plot(x, likelihood)
index = numpy.argmax(likelihood)
mle_p = x[index]
mle_p
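As a sanity check, the grid argmax should land within one grid step of the analytic maximum, which you get by setting the derivative of $p^2(1-p)$ to zero: $2p - 3p^2 = 0 \Rightarrow p = 2/3$. A minimal sketch, redoing the grid search above with a vectorized likelihood:

```python
import numpy

step_size = 0.0001
x = numpy.arange(0, 1, step_size)
likelihood = x ** 2 * (1 - x)        # vectorized version of the list comprehension
mle_p = x[numpy.argmax(likelihood)]
# the analytic maximizer of p^2 (1 - p) on [0, 1] is p = 2/3
assert abs(mle_p - 2.0 / 3) < step_size
```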
We use Bayes' rule to infer a probability distribution over the parameter $\theta$. The posterior probability tells us how plausible each value of $\theta$ is given the data: $$P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)}$$
How do we compute $P(D)$? It is the integral $P(D) = \int P(D|\theta')P(\theta')\,d\theta'$, which we can approximate on our grid of $\theta'$ values with the sum $\sum_{\theta'} P(D|\theta') P(\theta')$. Strictly, each term should be weighted by the grid spacing, but that constant cancels when we normalize the posterior.
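As a sketch of that approximation (assuming the same grid and `step_size` as above, and the Beta(2, 2) prior introduced just below): for this likelihood $\theta^2(1-\theta)$ and prior $6\theta(1-\theta)$, the exact evidence is $6\,B(4,3) = 0.1$, and the Riemann sum over the grid should come out very close to it.

```python
import numpy
from scipy.stats import beta

step_size = 0.0001
theta = numpy.arange(0, 1, step_size)
lik = theta ** 2 * (1 - theta)       # P(D | theta) for 2 heads, 1 tail
prior = beta.pdf(theta, 2, 2)        # P(theta)
# Riemann-sum approximation of P(D) = integral of P(D|theta') P(theta') d(theta')
evidence = numpy.sum(lik * prior) * step_size
```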
from scipy.stats import beta
prior_y = beta.pdf(x, 2, 2)  # Beta(2, 2) prior density over theta, evaluated on the grid
pyplot.plot(x, prior_y)
normalization = 0.0
for l, p in zip(likelihood, prior_y):
    normalization += l * p
posterior = [l * p / normalization for l, p in zip(likelihood, prior_y)]
pyplot.plot(x, posterior)
index = numpy.argmax(posterior)
x[index]
# closed-form MAP with a Beta(2, 2) prior: (h + alpha - 1) / (n + alpha + beta - 2) = (2 + 1) / (3 + 2)
float(2 + 1) / (2 + 1 + 1 + 1)
index = numpy.argmax(likelihood)
x[index]
# closed-form MLE: h / n = 2/3
2.0 / 3
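Because the Beta prior is conjugate to this likelihood, the posterior also has a closed form: Beta(2, 2) updated with 2 heads and 1 tail gives Beta(4, 3), whose mode $(\alpha - 1)/(\alpha + \beta - 2) = 3/5$ matches the MAP value above, while the likelihood's mode $h/n = 2/3$ matches the MLE. A quick check of both closed-form estimates:

```python
# conjugate update: Beta(alpha, beta) prior + h heads, t tails -> Beta(alpha + h, beta + t)
alpha, beta_param = 2 + 2, 2 + 1                      # posterior is Beta(4, 3)
map_p = float(alpha - 1) / (alpha + beta_param - 2)   # posterior mode = 3/5
mle_p = 2.0 / 3                                       # likelihood mode = h / n
```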