Questions tagged [maximum-likelihood]

For questions about the method of maximum likelihood for estimating the parameters of a statistical model from observed data.

In statistics, maximum-likelihood estimation (MLE) is a method of estimating the parameters of a statistical model from observed data: the estimate is the parameter value (or vector of values) that maximizes the likelihood function, i.e. the value under which the observed data are most probable. Many well-known estimators correspond to maximum-likelihood estimators. The maximum-likelihood estimate of a parameter $\mu$ is denoted $\widehat{\mu}$.
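As a small illustration of the method, the $\operatorname{Beta}(\theta,1)$ model with density $f(x\mid\theta)=\theta x^{\theta-1}$ on $[0,1]$, which appears in several questions on this page, has the closed-form MLE $\widehat{\theta} = n / \left(-\sum_i \ln x_i\right)$. The sketch below (a minimal check, using the five sample values quoted in one of the questions below) compares the closed form against a direct grid search over the log-likelihood:

```python
import math

# Sample data, taken from one of the questions on this page.
xs = [0.77, 0.82, 0.94, 0.92, 0.98]
n = len(xs)

def log_likelihood(theta):
    # log L(theta) = n*log(theta) + (theta - 1) * sum(log x_i)
    return n * math.log(theta) + (theta - 1) * sum(math.log(x) for x in xs)

# Closed-form MLE: theta_hat = n / (-sum of log x_i)
theta_hat = n / (-sum(math.log(x) for x in xs))

# Crude grid search over theta in (0, 20] as a sanity check.
grid = [0.01 * k for k in range(1, 2001)]
theta_grid = max(grid, key=log_likelihood)

print(theta_hat)    # close to 8.0 for these samples
print(theta_grid)   # grid maximizer, agreeing with theta_hat to grid resolution
```

Because the log-likelihood here is smooth and concave in $\theta$, the grid maximizer lands on the grid point nearest the closed-form estimate.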

1310 questions
29 votes · 5 answers

Maximum Likelihood Estimator of parameters of multinomial distribution

Suppose that 50 measuring scales made by a machine are selected at random from the production of the machine and their lengths and widths are measured. It was found that 45 had both measurements within the tolerance limits, 2 had satisfactory length…
20 votes · 4 answers

Maximum likelihood estimation of $a,b$ for a uniform distribution on $[a,b]$

I'm supposed to calculate the MLE's for $a$ and $b$ from a random sample of $(X_1,...,X_n)$ drawn from a uniform distribution on $[a,b]$. But the likelihood function, $\mathcal{L}(a,b)=\frac{1}{(b-a)^n}$, is constant; how do I find a maximum? Would…
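For reference, the standard resolution of this excerpt's puzzle (not part of the question text): once the support constraint is written into the likelihood, it is not constant in $(a,b)$,

```latex
\mathcal{L}(a,b)
= \frac{1}{(b-a)^{n}}\,
  \mathbf{1}\!\left\{a \le \min_i X_i\right\}
  \mathbf{1}\!\left\{b \ge \max_i X_i\right\},
\qquad
\widehat{a} = \min_i X_i,\quad \widehat{b} = \max_i X_i .
```

Since $(b-a)^{-n}$ grows as the interval shrinks, the likelihood is maximized by taking $[a,b]$ as narrow as the data permit.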
10 votes · 2 answers

A wavy histogram from a maximum likelihood algorithm

This question is based on a Puzzling Stack question I answered. Suppose you have a loaded $10$-sided die that gives one value with probability $\frac7{25}$ and the rest with probability $\frac2{25}$ each, but you do not know the favoured value. A…
Parcly Taxel
10 votes · 1 answer

Estimating Parameter - What is the qualitative difference between MLE fitting and Least Squares CDF fitting?

Given a parametric pdf $f(x;\lambda)$ and a set of data $\{ x_k \}_{k=1}^n$, here are two ways of formulating a problem of selecting an optimal parameter vector $\lambda^*$ to fit to the data. The first is maximum likelihood estimation (MLE):…
9 votes · 3 answers

Why are maximum likelihood estimators used?

Is there a motivating reason for using maximum likelihood estimators? As far as I can tell, there is no reason why they should be unbiased estimators (Can their expectation even be calculated in a general setting, given that they are defined by a…
user782220
7 votes · 2 answers

Beta regression

Consider some positive random variables $X^1, X^2$ and $Y\sim Beta(p; 1)$ where $p=\beta_1X^1 + \beta_2X^2$. We have a random sample $\{X^1_i, X^2_i, Y_i\}$. Now, estimating $\beta_1, \beta_2$ is not hard, e.g. using GLM. Now let's say that we do…
Albert Paradek
7 votes · 1 answer

MLE for Uniform $(0,\theta)$

I am a bit confused about the derivation of MLE of Uniform$(0,\theta)$. I understand that $L(\theta)={\theta}^{-n}$ is a decreasing function and to find the MLE we want to maximize the likelihood function. What is confusing me is that if a function…
6 votes · 1 answer

Trace of a matrix: when to use it? What is the trace trick?

On calculating log-likelihood function for some multivariate distributions, such as multivariate Normal, I see some examples where the matrices are suddenly changed to trace, even when the matrix is not diagonal. I searched online to find a…
6 votes · 2 answers

Prove Neg. Log Likelihood for Gaussian distribution is convex in mean and variance.

I am looking to compute maximum likelihood estimators for $\mu$ and $\sigma^2$, given $n$ i.i.d. random variables drawn from a Gaussian distribution. I believe I know how to write the expressions for negative log likelihood (kindly see below), however…
6 votes · 1 answer

MLE (Maximum Likelihood Estimator) of Beta Distribution

Let $X_1,\ldots,X_n$ be i.i.d. random variables with a common density function given by: $f(x\mid\theta)=\theta x^{\theta-1}$ for $x\in[0,1]$ and $\theta>0$. Clearly this is a $\operatorname{BETA}(\theta,1)$ distribution. Calculate the maximum…
6 votes · 3 answers

Simple example of "Maximum A Posteriori"

I've been immersing myself into Bayesian statistics in school and I'm having a very difficult time grasping argmax and maximum a posteriori. A quick explanation of this can be found: https://www.cs.utah.edu/~suyash/Dissertation_html/node8.html…
5 votes · 1 answer

Find Maximum-Likelihood-Estimator (MLE) for $\alpha$

Consider the following PDF: $$w_{\alpha,\beta}(x):=\alpha \beta x^{\beta-1}e^{-\alpha x^{\beta}} \mathbf{1}_{(0,\infty)}(x)$$ This is the Weibull distribution often used in material science. Assume we know $\beta$ and we want to estimate $\alpha$.…
qmd
5 votes · 2 answers

Efficiency of $\hat{\theta}_{MLE}$ from $\operatorname{Beta}(\theta,1)$

I am working on a problem which asks me to discuss the efficiency of the MLE $\hat{\theta}$ given that $X_1,\ldots,X_n \sim_{iid} \operatorname{Beta}(\theta,1) $. I was able to deduce that $$\hat{\theta} = \frac{n}{-\sum_{i=1}^n \ln X_i}$$ and that…
hyg17
5 votes · 1 answer

Log likelihood with exponential function

I'm trying to find the maximum likelihood of this function. I have samples in this question as follows: (0.77, 0.82, 0.94, 0.92, 0.98) $$f_Y(y;\theta)=\theta y^{\theta-1}, \quad 0 \le y \le 1, \quad \theta > 0$$ $$L(\theta) = \prod\limits_{i=1}^{n}…
Bucephalus
5 votes · 1 answer

MAP Solution for Linear Regression - What is a Gaussian prior?

I am looking at some slides that compute the MLE and MAP solution for a Linear Regression problem. It states that the problem can be defined as such: We can compute the MLE of w as such: Now they talk about computing the MAP of w I simply can't…
user1436508