

The first thing you have to ask is whether the error model is still Gaussian(-like). In other words, is it better for the model to be close to the observed $y_i$? Or does it not matter where in the interval $|y - y_i| \leq \delta y$ the model falls? Consider the following picture:

If you do a traditional least squares fit, which assumes a Gaussian error model, you get the green line. But if the error model is uniform inside the error bars, the red and teal lines are just as good as the green line. Under the uniform error model, there are an infinite number of solutions that are all equally good.

So for now, let's assume that the error model is Gaussian. Then the result is a weighted least squares problem that looks something like this:

$$\hat{\beta} = \underset{\beta}{\arg\min} \sum_i \left(\frac{y_i - x_i^\top \beta}{\delta y_i}\right)^2,$$

which is the efficient estimator by the Gauss–Markov theorem.

I see that you have constructed an interesting variance structure for $y$ that is nonnormal. Now, the important bit is that, regardless of the variance structure of $y$, the usual least squares estimate is still consistent for the right slope, although the error estimate may be conservative or anticonservative. Least squares happens to be the maximum likelihood estimate when the residuals are normal. Weighted estimates give different variance estimates, and due to the nature of the nonnormal residuals, we can't be certain whether the weighted version is more efficient, regardless of whether the weighting is correct.

The question remains: can you use ML for the known nonnormal variance structure? The answer is yes. For an unweighted estimate, the EM algorithm can do very well, since the objective is to "squeeze" the LS line within a certain range of values. So in the max step, penalize the LARGEST observation, since the ML estimate of a uniform distribution is based on the maximum:

```r
delta_y <- 1  # known half-width of the error bars

## Negative log-likelihoods under the uniform error model; the ML
## estimate of the half-width dyest is the largest absolute residual.
logLik  <- function(p, X, y)    { dyest <- max(abs(y - X %*% p)); -sum(dunif(y - X %*% p, -dyest, dyest, log = TRUE)) }
wLogLik <- function(p, X, y, w) { dyest <- max(abs(y - X %*% p)); -sum(w * dunif(y - X %*% p, -dyest, dyest, log = TRUE)) }

Unweighted <- nlm(logLik,  p = c(0, 1), X = X, y = y, hessian = TRUE)
Weighted   <- nlm(wLogLik, p = c(0, 1), X = X, y = y, w = w, hessian = TRUE)
```

Same logic applies to the weighted estimates, though (interestingly) I can't seem to get the Hessian to come up with anything that makes a lick of sense. Tsk, the problem with boundary estimates! I would bootstrap these to get the real SEs!
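The bootstrap suggestion above can be sketched as follows. This is a hypothetical, self-contained illustration: the simulated data, the uniform-error `logLik` objective passed to `nlm`, and the replication count are all made up for the sketch.

```r
# Hypothetical example: bootstrap SEs for the uniform-error ML fit,
# since the Hessian at a boundary estimate is unreliable.
set.seed(42)
n <- 100
x <- runif(n, 0, 10)
X <- cbind(1, x)                      # design matrix with intercept
y <- 2 + 0.5 * x + runif(n, -1, 1)    # uniform errors, half-width 1

# Negative log-likelihood; the ML estimate of the half-width
# is the largest absolute residual.
logLik <- function(p, X, y) {
  dyest <- max(abs(y - X %*% p))
  -sum(dunif(y - X %*% p, -dyest, dyest, log = TRUE))
}

B <- 200
boot <- replicate(B, {
  i <- sample(n, replace = TRUE)      # resample cases with replacement
  nlm(logLik, p = c(0, 1), X = X[i, ], y = y[i])$estimate
})
ses <- apply(boot, 1, sd)             # bootstrap SEs: intercept, slope
ses
```

Percentile intervals from the columns of `boot` would serve equally well if the sampling distribution looks skewed.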
