# Distribution of reciprocal of regression coefficient

Suppose that we have a linear model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$ that meets all the standard regression (Gauss-Markov) assumptions. We are interested in $\theta = 1/\beta_1$.

Question 1: What assumptions are necessary for the distribution of $\hat{\theta}$ to be well defined? $\beta_1 \neq 0$ would be important—any others?

Question 2: Add the assumption that the errors follow a normal distribution. We know that, if $\hat{\beta}_1$ is the MLE and $g(\cdot)$ is a monotonic function, then $g\left(\hat{\beta}_1\right)$ is the MLE for $g(\beta_1)$. Is monotonicity only necessary in a neighborhood of $\beta_1$? In other words, is $\hat{\theta} = 1/\hat{\beta}_1$ the MLE? The continuous mapping theorem at least tells us that this estimator is consistent.

Question 3: Are the Delta Method and the bootstrap both appropriate means of finding the distribution of $\hat{\theta}$?

Question 4: How do these answers change for the parameter $\gamma = \beta_0 / \beta_1$?

Aside: We might consider rearranging the model to give

$$x_i = -\frac{\beta_0}{\beta_1} + \frac{1}{\beta_1} y_i - \frac{\epsilon_i}{\beta_1} = -\gamma + \theta y_i + \epsilon_i^*$$

so as to estimate the parameters directly. This doesn't seem valid to me, as the Gauss-Markov assumptions no longer make sense here; we can't talk about $\text{E}[\epsilon \mid y]$, for example. Is this interpretation correct?

Q1. If $\hat\beta_1$ is the MLE of $\beta_1$, then $\hat\theta = 1/\hat\beta_1$ is the MLE of $\theta$, and $\beta_1 \neq 0$ is a sufficient condition for this estimator to be well-defined.

Q2. $\hat\theta = 1/\hat\beta_1$ is the MLE of $\theta$ by the invariance property of the MLE. In addition, you do not need monotonicity of $g$ unless you need to obtain its inverse; $g$ only needs to be well-defined at each point. You can check this in Theorem 7.2.1, p. 350, of "Probability and Statistical Inference" by Nitis Mukhopadhyay.

Q3. Yes, you can use both methods. I would also check the profile likelihood of $\theta$.
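To make this concrete, here is a minimal sketch comparing the two approaches on simulated data (the true values $\beta_0 = 2$, $\beta_1 = 0.5$, the sample size, and the error scale are illustrative choices, not from the question). The delta method gives $\operatorname{se}(\hat\theta) \approx \operatorname{se}(\hat\beta_1)/\hat\beta_1^2$ since $g(b) = 1/b$ has $g'(b) = -1/b^2$; the nonparametric bootstrap resamples $(x_i, y_i)$ pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: true beta0 = 2, beta1 = 0.5, so theta = 1/beta1 = 2.
n = 200
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

# OLS fit via least squares on the design matrix [1, x].
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - 2)           # unbiased error-variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)      # Var-Cov of (beta0_hat, beta1_hat)

theta_hat = 1.0 / beta_hat[1]

# Delta method: Var(1/b1_hat) ~ Var(b1_hat) / b1_hat^4.
se_theta_delta = np.sqrt(cov[1, 1]) / beta_hat[1] ** 2

# Nonparametric bootstrap: resample (x, y) pairs, refit, invert the slope.
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)
    bb, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    boot[b] = 1.0 / bb[1]
se_theta_boot = boot.std(ddof=1)

print(theta_hat, se_theta_delta, se_theta_boot)
```

When $\beta_1$ is well away from zero (relative to its standard error), the two standard errors should agree closely; as $\hat\beta_1$ approaches zero the bootstrap distribution becomes heavy-tailed and the delta-method normal approximation degrades.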

Q4. Here, you can reparameterise the model in terms of the parameters of interest $(\theta, \gamma)$. For instance, the MLE of $\gamma$ is $\hat\gamma = \hat\beta_0/\hat\beta_1$, and you can calculate the profile likelihood of this parameter or its bootstrap distribution as usual.

The approach you mention at the end is incorrect: you are actually considering a "calibration model", which you can look up in the literature. The only thing you need is to reparameterise in terms of the parameters of interest.

I hope this helps.

Kind regards.