I have a generated-regressor problem at hand. I estimate the following two equations:
x_it = g(z1_it, a1) + err1_it

z_ijt = h(z2_ijt, z1_it, \hat{a1}, a2) + err2_ijt
where the notation is as follows: z1 and z2 are (exogenous) variables, a1 and a2 are parameters to estimate, and \hat{a1} denotes the consistent estimates of a1 from the first equation. err1_it is an independent error, but its variance depends on i; err2 is iid (can it be??). Think of t as "time" and i as some "individual".
I know that the estimates of a2 from a two-stage approach are consistent if the estimates of a1 are. My problem is inference. Full Information Maximum Likelihood here is a catastrophic mess, so I want to estimate this in a two-stage procedure. The standard variance correction (Murphy and Topel) is also out of the question, for reasons I am not going to explain.
I want to bootstrap, but I am never sure how to do these things correctly. I started block-bootstrapping over the i blocks: that is what you do with panel data, after all. I got my distribution of \hat{a1} and was very happy, when it dawned on me: shouldn't I block-bootstrap the "it" blocks instead? The assumption (if it even makes sense; at this point I am not sure) is that err2 is iid, so I would not need a block bootstrap to preserve its correlation structure. Also, it is the generated regressor from the first stage that I want bootstrapped statistics on, not the second stage. But I am very confused. What is the theory behind this? Does anyone have an idea and/or a reference?
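For concreteness, here is a minimal sketch of what I mean by block-bootstrapping the i blocks. The functions `estimate_stage1` and `estimate_stage2` are hypothetical stand-ins for whatever actually fits g and h, and the pandas column names are just placeholders:

```python
import numpy as np
import pandas as pd

def cluster_bootstrap(df, estimate_stage1, estimate_stage2,
                      n_boot=999, seed=12345):
    """Block (cluster) bootstrap over individuals i for a two-stage estimator.

    Each replication resamples whole i-blocks with replacement, keeping all
    t (and j) observations of a drawn individual together, then re-runs BOTH
    stages, so the sampling noise in a1_hat carries into the draws of a2_hat.
    """
    rng = np.random.default_rng(seed)
    ids = df["i"].unique()
    a2_draws = []
    for _ in range(n_boot):
        sampled = rng.choice(ids, size=len(ids), replace=True)
        # Re-label duplicated individuals so they count as distinct blocks.
        boot = pd.concat(
            (df[df["i"] == s].assign(i=k) for k, s in enumerate(sampled)),
            ignore_index=True,
        )
        a1_hat = estimate_stage1(boot)                  # first stage: x on z1
        a2_draws.append(estimate_stage2(boot, a1_hat))  # second stage uses fresh a1_hat
    a2_draws = np.asarray(a2_draws)
    ci = np.percentile(a2_draws, [2.5, 97.5], axis=0)   # percentile 95% CI
    return a2_draws, ci
```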
Edit: details for clarity