Here's the situation: I'm estimating the mean of a random variable by sampling. We know the stdev of the RV pretty well; that converges quickly. A published result says this RV has a mean of 55 with a 95% confidence interval of ±20, which is the appropriate width for the number of trials they ran.
I'm doing my own tests, and I want to know whether I'm simulating this random variable correctly. The simulation is still running, but at the moment my estimate of the mean is 44.75, with about the same width of confidence interval. That means the standard error of each of our estimates is about 10.
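For context, here's the arithmetic behind that figure (a rough sketch, assuming the estimates are approximately normal, so a 95% interval is about ±1.96 standard errors):

```python
# Rough sketch: converting a 95% CI half-width into a standard error,
# assuming the estimate is approximately normal (95% CI ~ +/- 1.96 SE).
half_width = 20.0                       # reported 95% CI half-width
standard_error = half_width / 1.96
print(standard_error)                   # ~10.2, i.e. roughly 10 per estimate
```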
How can I quantitatively measure the likelihood that I am measuring the same random variable? I can plot the two normal distributions, one from each of our estimates, and say "hey, looks kinda close." I can say "eh, it's only off by ten, that's not too bad," but what's a good rigorous method to compare the two? Or perhaps a rule of thumb?
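For concreteness, here's a sketch of the kind of comparison I have in mind: treat the two estimates as independent, approximately normal, and compute a z-score for their difference (assuming standard errors of about 10 each, as above):

```python
from math import sqrt, erf

# Sketch: two-sample z-test on the difference between the two mean estimates.
# Assumes independent, approximately normal estimates with standard errors
# implied by the reported 95% intervals (~10 each).
mean_published, se_published = 55.0, 10.0
mean_mine, se_mine = 44.75, 10.0

# z-score for the difference of the two estimates
z = (mean_published - mean_mine) / sqrt(se_published**2 + se_mine**2)

# two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")   # z ~ 0.72, p ~ 0.47
```

The idea would be that a z well above 2 suggests the two estimates differ by more than sampling noise explains, while a small z (as here) is consistent with measuring the same quantity. Is something like this the right way to frame it?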