Variance of Monte Carlo integration with importance sampling
I am following these lecture slides on Monte Carlo integration with importance sampling. I am just implementing a very simple example: $\int_0^1 e^x\,dx$. For the importance sampling version, I rewrite $\int_0^1 e^x\,dx = \int_0^1 \frac{e^x}{p(x)}\cdot p(x)\,dx$ where $p(x) = 2.5\,x^{1.5}$. Then
$$\hat{I} = \frac{1}{N}\sum_{j=1}^{N} \frac{f(x_j)}{p(x_j)},$$
where the $x_j$ are sampled from $p$ (I use an inverse transform method here). For the variance, I have $\sigma_I^2 = \hat{\sigma}_I^2/N$ and
$$\hat{\sigma}_I^2 = \frac{1}{N} \sum_{j=1}^{N} \frac{f(x_j)^2}{p(x_j)^2} - \hat{I}^2.$$
I know I should expect the variance to decrease with importance sampling, but a plot of the variance against $N$ shows that not much happens. Can anyone explain what I'm doing incorrectly? I'm not sure how they are able to achieve such a drastic decrease in variance in the lecture slides.
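For concreteness, a minimal R sketch of the setup described above (my own summary, not taken from the slides): since the CDF of $p$ is $P(x) = x^{2.5}$, the inverse transform is $x = u^{1/2.5}$ for $u \sim \mathcal{U}(0,1)$.

# f(x) = e^x on (0,1); proposal density p(x) = 2.5 * x^1.5
# inverse transform: the CDF is x^2.5, so x = u^(1/2.5)
N <- 1e5
u <- runif(N)
x <- u^(1/2.5)                       # draws from p
w <- exp(x) / (2.5 * x^1.5)          # importance ratios f(x_j)/p(x_j)
I_hat <- mean(w)                     # estimate of the integral (true value e - 1)
sigma2_I_hat <- mean(w^2) - I_hat^2  # the \hat{\sigma}_I^2 above
var_I_hat <- sigma2_I_hat / N        # estimated variance of \hat{I}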
monte-carlo integral importance-sampling
asked 5 hours ago
user1799323
1 Answer
This is a good illustration of the dangers of importance sampling: while
$$\int_0^1 \frac{e^x}{p(x)}\, p(x)\,\text{d}x = \int_0^1 e^x \,\text{d}x = I$$
shows that $\hat{I}_N$ is an unbiased estimator of $I$, this estimator does not have a finite variance, because
$$\int_0^1 \left(\frac{e^x}{p(x)}\right)^2 p(x)\,\text{d}x = \int_0^1 \frac{e^{2x}}{2.5\,x^{1.5}}\,\text{d}x = \infty,$$
the integrand behaving like $x^{-1.5}/2.5$ near $x = 0$, where $\int_0^1 x^{-1.5}\,\text{d}x$ diverges. For instance,
> x = runif(1e7)^(1/2.5)   # inverse-transform draws from p (note the parentheses)
> range(exp(x)/x^1.5)
[1] 2.718282 83403.685972
shows that the weights can differ widely. I am not surprised at the figures reported in the above slides, since
> mean(exp(x)/x^1.5)/2.5          # importance sampling estimate of e - 1 = 1.7183...
[1] 1.717576
> var(exp(x)/x^1.5)/(2.5)^2/1e7   # empirical variance of the estimator
[1] 2.070953e-06
but the empirical variance is rarely able to spot infinite-variance importance sampling. (The graph shows that, for both the standard Monte Carlo estimate and the importance sampling version, the empirical standard deviation decreases as $N^{-1/2}$.)
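A common remedy for infinite-variance weights, not used above, is defensive importance sampling: mix the proposal with a density bounded away from zero, so the ratios stay bounded and the variance is finite. A minimal R sketch, assuming the mixture $q(x) = 0.5\,p(x) + 0.5$ on $(0,1)$:

# defensive mixture: q(x) = 0.5 * p(x) + 0.5 * 1 on (0,1)
N <- 1e7
u <- runif(N)
x <- ifelse(runif(N) < 0.5, u^(1/2.5), u)  # draw from the mixture
q <- 0.5 * 2.5 * x^1.5 + 0.5               # mixture density at x
w <- exp(x) / q                            # ratios bounded by e/0.5
mean(w)                                    # estimate of e - 1
var(w) / N                                 # now a trustworthy variance estimate

Because the uniform component keeps $q$ bounded below by $0.5$, the second moment $\int_0^1 e^{2x}/q(x)\,\text{d}x$ is finite and the empirical variance becomes an honest diagnostic.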
edited 5 hours ago
answered 5 hours ago
Xi'an