Variance of Monte Carlo integration with importance sampling

I am following these lecture slides on Monte Carlo integration with importance sampling. I am implementing a very simple example: $\int_0^1 e^x\,dx$. For the importance-sampling version, I rewrite $\int_0^1 e^x\,dx = \int_0^1 \frac{e^x}{p(x)}\, p(x)\,dx$ where $p(x) = 2.5x^{1.5}$. Then



$$\hat{I} = \frac{1}{N}\sum_{j=1}^N \frac{f(x_j)}{p(x_j)},$$



where the $x_j$ are sampled from $p$ (I use an inverse-transform method here). For the variance, I have $\sigma_I^2 = \hat{\sigma}_I^2/N$ and



$$\hat{\sigma}_I^2 = \frac{1}{N} \sum_{j=1}^N \frac{f(x_j)^2}{p(x_j)^2} - \hat{I}^2.$$



I know I should expect the variance to decrease with importance sampling, but a plot of the variance against $N$ shows that not much happens. Can anyone explain what I'm doing incorrectly? I'm not sure how they are able to achieve such a drastic decrease in variance in the lecture slides.



[Figure: empirical variance of the estimator plotted against $N$]
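For reference, here is a minimal Python sketch of the setup described above (the slides' own implementation is not shown in the question; the sample size and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Inverse-transform sampling from p(x) = 2.5 * x^1.5 on (0, 1]:
# the CDF is F(x) = x^2.5, so x = u^(1/2.5) with u ~ Uniform(0, 1).
u = rng.random(N)
x = u ** (1 / 2.5)

f = np.exp(x)                     # integrand f(x) = e^x
w = f / (2.5 * x ** 1.5)          # importance weights f(x_j) / p(x_j)

I_hat = w.mean()                  # estimate of int_0^1 e^x dx = e - 1
sigma2_hat = (w ** 2).mean() - I_hat ** 2
var_I_hat = sigma2_hat / N        # estimated variance of I_hat

print(I_hat, var_I_hat)
```

The estimate lands near $e - 1 \approx 1.718$, but note that the weight $e^x/(2.5x^{1.5})$ blows up as $x \to 0$, which matters for the variance behaviour.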










Tags: monte-carlo, integral, importance-sampling






asked 5 hours ago by user1799323

1 Answer


















This is a good illustration of the dangers of importance sampling: while
$$\int_0^1 \frac{e^x}{p(x)}\, p(x)\,\mathrm{d}x = \int_0^1 e^x\, \mathrm{d}x = I$$
shows that $\hat{I}_N$ is an unbiased estimator of $I$, this estimator does not have a finite variance, since
$$\int_0^1 \left(\frac{e^x}{p(x)}\right)^2 p(x)\,\mathrm{d}x = \int_0^1 \frac{e^{2x}}{2.5\, x^{1.5}}\, \mathrm{d}x = \infty$$
because the integral diverges at $x=0$. For instance,



    > x=runif(1e7)^(1/2.5)
    > range(exp(x)/x^1.5)
    [1]     2.718282 83403.685972


shows that the weights can differ widely. I am not surprised by the figures reported in the slides, since



    > mean(exp(x)/x^1.5)/2.5
    [1] 1.717576
    > var(exp(x)/x^1.5)/(2.5)^2/1e7
    [1] 2.070953e-06


but the empirical variance is rarely able to spot infinite-variance importance sampling. (The graph shows that, for both the standard Monte Carlo estimate and the importance-sampling version, the empirical standard deviation decreases as $N^{-1/2}$.)
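The divergence at $x=0$ can also be checked without simulation: since $e^{2x} \ge 1$ on $(0,1]$, the second moment is bounded below by a truncated integral that blows up as the cutoff shrinks. A small Python check (the cutoff values are arbitrary):

```python
# E[(f/p)^2] = int_0^1 e^{2x} / (2.5 x^{1.5}) dx.
# Since e^{2x} >= 1 on (0, 1], truncating the domain at eps gives the lower bound
#   int_eps^1 x^{-1.5} / 2.5 dx = (2 / 2.5) * (eps^{-1/2} - 1),
# which grows without bound as eps -> 0, so the variance is infinite.
bounds = {eps: (2 / 2.5) * (eps ** -0.5 - 1) for eps in [1e-2, 1e-4, 1e-6, 1e-8]}
for eps, lb in bounds.items():
    print(f"eps = {eps:.0e}:  second moment >= {lb:.1f}")
```

Each factor-of-100 reduction in the cutoff multiplies the lower bound by roughly 10, mirroring the $x^{-1.5}$ singularity.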

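For contrast (this goes beyond the answer above): in this toy problem the optimal proposal is $p^*(x) \propto e^x$, i.e. $p^*(x) = e^x/(e-1)$, which makes every weight constant, so the importance-sampling estimator has zero variance. A hedged Python sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Optimal proposal p*(x) = e^x / (e - 1) on (0, 1]: the weights f/p* are constant.
# Inverse transform: F(x) = (e^x - 1)/(e - 1), so x = log(1 + u*(e - 1)).
u = rng.random(N)
x = np.log1p(u * (np.e - 1))

w = np.exp(x) / (np.exp(x) / (np.e - 1))  # f(x)/p*(x) = e - 1 for every sample
print(w.mean(), w.var())                  # estimate e - 1, zero variance (up to rounding)
```

Of course the zero-variance proposal requires knowing the integral in advance; the practical lesson is only that the proposal's tails must dominate those of $f$.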





answered 5 hours ago by Xi'an (edited 5 hours ago)


























