Simulating a probability of 1 of 2^N with less than N random bits


Say I need to simulate the following discrete distribution:



$$
P(X = k) =
\begin{cases}
\frac{1}{2^N}, & \text{if } k = 1 \\
1 - \frac{1}{2^N}, & \text{if } k = 0
\end{cases}
$$



The most obvious way is to draw $N$ random bits and check whether all of them equal $0$ (or all equal $1$). However, information theory says



$$
\begin{align}
S & = - \sum_i P_i \log P_i \\
  & = - \frac{1}{2^N} \log \frac{1}{2^N} - \left(1 - \frac{1}{2^N}\right) \log\left(1 - \frac{1}{2^N}\right) \\
  & = \frac{1}{2^N} \log 2^N + \left(1 - \frac{1}{2^N}\right) \log \frac{2^N}{2^N - 1} \\
  & \rightarrow 0
\end{align}
$$



So the minimum number of random bits required actually decreases as $N$ grows large. How is this possible?



Please assume that we are running on a computer where fair random bits are your only source of randomness, so you can't just toss a biased coin.
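For concreteness, a minimal Python sketch of this obvious method (illustrative only; it assumes `random.getrandbits` as the source of fair bits):

    import random

    def sample_naive(N):
        """Draw N fair random bits; report 1 exactly when all of them are 0 (probability 1/2^N)."""
        bits = [random.getrandbits(1) for _ in range(N)]
        return int(not any(bits))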










Tags: algorithms, information-theory, randomness, pseudo-random-generators, entropy






asked 6 hours ago by nalzok, edited 5 hours ago
1 Answer






          Wow, great question! Let me try to explain the resolution. It'll take three distinct steps.



The first thing to note is that entropy is about the average number of bits needed per draw, not the maximum number of bits needed.



          With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on.
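To make the early-stopping behaviour concrete, here is a small illustrative sketch (my own, not part of the original answer) that also estimates the average number of bits consumed:

    import random

    def sample_early_exit(N):
        """Return (outcome, bits_used); outcome is 1 only if the first N fair bits are all 0."""
        for used in range(1, N + 1):
            if random.getrandbits(1) == 1:   # the all-zero event has already failed
                return 0, used
        return 1, N

    trials = 100_000
    avg_bits = sum(sample_early_exit(30)[1] for _ in range(trials)) / trials
    print(avg_bits)   # typically prints a value close to 2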



The second thing to note is that the entropy doesn't really capture the average number of bits needed for a single draw. Instead, the entropy captures the amortized number of bits needed to sample $m$ iid draws from this distribution. Suppose we need $f(m)$ bits to sample $m$ draws; then the entropy is the limit of $f(m)/m$ as $m \to \infty$.



The third thing to note is that, with this distribution, you can sample $m$ iid draws with fewer bits than needed to repeatedly sample one draw at a time. Suppose you naively decided to draw one sample (taking 2 random bits on average), then draw another sample (using 2 more random bits on average), and so on, until you've repeated this $m$ times. That would require about $2m$ random bits on average.



But it turns out there's a way to sample $m$ draws using fewer than $2m$ bits. It's hard to believe, but it's true!



Let me give you the intuition. Suppose you wrote down the result of sampling $m$ draws, where $m$ is really large. Then the result could be specified as an $m$-bit string. This $m$-bit string will be mostly 0's, with a few 1's in it: in particular, on average it will have about $m/2^N$ 1's (could be more or less than that, but if $m$ is sufficiently large, usually the number will be close to that). The lengths of the gaps between the 1's are random, but will typically be somewhere vaguely in the vicinity of $2^N$ (could easily be half that or twice that or even more, but of that order of magnitude). Of course, instead of writing down the entire $m$-bit string, we could write it down more succinctly by writing down a list of the lengths of the gaps -- that carries all the same information, in a more compressed format. How much more succinct? Well, we'll usually need about $N$ bits to represent the length of each gap; and there will be about $m/2^N$ gaps; so we'll need in total about $mN/2^N$ bits (could be a bit more, could be a bit less, but if $m$ is sufficiently large, it'll usually be close to that). That's a lot shorter than an $m$-bit string.



And if there's a way to write down the string this succinctly, perhaps it won't be too surprising if that means there's a way to generate the string with a number of random bits comparable to the length of the string. In particular, you randomly generate the length of each gap; this is sampling from a geometric distribution with $p=1/2^N$, and that can be done with roughly $\sim N$ random bits on average (not $2^N$). You'll need about $m/2^N$ iid draws from this geometric distribution, so you'll need in total roughly $\sim Nm/2^N$ random bits. (It could be a small constant factor larger, but not too much larger.) And notice that this is much smaller than $2m$ bits.
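A rough illustration of this gap-based generation (my own sketch, not part of the answer; for brevity it draws each gap from a floating-point uniform via the inverse CDF, whereas the point of the argument is that each gap can also be produced from roughly $N$ random bits):

    import math
    import random

    def sample_m_draws_via_gaps(m, N):
        """Produce m iid draws with P(1) = 1/2^N by generating the gaps between the 1's."""
        p = 2.0 ** (-N)
        out = []
        while len(out) < m:
            u = 1.0 - random.random()                       # uniform in (0, 1]
            gap = math.floor(math.log(u) / math.log1p(-p))  # geometric: number of 0's before the next 1
            out.extend([0] * min(gap, m - len(out)))
            if len(out) < m:
                out.append(1)
        return out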



So, we can sample $m$ iid draws from your distribution, using just $f(m) \sim Nm/2^N$ random bits (roughly). Recall that the entropy is $\lim_{m \to \infty} f(m)/m$. So this means that you should expect the entropy to be (roughly) $N/2^N$. That's off by a little bit, because the above calculation was sketchy and crude -- but hopefully it gives you some intuition for why the entropy is what it is, and why everything is consistent and reasonable.






answered 2 hours ago by D.W.
• Wow, great answer! But could you elaborate on why sampling from a geometric distribution with $p=\frac{1}{2^N}$ takes $N$ bits on average? I know such a random variable would have a mean of $2^N$, so it takes on average $N$ bits to store, but I suppose this doesn't mean you can generate one with $N$ bits. – nalzok, 50 mins ago











• @nalzok, A fair question! Could you perhaps ask that as a separate question? I can see how to do it, but it's a bit messy to type up right now. If you ask, perhaps someone will get to answering quicker than I can. The approach I'm thinking of is similar to arithmetic coding. Define $q_i = \Pr[X \le i]$ (where $X$ is the geometric r.v.), then generate a random number $r$ in the interval $[0,1)$, and find $i$ such that $q_i \le r < q_{i+1}$. If you write down the bits of the binary expansion of $r$ one at a time, usually after writing down $N+O(1)$ bits of $r$, $i$ will be fully determined. – D.W., 23 mins ago










• So you're basically using the inverse CDF method to convert a uniformly distributed random variable to an arbitrary distribution, combined with an idea similar to binary search? I'll need to analyze the quantile function of a geometric distribution to be sure, but this hint is enough. Thanks! – nalzok, 14 mins ago







• @nalzok, ahh, yes, that's a nicer way to think about it -- lovely. Thank you for suggesting that. Yup, that's what I had in mind. – D.W., 13 mins ago
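Following up on the arithmetic-coding hint in the comments, here is a rough Python sketch of that idea (my own illustration, not D.W.'s code; it uses floating-point intervals purely for readability, so it only conveys the mechanism):

    import math
    import random

    def sample_geometric_bitwise(N, max_bits=256):
        """Sample G ~ Geometric(p = 2^-N), the number of 0-draws before the first 1,
        by revealing the binary expansion of a uniform r in [0,1) one fair bit at a
        time and stopping once the quantile is pinned down (arithmetic-coding style)."""
        p = 2.0 ** (-N)
        log_q = math.log1p(-p)                      # log(1 - p)

        def quantile(r):                            # the g with F(g-1) <= r < F(g)
            return math.floor(math.log1p(-r) / log_q)

        def cdf(g):                                 # F(g) = 1 - (1-p)^(g+1)
            return -math.expm1((g + 1) * log_q)

        lo, hi = 0.0, 1.0                           # interval known to contain r
        for bits_used in range(max_bits):
            g = quantile(lo)
            if hi <= cdf(g):                        # the whole interval maps to one g
                return g, bits_used
            mid = (lo + hi) / 2
            if random.getrandbits(1):               # reveal the next bit of r
                lo = mid
            else:
                hi = mid
        return quantile(lo), max_bits               # precision exhausted; rare fallback

For moderate $N$ this stops after roughly $N + O(1)$ revealed bits on average, matching the comment's claim (double-precision arithmetic limits how far the interval can be refined, so this is a sketch rather than a careful implementation).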










