Question about numpy.random.choice with probabilities
Hi Nadav, I may be wrong, but I think that the result of the current implementation is actually the expected one. Using your example, the probabilities for items 1, 2 and 3 are 0.2, 0.4 and 0.4, and

P([1,2]) = P([2] | 1st=[1]) P([1]) + P([1] | 1st=[2]) P([2])

Now, P([1]) = 0.2 and P([2]) = 0.4. However, P([2] | 1st=[1]) = 0.5 (2 and 3 have the same sampling probability), and P([1] | 1st=[2]) = 1/3 (1 and 3 have probabilities 0.2 and 0.4 which, once normalised, become 1/3 and 2/3 respectively). Therefore P([1,2]) = 0.7/3 = 0.23333. Similarly, P([1,3]) = 0.23333 and P([2,3]) = 1.6/3 = 0.53333.

What am I missing?

Alessandro
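As a quick sanity check of these numbers, here is a small Monte Carlo sketch (illustrative only - it assumes numpy is available, and the trial count is arbitrary) that tallies the unordered pairs returned by numpy.random.choice with replace=False:

```python
import itertools
import numpy as np

# Items 1, 2, 3 from the example are indices 0, 1, 2 here.
p = [0.2, 0.4, 0.4]
n_trials = 100_000
counts = {pair: 0 for pair in itertools.combinations(range(3), 2)}

for _ in range(n_trials):
    draw = np.random.choice(3, size=2, replace=False, p=p)
    counts[tuple(sorted(int(i) for i in draw))] += 1

for pair, count in counts.items():
    print(pair, count / n_trials)
# Up to sampling noise:
# (0, 1) ~0.233, (0, 2) ~0.233, (1, 2) ~0.533
```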
On Tue, Jan 17, 2017 at 7:18 PM, alebarde@gmail.com <alebarde@gmail.com> wrote:
Yes, this formula does fit well with the actual algorithm in the code. But my question is *why* we want this formula to be correct:
Right, these are the numbers that the algorithm in the current code, and the formula above, produce:

P([1,2]) = P([1,3]) = 0.23333
P([2,3]) = 0.53333

What I'm puzzled about is that these probabilities do not really fulfill the given probability vector 0.2, 0.4, 0.4... Let me try to explain: why did the user choose the probabilities 0.2, 0.4, 0.4 for the three items in the first place? One reasonable interpretation is that the user wants his random picks to contain item 1 half as often as item 2 or 3. For example, maybe item 1 costs twice as much as item 2 or 3, so picking it half as often will result in an equal expenditure on each item.

If the user randomly picks the items individually (a single item at a time), he indeed gets exactly this distribution: 0.2 of the time item 1, 0.4 of the time item 2, 0.4 of the time item 3.

Now, what happens if he picks not individual items, but pairs of different items, using numpy.random.choice with two items and replace=False? Suddenly, the distribution of the individual items in the results gets skewed: if we look at the expected number of times we'll see each item in one draw of a random pair, we get:

E(1) = P([1,2]) + P([1,3]) = 0.46666
E(2) = P([1,2]) + P([2,3]) = 0.76666
E(3) = P([1,3]) + P([2,3]) = 0.76666

Or, renormalizing by dividing by 2:

P(1) = 0.23333
P(2) = 0.38333
P(3) = 0.38333

As you can see, these are not quite the probabilities we wanted (which were 0.2, 0.4, 0.4)! In the random pairs we picked, item 1 was used a bit more often than we wanted, and items 2 and 3 were used a bit less often. So that brought my question of why we consider these numbers right.

In this example, it's actually possible to get the right item distribution, if we pick the pair outcomes with the following probabilities:

P([1,2]) = 0.2 (not 0.23333 as above)
P([1,3]) = 0.2
P([2,3]) = 0.6 (not 0.53333 as above)

Then we get exactly the right P(1), P(2), P(3): 0.2, 0.4, 0.4.

Interestingly, fixing things as I suggest is not always possible. Consider a different probability vector for three items: 0.99, 0.005, 0.005. Now, no matter which algorithm we use for randomly picking pairs from these three items, *each* returned pair will inevitably contain one of the two very-low-probability items, so each of those items will appear in roughly half the pairs, instead of in a vanishingly small percentage as we hoped.

But for other choices of probabilities (like the one in my original example), there is a solution. For 2-out-of-3 sampling we can actually write a system of three linear equations in three variables, so there is always one solution, but if that solution has components that are not valid as probabilities (not in [0,1]) we end up with no solution - as happens in the 0.99, 0.005, 0.005 example.
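To make the arithmetic above easy to re-check, here is a tiny illustrative script (plain Python; the names are made up for this sketch) that converts a distribution over pairs into the share of each individual item among the returned items:

```python
# Items 1, 2, 3 from the example; each entry of the pair distributions below
# is the probability of returning that unordered pair in one draw.
pairs = [(1, 2), (1, 3), (2, 3)]
p_algorithm = [7/30, 7/30, 16/30]   # ~0.2333, 0.2333, 0.5333: what the current algorithm yields
p_corrected = [0.2, 0.2, 0.6]       # the alternative pair distribution suggested above

def item_shares(pair_probs):
    """Share of each item among the items contained in one random pair."""
    shares = {1: 0.0, 2: 0.0, 3: 0.0}
    for (i, j), q in zip(pairs, pair_probs):
        shares[i] += q / 2          # each drawn pair contributes two items
        shares[j] += q / 2
    return shares

print(item_shares(p_algorithm))   # {1: ~0.2333, 2: ~0.3833, 3: ~0.3833}
print(item_shares(p_corrected))   # {1: 0.2, 2: 0.4, 3: 0.4}
```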
On Tue, Jan 17, 2017 at 4:13 PM, Nadav Har'El <nyh@scylladb.com> wrote:
But in other choices of probabilities (like the one in my original example), there is a solution. For 2-out-of-3 sampling we can actually write a system of three linear equations in three variables, so there is always one solution, but if that solution has components that are not valid as probabilities (not in [0,1]) we end up with no solution - as happens in the 0.99, 0.005, 0.005 example.

I think the underlying problem is that in the sampling space the events (1, 2), (1, 3), (2, 3) are correlated, and because of the discreteness an arbitrary marginal distribution on the individual events 1, 2, 3 is not possible.

A related aside: I'm not able (or willing to spend the time) to do the math, but I just went through something similar for survey sampling in a finite population (e.g. survey two out of 3 individuals, where 3 is the population), leading to the Horvitz–Thompson estimator. The books have chapters on different sampling schemes and the derivation of the marginal and joint probabilities of being surveyed. (I gave up on sampling without replacement, and assume we have a large population where it doesn't make a difference.) In some of the sampling schemes they pick sequentially and adjust the probabilities for the remaining individuals. That seems to provide more flexibility to create a desired or optimal sampling scheme.

Josef
2017-01-17 22:13 GMT+01:00 Nadav Har'El <nyh@scylladb.com>:
The formula follows from fundamental laws: https://en.wikipedia.org/wiki/Law_of_total_probability + https://en.wikipedia.org/wiki/Bayes%27_theorem Thus, the result we get from random.choice IMHO definitely makes sense. Of course, I think we could always discuss implementing other sampling methods if they are useful for some application.
p is not the probability of the output but that of the source finite population. I think that if you want to preserve that distribution then, as Josef pointed out, you have to make the extractions independent - that is, either sample with replacement or approximate an infinite population (which is basically the same thing). But of course in this case you will also end up with events [X,X].
On Tue, Jan 17, 2017 at 6:58 PM, alebarde@gmail.com <alebarde@gmail.com> wrote:
With replacement and keeping duplicates, the results might also be similar in the pattern of the marginal probabilities: https://onlinecourses.science.psu.edu/stat506/node/17 Another approach in survey sampling is to drop duplicates in with-replacement sampling, but then the sample size itself is random. (Again, I didn't try to understand the small print.) (Another related aside: the problem with a discrete sample space in small samples also shows up in calculating hypothesis tests, e.g. Fisher's exact or similar. Because we only get a few discrete possibilities in the sample space, it is not possible to construct a test that has exactly the desired type 1 error.) Josef
On Wed, Jan 18, 2017 at 1:58 AM, alebarde@gmail.com <alebarde@gmail.com> wrote:
Hi, yes, of course the formula is correct, but that doesn't mean we aren't applying it in the wrong context. I'll be honest here: I came to numpy.random.choice after I had actually coded a similar algorithm (with the same results) myself, because like you I thought this was the "obvious" and correct algorithm. Only then did I realize that its output doesn't actually produce the desired probabilities specified by the user - even in the cases where that is possible. So I started wondering whether existing libraries - like numpy - do this differently. And it turns out, numpy does it (basically) the same way as my algorithm.
Thus, the result we get from random.choice IMHO definitely makes sense.
Let's look at what the user asked of this function, and what it returns:

User asks: please give me random pairs of the three items, where item 1 has probability 0.2, item 2 has 0.4, and item 3 has 0.4.

Function returns: random pairs where, if you draw many results (as in the law of large numbers) and look at the items they contain, item 1 is 0.2333 of the items, item 2 is 0.38333, and item 3 is 0.38333.

These are not (quite) the probabilities the user asked for... Can you explain a sense in which the user's requested probabilities (0.2, 0.4, 0.4) are actually adhered to in the results which random.choice returns?

Thanks, Nadav Har'El.
2017-01-18 9:35 GMT+01:00 Nadav Har'El <nyh@scylladb.com>:
I think that the question the user is asking by specifying p is a slightly different one: "please give me random pairs of the three items extracted from a population of 3 items, where item 1 has a probability of being extracted of 0.2, item 2 has 0.4, and item 3 has 0.4. Also, please remove items from the population once they have been extracted."
On Wed, Jan 18, 2017 at 11:00 AM, alebarde@gmail.com <alebarde@gmail.com> wrote:
You are right: if that is what the user wants, numpy.random.choice does the right thing. I'm just wondering whether this is actually what users want, and whether they understand this is what they are getting. As I said, I expected it to generate pairs with, empirically, the desired distribution of individual items. The documentation of numpy.random.choice seemed to me (wrongly, it turns out) to imply that that's what it does. So I was surprised to realize that it does not. Nadav.
On Wed, Jan 18, 2017 at 4:52 AM, Nadav Har'El <nyh@scylladb.com> wrote:
As Alessandro and you showed, the function returns something that makes sense. If the user wants something different, then they need to look for a different function - which is, however, difficult if it doesn't have a solution in general. Sounds to me a bit like a Monty Hall problem. Whether we like it or not, or find it counterintuitive, it is what it is given the sampling scheme. Having more sampling schemes would be useful, but it's not possible to implement sampling schemes with impossible properties. Josef
On Wed, Jan 18, 2017 at 8:53 AM, <josef.pktd@gmail.com> wrote:
BTW: sampling 3 out of 3 without replacement is even worse. No matter what sampling scheme and what selection probabilities we use, we always have every element with probability 1 in the sample. (Which in survey statistics implies that the sampling error or standard deviation of any estimate of a population mean or total is zero. Which I found weird: how can you do statistics and get an estimate that doesn't have any uncertainty associated with it?) Josef
On Wed, Jan 18, 2017 at 4:30 PM, <josef.pktd@gmail.com> wrote:
I agree. The random-sample function of the type I envisioned will be able to reproduce the desired probabilities in some cases (like the example I gave) but not in others. Because doing this correctly involves a set of n linear equations in comb(n,k) variables, it can have no solution, or many solutions, depending on n, k, and the desired probabilities. A function of this sort could return an error if it can't achieve the desired probabilities. But in many cases (the 0.2, 0.4, 0.4 example I gave was just something random I tried) there will be a way to achieve exactly the desired distribution.

I guess I'll need to write this new function myself :-), because my use case definitely requires that the distribution of the random items produced matches the required probabilities (when possible).

Thanks, Nadav.
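For what it's worth, here is a rough sketch of what the first stage of such a function could look like (purely illustrative: the function name is made up and it assumes scipy is available). It sets up the n marginal equations over the comb(n, k) subset probabilities and asks a linear-programming solver for any feasible point:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def subset_probabilities(p, k):
    """Look for probabilities over the k-item subsets of range(len(p)) whose
    per-item inclusion frequencies match p; return {subset: probability},
    or None when no valid solution exists."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    subsets = list(itertools.combinations(range(n), k))
    # One equation per item i: the sum of q[s] over subsets s containing i
    # must equal k * p[i].
    A_eq = np.array([[1.0 if i in s else 0.0 for s in subsets] for i in range(n)])
    b_eq = k * p
    res = linprog(c=np.zeros(len(subsets)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(subsets), method="highs")
    return dict(zip(subsets, res.x)) if res.success else None

print(subset_probabilities([0.2, 0.4, 0.4], 2))
# ~{(0, 1): 0.2, (0, 2): 0.2, (1, 2): 0.6} - reproduces the desired item frequencies
print(subset_probabilities([0.99, 0.005, 0.005], 2))
# None - infeasible, as in the 0.99, 0.005, 0.005 example above
```

Once a feasible table of subset probabilities is found, drawing a sample is just a weighted choice among the subsets themselves.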
On Wed, Jan 18, 2017 at 4:13 PM Nadav Har'El <nyh@scylladb.com> wrote:
It seems to me that the basic problem here is that the numpy.random.choice docstring fails to explain what the function actually does when called with weights and without replacement. Clearly there are different expectations; I think numpy.random.choice chose one that is easy to explain and implement but not necessarily what everyone expects. So the docstring should be clarified. Perhaps a Notes section:

When numpy.random.choice is called with replace=False and non-uniform probabilities, the resulting distribution of samples is not obvious. numpy.random.choice effectively follows the procedure: when choosing the kth element in a set, the probability of element i occurring is p[i] divided by the total probability of all not-yet-chosen (and therefore eligible) elements. This approach is always possible as long as the sample size is no larger than the population, but it means that the probability that element i occurs in the sample is not exactly p[i].

Anne
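For readers who prefer code to prose, the procedure described in that proposed Note can be sketched as a toy re-implementation (illustrative only - this is not the actual numpy source, and the helper name is invented):

```python
import numpy as np

def sequential_choice(n, k, p):
    """Draw k distinct indices from range(n); at every step the weights of
    the not-yet-chosen elements are renormalised, as in the proposed Note."""
    p = np.asarray(p, dtype=float)
    remaining = list(range(n))
    chosen = []
    for _ in range(k):
        weights = p[remaining] / p[remaining].sum()
        pick = np.random.choice(len(remaining), p=weights)
        chosen.append(remaining.pop(pick))
    return chosen
```

Tallying many calls of sequential_choice(3, 2, [0.2, 0.4, 0.4]) should reproduce the ~0.233 / 0.233 / 0.533 pair probabilities computed earlier in the thread.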
On Mon, Jan 23, 2017 at 6:27 AM, Anne Archibald <peridot.faceted@gmail.com> wrote:
When numpy.random.choice is called with replace=False and non-uniform probabilities, the resulting distribution of samples is not obvious. numpy.random.choice effectively follows the procedure: when choosing the kth element in a set, the probability of element i occurring is p[i] divided by the total probability of all not-yet-chosen (and therefore eligible) elements. This approach is always possible as long as the sample size is no larger than the population, but it means that the probability that element i occurs in the sample is not exactly p[i].

I don't object to some Notes, but I would probably phrase it more like: we are providing the standard definition of the jargon term "sampling without replacement" in the case of non-uniform probabilities. To my mind (or more accurately, with my background), "replace=False" obviously picks out the implemented procedure, and I would have been incredibly surprised if it did anything else. If the option were named "unique=True", then I would have needed some more documentation to let me know exactly how it was implemented.

-- Robert Kern
On Mon, Jan 23, 2017 at 4:52 PM, alebarde@gmail.com <alebarde@gmail.com> wrote:
With my own background (MSc in Mathematics), I agree that this algorithm is indeed the most natural one. And as I said, when I wanted to choose random combinations (k out of n items) myself, I wrote exactly the same one. But when it didn't produce the desired probabilities (even in cases where I knew that doing this was possible), I wrongly assumed numpy would do things differently - only to realize it uses exactly the same algorithm. So clearly, the documentation didn't quite explain what it does or doesn't do.

Also, Robert, I'm curious: beyond explaining why the existing algorithm is reasonable (which I accept), could you give me an example of where it is actually *useful* for sampling? Let me give you an illustrative counter-example.

Let's imagine a country that has 3 races: 40% Lilliputians, 40% Blefuscans, and 20% Yahoos (immigrants from a different section of the book ;-)). Gulliver wants to take a poll, and needs to sample people from all these races in the appropriate proportions. These races live in different parts of town, so to pick a random person he needs to first pick one of the races and then a random person from that part of town.

If he picks one respondent at a time, he uses numpy.random.choice(3, size=1, p=[0.4,0.4,0.2]) to pick the part of town, and then a person from that part - he gets the desired 40% / 40% / 20% division of races.

Now imagine that Gulliver can interview two respondents each day, so he needs to pick two people each time. If he picks 2 choices of part-of-town *with* replacement, numpy.random.choice(3, size=2, p=[0.4,0.4,0.2]), that's also fine: he may need to take two people from the same part of town, or two from two different parts of town, but in any case he will still get the desired 40% / 40% / 20% division between the races of the people he interviews.

But suppose we are told that if two people from the same race meet in Gulliver's interview room, the two start chatting between themselves and waste Gulliver's time. So he prefers to interview two people of *different* races. That's sampling without replacement. So he uses numpy.random.choice(3, size=2, p=[0.4,0.4,0.2], replace=False) to pick two different parts of town, and one person from each. But then he looks at his logs, and discovers he actually interviewed the races at 38% / 38% / 23% proportions - not the 40% / 40% / 20% he wanted. So the opinions of the Yahoos were over-counted in this poll!

I know that this is a silly example (made even sillier by the names of races I used), but I wonder if you could give me an example where the current behavior of replace=False is genuinely useful. Not that I'm saying that fixing this problem is easy (I'm still struggling with it myself in the general case of size < n-1).

Nadav.
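The 38% / 38% / 23% figure can be checked with a few lines of simulation (a sketch only; it assumes numpy, and the number of simulated days is arbitrary):

```python
import numpy as np

p = [0.4, 0.4, 0.2]        # Lilliputians, Blefuscans, Yahoos
n_days = 100_000
counts = np.zeros(3)

for _ in range(n_days):
    # Two different parts of town per day, as in the story:
    counts[np.random.choice(3, size=2, replace=False, p=p)] += 1

print(counts / counts.sum())
# Roughly [0.383, 0.383, 0.233] - not the intended [0.4, 0.4, 0.2]
```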
On Mon, Jan 23, 2017 at 9:41 AM, Nadav Har'El <nyh@scylladb.com> wrote:
But when it didn't produce the desired probabilities (even in cases where I knew that doing this was possible), I wrongly assumed numpy would do things differently - only to realize it uses exactly the same algorithm. So clearly, the documentation didn't quite explain what it does or doesn't do.

In my experience, I have seen "without replacement" mean only one thing. If the docstring had said "returns unique items", I'd agree that it doesn't explain what it does or doesn't do. The only issue is that "without replacement" is jargon, and it is good to recapitulate the definitions of such terms for those who aren't familiar with them.
The references I previously quoted list a few. One is called "multistage sampling proportional to size". The idea is that you draw (without replacement) from larger units (say, congressional districts) before sampling within them. It is similar to the situation you outline, but it is probably more useful at a different scale: lots of larger units (where your algorithm is likely to provide no solution) rather than a handful. It is probably less useful in terms of survey design, where you are trying to *design* a process to get a result, than it is in queueing theory and related fields, where you are trying to *describe* and simulate a process that is pre-defined.

-- Robert Kern
On Mon, Jan 23, 2017 at 3:34 PM Robert Kern <robert.kern@gmail.com> wrote:
It is what I would have expected too, but we have a concrete example of a user who expected otherwise; where one user speaks up, there are probably more who didn't (some of whom probably have code that's not doing what they think it does). So for the cost of adding a Note, why not help some of them? As for the standardness of the definition: I don't know, have you a reference where it is defined? More natural to me would be to have a list of items with integer multiplicities (as in: "cat" 3 times, "dog" 1 time). I'm hesitant to claim ours is a standard definition unless it's in a textbook somewhere. But I don't insist on my phrasing. Anne
On Mon, Jan 23, 2017 at 9:22 AM, Anne Archibald <peridot.faceted@gmail.com> wrote:
On Mon, Jan 23, 2017 at 3:34 PM Robert Kern <robert.kern@gmail.com> wrote:
I don't object to some Notes, but I would probably phrase it more like: we are providing the standard definition of the jargon term "sampling without replacement" in the case of non-uniform probabilities. To my mind (or more accurately, with my background), "replace=False" obviously picks out the implemented procedure, and I would have been incredibly surprised if it did anything else. If the option were named "unique=True", then I would have needed some more documentation to let me know exactly how it was implemented.

It is what I would have expected too, but we have a concrete example of a user who expected otherwise; where one user speaks up, there are probably more who didn't (some of whom probably have code that's not doing what they think it does). So for the cost of adding a Note, why not help some of them?

That's why I said I'm fine with adding a Note. I'm just suggesting a re-wording so that the cautious language doesn't lead anyone who is familiar with the jargon to think we're doing something ad hoc, while still providing the details for those who aren't so familiar.
Textbook, I'm not so sure, but it is the *only* definition I've ever encountered in the literature: http://epubs.siam.org/doi/abs/10.1137/0209009 http://www.sciencedirect.com/science/article/pii/S002001900500298X -- Robert Kern
On Mon, Jan 23, 2017 at 5:47 PM, Robert Kern <robert.kern@gmail.com> wrote:
Very interesting. This paper (PDF available if you search for its name in Google) explicitly mentions that one of the uses of this algorithm is "multistage sampling", which appears to be exactly the same thing as in the hypothetical Gulliver example I gave in my earlier mail. And yet, I showed in my mail that this algorithm does NOT reproduce the desired frequency of the different sampling units...

Moreover, this paper doesn't explain why you need "without replacement" for this use case (everything seems easier, and the desired probabilities are reproduced, with replacement). In my story I gave a funny excuse why "without replacement" might be warranted, but if you're interested I can tell you a bit about my actual use case, with a more serious reason why I want sampling without replacement.
participants (5)

- alebarde@gmail.com
- Anne Archibald
- josef.pktd@gmail.com
- Nadav Har'El
- Robert Kern