After completing this section, you should be able to do the following.
Use counting techniques to compute probabilities of events.
Understand the basic rules of probability.
Compute probabilities of compound events, both independent and dependent.
Section Preview
Investigate!
Suppose you, like the 17th-century French nobleman Chevalier de Mere, liked gambling on the outcome of rolling fair 6-sided dice (each numbered 1 to 6). Would you bet him that he couldn’t roll at least one 6 in four rolls of a single die? What about betting that he couldn’t roll at least one double-6 in 24 rolls of both dice?
To make these decisions, we should decide
How likely is it that in four rolls of a single die, there will be at least one 6?
How likely is it that in 24 rolls of two dice, there will be at least one double-6?
Since the ratio \(4:6\) is equal to the ratio \(24:36\text{,}\) should the probability of these events be the same? That’s what the Chevalier de Mere thought. Do you?
Here is a python script that can help you get a feel for the questions above. You can switch between the two questions by commenting and uncommenting the appropriate lines (lines that start with a # are comments). See how lucky you are!
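The original interactive script is not reproduced in this text, but a minimal sketch along these lines captures the idea (the variable names here are illustrative, not from the original):

```python
import random

# Bet 1: roll a single die four times -- is there at least one 6?
rolls = [random.randint(1, 6) for _ in range(4)]
print("Four rolls of one die:", rolls)
print("At least one 6?", 6 in rolls)

# Bet 2: roll a pair of dice 24 times -- is there at least one double-6?
# Comment out the lines above and uncomment these to try the second bet.
# rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(24)]
# print("24 rolls of two dice:", rolls)
# print("At least one double-6?", (6, 6) in rolls)
```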
If you know some python, you might want to modify the script to run the experiment 1000 times and see how many of those are “wins”.
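One such modification might look like the sketch below (the helper names and the choice of 1000 trials are just for illustration). The fraction of wins approximates the exact probabilities computed later in this section.

```python
import random

def wins_bet_one():
    """Four rolls of one die: True if at least one 6 appears."""
    return any(random.randint(1, 6) == 6 for _ in range(4))

def wins_bet_two():
    """24 rolls of two dice: True if at least one double-6 appears."""
    return any(random.randint(1, 6) == 6 and random.randint(1, 6) == 6
               for _ in range(24))

trials = 1000
print("Bet 1 win rate:", sum(wins_bet_one() for _ in range(trials)) / trials)
print("Bet 2 win rate:", sum(wins_bet_two() for _ in range(trials)) / trials)
```

You should see the first number hover a little above 0.5 and the second a little below.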
We can get a feel for probability empirically by observing how frequently events occur when an experiment is repeated many times. It often happens, as it did with the Chevalier de Mere, that our intuition about probability is not quite right. Using the counting techniques we have studied, we can explain why our intuition is off and what the true probabilities are.
Most of the questions about counting we have considered in this chapter can also be asked as a question about probability. For example: How many passwords of length 8 can you make using just lower-case letters? What is the probability that randomly selecting 8 lower-case letters will give you your password?
While the subject of probability is vast and complex, the basics of discrete probability are little more than counting. So here we will take a brief look at how our study of counting can help us understand probability.
Suppose you were in a class of 30 students. How likely is it that at least two of the students were born on the same day of the year?
Assume that all days are equally likely and that nobody was born on February 29th. Would you believe the answer is more than 25%? More than 50%? More than 70%??? Let’s find the answer.
1.
First, what should we mean by probability? If you roll a fair six-sided die, what is the probability of rolling a 6?
What is the probability of rolling an even number?
2.
We will define the probability of an event as the number of ways the event can happen divided by the total number of things that can happen.
(a)
Suppose you roll two dice (one red and one green). How many total outcomes are there?
(b)
Of those outcomes, how many have different numbers on the two dice?
Hint.
How many sequences of two different numbers can you make using the numbers 1 to 6?
(c)
Combining the two numbers you found above, what is the probability that two dice will show different numbers?
(d)
What is the probability that you will get three different numbers when rolling three dice? (Assume the dice are different colors).
3.
Now to birthdays. There are 365 days in a year.
(a)
How many possible sequences of 30 birthdays are there?
(b)
How many possible sequences of 30 birthdays contain no repeats?
(c)
What is the probability that 30 people have no repeated birthdays?
(d)
Among the 30 people, either they all have different birthdays or at least two share a birthday. Since this is certain, its probability is 1. So what is the probability that at least two people (out of the 30) share a birthday?
(e)
What is the smallest number of people you would need to have a greater than 90% chance that at least two share a birthday?
Computing Probabilities
Think about how we use the language of probability in our everyday lives. We might say that tossing a coin has a 50% chance of coming up heads. Or that when rolling two dice, having the sum of the dice result in a 7 is more likely than having the sum be a 2. Casinos certainly rely on certain pairs of cards being consistently more likely than others when setting payouts for Blackjack. All of this assumes that there is some randomness to events, and that even in this randomness, there is some consistency to what can happen. We will assume this model of reality.
The things we can assign probabilities to are called random experiments. These can have different possible outcomes. We will call the (finite) set of possible outcomes to a random experiment the sample space (we will usually denote this set as \(S\)). By definition, performing a random experiment will always result in exactly one outcome from the sample space.
Throughout this section, we will always assume the uniform probability distribution, which means that we insist that each outcome in the sample space is equally likely. Then the probability of any particular outcome in the sample space \(S\) is exactly \(\frac{1}{\card{S}}\text{.}\)
Note 3.7.1.
The uniform probability distribution is a common and reasonable assumption to make, but it does preclude us from asking some questions. For example, the outcomes of throwing a dart at a dartboard, rolling weighted dice, or tossing a thumbtack and seeing whether it lands point up are not uniformly distributed. How would we even start to answer questions about these probabilities? We would have to make some assumptions about what the probabilities of the outcomes actually are (perhaps via some repeated experiments).
There are other reasons to study different probability distributions, and this is a major topic of study in a course in probability theory.
Example 3.7.2.
Suppose you flip two fair coins (a penny and a nickel). What is the sample space of possible outcomes? What is the probability of getting two heads?
Solution.
The sample space is the set of all possible outcomes of the experiment, which in this case is the set \(\{HH, HT, TH, TT\}\text{.}\) The probability of getting two heads is then \(\frac{1}{4}\text{.}\) In fact, every outcome has probability \(\frac{1}{4}\) since there are 4 outcomes in the sample space.
Finding probabilities of outcomes really is this easy. Where things get more fun is if we look for the probability of an event: a subset of the sample space. For a particular random experiment, there might be lots of different events we ask about, and they do not need to be mutually exclusive. An event can also be a set containing just a single outcome or might contain no outcomes.
For example, suppose you roll a fair 6-sided die. The sample space contains six outcomes \(\{1,2,3,4,5,6\}\text{.}\) Some events we might care about include rolling an even number (the subset \(\{2,4,6\}\)), rolling a number less than \(3\) (the set \(\{1,2\}\)), or rolling a number less than \(10\) (the subset \(\{1,2,3,4,5,6\}\)). In fact, we now know that there are exactly \(2^6 = 64\) different events we could ask about, since there are \(64\) subsets of the sample space.
What does our intuition suggest about the example events described above? Rolling an even number should be just as likely as rolling an odd number, so we hope that the probability of rolling an even number is \(\frac{1}{2}\text{.}\) Similarly, the probability of rolling a number less than \(3\) should be \(\frac{1}{3}\) since a third of the possible outcomes are less than 3. What about rolling a number less than \(10\text{?}\) Well, this must happen, so it would be \(100\%\text{,}\) which as a fraction is just \(1\text{.}\)
Consistent with our intuition, we define the probability of an event as follows.
Definition 3.7.3.
Suppose a random experiment has sample space \(S\text{.}\) The probability of an event \(E\) is the number of outcomes in \(E\) divided by the number of outcomes in \(S\text{.}\) We write this as \(P(E) = \frac{\card{E}}{\card{S}}\text{.}\)
Example 3.7.4.
Suppose you roll a regular 6-sided die (each side contains a number from 1 to 6). What is the probability that you will roll an even number?
Solution.
The sample space is the set \(\{1,2,3,4,5,6\}\) of possible rolls. The event, call it \(E\) for even, is the set of outcomes \(\{2, 4, 6\}\text{.}\) Thus the probability of \(E\) occurring is
\begin{equation*}
P(E) = \frac{\card{E}}{\card{S}} = \frac{3}{6} = \frac{1}{2}\text{.}
\end{equation*}
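As a quick check, we can let Python do the counting for us; this is just a brute-force sketch of the definition, nothing specific to dice.

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
event = {outcome for outcome in sample_space if outcome % 2 == 0}  # even rolls

# Probability = (outcomes in the event) / (outcomes in the sample space).
print(Fraction(len(event), len(sample_space)))  # 1/2
```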
We have spent a lot of effort learning how to count the size of sets. We can then use this to compute probabilities by counting the size of the sample space (set) and the size of the event (set).
Example 3.7.5.
If you draw 5 cards from a regular deck of 52 cards, what is the probability that you will draw 4-of-a-kind?
Solution.
First, let’s count the sample space, which will consist of all 5-card hands. The order of the cards in a hand is not important, so we will just count 5-element subsets of the 52 cards. The sample space therefore contains \(\binom{52}{5}\) elements. (This number is just under 2.6 million: \(2,598,960\) to be exact.)
Now, how many of those will be 4-of-a-kind? One way we could count this would be to first select which of the 13 values will be the 4-of-a-kind, which can be done in \(\binom{13}{1} = 13\) ways. What about the other card in the hand? Well, there are 48 other cards it could be, so the number of 4-of-a-kind hands is \(13\cdot 48 = 624\text{.}\)
This makes the probability of getting 4-of-a-kind,
\begin{equation*}
P(\text{4-of-a-kind}) = \frac{624}{2598960} \approx 0.00024\text{.}
\end{equation*}
An important subtlety: Whenever counting the size of the sample space and the event, we must make sure that we are really counting the number of elements of the sample space that are in the event. In particular, if we count subsets of cards in the sample space (using a combination instead of using a permutation to count sequences of cards) then we must count the number of subsets of cards in the event.
Interestingly, we can find the probability of getting 4-of-a-kind using permutations too: The number of 5-card sequences is \(P(52,5) = 311,875,200\text{.}\) Finding the number of 4-of-a-kind sequences is a little more complicated. There are 13 possible values for the 4-of-a-kind, and 48 remaining cards for the fifth card. But those five cards can be arranged in \(5!\) different ways. So the number of 4-of-a-kind sequences is \(13\cdot 48\cdot 5!\text{.}\) This gives,
\begin{equation*}
P(\text{4-of-a-kind}) = \frac{13\cdot 48\cdot 5!}{P(52,5)} = \frac{74880}{311875200} \approx 0.00024\text{.}
\end{equation*}
Is this close to the same answer we had before? It is exactly the same (we can verify this by noticing the extra \(5!\) in both the numerator and denominator).
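To double-check the arithmetic, here is a short sketch that computes the probability both ways (it assumes Python 3.8 or later for `math.comb` and `math.perm`).

```python
from fractions import Fraction
from math import comb, factorial, perm

# Counting unordered hands.
hands = comb(52, 5)                      # 2,598,960 five-card hands
four_kind_hands = 13 * 48                # pick the rank, then the fifth card

# Counting ordered sequences.
sequences = perm(52, 5)                  # 311,875,200 five-card sequences
four_kind_sequences = 13 * 48 * factorial(5)

print(Fraction(four_kind_hands, hands))          # 1/4165
print(Fraction(four_kind_sequences, sequences))  # 1/4165, the same probability
```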
While picking between combinations and permutations (as long as you pick the same for both the sample space and the event) will give you the same probability, this is not always true, as you are asked to explore in some of the additional exercises.
Probability Rules
Here are a few basic probability facts that follow easily from our definition of probability and understanding of counting. While we are still under the assumption that the outcomes in the sample space are equally likely (the uniform probability distribution), these rules will hold for all probability distributions.
First, we often are interested in the probability that an event does not occur. We call this the complement of the event. Remember, events are subsets of the sample space, and not being “in” the event means you are in the complement of that subset. Using the same notation we have for sets, the complement of an event \(E\) will be written \(\bar{E}\text{.}\) Here is the relationship between the probability of an event and its complement.
Theorem 3.7.6.
The probability of the complement of an event \(E\) is
\begin{equation*}
P(\bar{E}) = 1 - P(E)\text{.}
\end{equation*}
Remember that \(P(E) = \frac{\card{E}}{\card{S}}\text{,}\) the number of outcomes in the event \(E\) divided by the total number of outcomes. But how many outcomes are not in \(E\text{?}\) All the others. That is,
\begin{equation*}
P(\bar{E}) = \frac{\card{S} - \card{E}}{\card{S}} = 1 - \frac{\card{E}}{\card{S}} = 1 - P(E)\text{.}
\end{equation*}
Example 3.7.7.
Suppose you flip a fair coin 10 times. What is the probability that you will get at least one heads?
Solution.
There are lots of ways you can get at least one head, but only one way to get no heads (that is, to get all tails). So it makes sense to compute the requested probability by way of its complement, whose probability is easier to compute.
The sample space here is the set of all 10-toss sequences. How many are there? For each term in the sequence, it could be a head (H) or tail (T), so there are \(2^{10} = 1024\) possible sequences.
We want to find the probability of getting at least one head. Let’s think of this just as a counting question: How many 10-toss sequences have at least one head? All \(1024\) of them, except the one all tails sequence. So there are \(1023\) sequences with at least one head. Thus the probability of getting at least one head is \(\frac{1023}{1024}\text{.}\)
Wait, did we use Theorem 3.7.6? Not explicitly, but essentially we have. Using the theorem, we would have said that the probability of getting at least one head is
\begin{equation*}
P(\text{at least one head}) = 1 - P(\text{no heads}) = 1 - \frac{1}{1024} = \frac{1023}{1024}\text{.}
\end{equation*}
So whether we do the subtraction to calculate the size of the complement, or use the complement formula and subtract fractions, we get the same answer.
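If you want to see this by brute force, a small sketch can enumerate all \(2^{10}\) sequences and count the ones containing a head.

```python
from fractions import Fraction
from itertools import product

sequences = list(product("HT", repeat=10))         # all 1024 toss sequences
with_a_head = [s for s in sequences if "H" in s]   # everything but the all-tails sequence

print(Fraction(len(with_a_head), len(sequences)))  # 1023/1024
```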
Complementary probabilities are very useful when answering historical questions about dice.
Example 3.7.8.
What is the probability that you will roll at least one 6 in four rolls of a fair 6-sided die?
Is this the same as the probability that you will roll at least one double 6 in 24 rolls of two dice?
Solution.
The complementary event is rolling a die four times and never getting a 6. Of the \(6^4\) possible rolls, there are \(5^4\) that contain no 6. So the probability of getting at least one 6 in four rolls is
\begin{equation*}
P(\text{at least one 6}) = 1 - P(\text{no 6}) = 1 - \frac{5^4}{6^4} \approx 0.5177\text{.}
\end{equation*}
For the double 6 in 24 rolls variant, we use the complementary event as well: what is the probability of not getting double 6s? That means on every roll you get one of the 35 other pairs.
\begin{equation*}
P(\text{at least one double 6}) = 1 - P(\text{no double 6}) = 1 - \frac{35^{24}}{36^{24}} \approx 0.4914\text{.}
\end{equation*}
Indeed, the Chevalier de Mere noticed that when playing the game with two dice, he tended to lose money in the long run. Who did he turn to to ask for help? Blaise Pascal, of course!
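Here is a quick check of the arithmetic, using the complement rule; it matches the empirical estimates from the simulation at the start of the section.

```python
print(1 - (5 / 6) ** 4)      # about 0.5177: at least one 6 in four rolls
print(1 - (35 / 36) ** 24)   # about 0.4914: at least one double-6 in 24 rolls
```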
Another way to think about complementary probabilities is to say that
\begin{equation*}
P(E) + P(\bar{E}) = 1\text{.}
\end{equation*}
A probability of 1 means the event is certain, so perhaps we should think of this as giving the probability that event \(E\) either happened or didn’t happen. This is exactly what we want to mean by adding probabilities.
Theorem 3.7.9.
Suppose \(A\) and \(B\) are two disjoint events. Then the probability of either \(A\) or \(B\) happening is,
\begin{equation*}
P(A\cup B) = P(A) + P(B)\text{.}
\end{equation*}
If \(A\) and \(B\) are not disjoint, then the probability of \(A\) or \(B\) occurring is,
\begin{equation*}
P(A \cup B) = P(A) + P(B) - P(A \cap B)\text{.}
\end{equation*}
The proof of this fact is one of the exercises in this section. However, it should become clear how this works with an example.
Example 3.7.10.
Suppose you roll a fair 6-sided die. What is the probability of rolling a number that is even or less than 3?
Solution.
We don’t need a theorem to answer this. The sample space is \(\{1,2,3,4,5,6\}\) and the event is the subset \(E = \{1,2,4,6\}\text{.}\) So \(P(E) = \frac{4}{6}\text{.}\)
To see where the \(4\) comes from, let \(A\) be the event of rolling an even number (so \(A = \{2,4,6\}\)) and \(B\) be the event of rolling a number less than 3 (so \(B = \{1,2\}\)). Notice that the notation \(P(E) = P(A \cup B)\) makes sense, since as sets, we really do have \(E = A \cup B\text{.}\)
If we go back to the definition of the probability of an event, we have,
\begin{equation*}
P(A \cup B) = \frac{\card{A \cup B}}{\card{S}}\text{.}
\end{equation*}
We must find the size of the set \(A \cup B\text{.}\) But we know how to find the size of the union of non-disjoint sets: use PIE! So \(\card{A \cup B} = 3 + 2 - 1 = 4\text{.}\)
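Since events are just sets, Python's set operations mirror this computation directly; here is a small sketch for the die example.

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}          # rolling an even number
B = {1, 2}             # rolling a number less than 3

def P(E):
    return Fraction(len(E), len(S))

print(P(A | B))                # 2/3, i.e. 4/6, directly from the union
print(P(A) + P(B) - P(A & B))  # 2/3 again, via inclusion-exclusion
```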
As the example demonstrates, we have basically translated the sum principle into the language of probability. Can we do the same for the product principle?
We use the product principle to find the number of ways two events can both happen, one after the other. Many probability questions ask for the probability of such compound events. Let’s consider an example to see what is going on.
Example 3.7.11.
What is the probability of getting an even number when rolling a 6-sided die and a heads when flipping a coin?
Solution.
First we will find the probability directly from the definition. The sample space consists of all pairs of outcomes from the die and the coin, so \(S = \{(1,H), (1,T), (2,H), (2,T), (3,H), (3,T), (4,H), (4,T), (5,H), (5,T), (6,H), (6,T)\}\text{.}\) Without listing these, we could have calculated the size of the sample space using the product principle: \(\card{S} = 6\cdot 2 = 12\text{.}\) The event we are interested in is the set of outcomes \(E = \{(2,H), (4,H), (6,H)\}\text{.}\) Obviously that is size \(3\text{,}\) which we could have also found as \(3 \cdot 1\text{.}\) So the probability of this event is \(P(E) = \frac{3}{12} = \frac{1}{4}\text{.}\)
Now consider the two events separately. Say \(A\) is rolling an even number, and \(B\) is flipping the coin and getting heads. The probability of the first event is \(P(A) = \frac{3}{6}\text{.}\) The probability of the second event is \(P(B) = \frac{1}{2}\text{.}\) It appears that the correct way to combine these probabilities is to multiply them:
\begin{equation*}
P(E) = P(A \text{ and } B) = P(A)P(B) = \frac{3}{6}\frac{1}{2} = \frac{1}{4}\text{.}
\end{equation*}
How convenient that multiplying fractions is done by multiplying the numerators and denominators separately, and this is the same as applying the product principle to the numerator and denominator of the fraction.
The reason the above example worked out was that the events were independent. Intuitively, this means that the outcome of the first event has no influence on the outcome of the second event. In fact, we take this product rule as the definition of independence.
Definition 3.7.12.
Given two events \(A\) and \(B\text{,}\) we say that they are independent provided the probability of both events happening is the product of the probabilities of each event happening:
\begin{equation*}
P(A \cap B) = P(A)P(B)\text{.}
\end{equation*}
Notice that in the definition we describe the event that both \(A\) and \(B\) happen as the intersection \(A \cap B\text{.}\) Since events are sets, it makes sense to take an intersection. The intersection of two sets contains all the elements that are in both sets, which is exactly what we want here.
This shines a light on a key difference between this definition and the product principle. We use the product principle to construct a new set of outcomes by combining the outcomes in two sets. This creates new sorts of outcomes. For example, the product principle would combine the sets
\begin{equation*}
\{1,2,3\} \text{ and } \{H,T\} \text{ into } \{1H, 1T, 2H, 2T, 3H, 3T\}\text{.}
\end{equation*}
This is not the intersection of two sets (it is actually the Cartesian product: \(A \times B = \{(a,b) \st a \in A; b \in B\}\)). The definition of independence involves probabilities relative to a fixed set of outcomes. So the elements in \(A\) and \(B\) in the definition of independence are already sequences like we would have created using the product principle.
If we are more careful in Example 3.7.11 where we rolled a die and flipped a coin, we should describe the event \(A\) of first rolling an even number as the set
\begin{equation*}
A = \{(2,H), (2,T), (4,H), (4,T), (6,H), (6,T)\}
\end{equation*}
and the event \(B\) of then flipping heads as the set
\begin{equation*}
B = \{(1,H), (2,H), (3,H), (4,H), (5,H), (6,H)\}\text{.}
\end{equation*}
We then have \(P(A) = \frac{6}{12}\) and \(P(B) = \frac{6}{12}\text{,}\) with product \(P(A)P(B) = \frac{6}{12}\frac{6}{12} = \frac{36}{144} = \frac{1}{4}\text{.}\) So our solution in the example was correct but misleading. The events \(A\) and \(B\) are indeed independent since
\begin{equation*}
A \cap B = \{(2,H), (4,H), (6,H)\}
\end{equation*}
so \(P(A \cap B) = \frac{3}{12} = \frac{1}{4}\text{.}\)
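Here is a sketch that builds the full 12-outcome sample space and checks the independence condition directly, using the descriptions of \(A\) and \(B\) given above.

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), "HT"))         # all 12 (die, coin) outcomes
A = {(d, c) for (d, c) in S if d % 2 == 0}  # even roll, either coin face
B = {(d, c) for (d, c) in S if c == "H"}    # heads, any roll

def P(E):
    return Fraction(len(E), len(S))

print(P(A & B))     # 1/4
print(P(A) * P(B))  # also 1/4, so A and B are independent
```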
Example 3.7.13.
Suppose you roll a 12-sided die (numbered 1 to 12). Consider the events:
\(A\) is the event of rolling a number that is a multiple of 3.
\(B\) is the event of rolling a number that is a multiple of 4.
\(C\) is the event of rolling a number less than 7.
Are the events \(A\) and \(B\) independent? What about \(A\) and \(C\text{?}\) What about \(B\) and \(C\text{?}\)
Solution.
The sample space is the set \(\{1,2,3,4,5,6,7,8,9,10,11,12\}\text{.}\) The event \(A\) is the set \(\{3,6,9,12\}\text{,}\) \(B\) is the set \(\{4,8,12\}\text{,}\) and \(C\) is the set \(\{1,2,3,4,5,6\}\text{.}\) Thus the probabilities for each of these events are
\begin{equation*}
P(A) = \frac{4}{12} = \frac{1}{3}, \qquad P(B) = \frac{3}{12} = \frac{1}{4}, \qquad P(C) = \frac{6}{12} = \frac{1}{2}\text{.}
\end{equation*}
To decide whether events \(A\) and \(B\) are independent, we find \(P(A \cap B)\text{.}\) The probability that the number rolled is both a multiple of 3 and a multiple of 4 is \(\frac{1}{12}\) (the only element of the intersection is 12). We compare this to \(P(A)P(B) = \frac{1}{3}\frac{1}{4} = \frac{1}{12}\text{.}\) Since these are equal, the events are independent.
Events \(B\) and \(C\) are not independent though. Since \(B \cap C = \{4\}\text{,}\) we have \(P(B \cap C) = \frac{1}{12}\text{.}\) But \(P(B)P(C) = \frac{1}{4}\frac{1}{2} = \frac{1}{8}\text{.}\) Since these are not equal, the events are not independent. This makes sense: knowing the roll is less than 7 makes a multiple of 4 less likely, since only one of the six numbers less than 7 is a multiple of 4, compared with three of the twelve numbers overall.
Finally, \(A\) and \(C\) are independent: \(P(A \cap C) = \frac{2}{12} = \frac{1}{6}\) and \(P(A)P(C) = \frac{1}{3}\frac{1}{2} = \frac{1}{6}\text{.}\)
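The same check can be automated; the sketch below tests each pair of events from this example against the definition of independence.

```python
from fractions import Fraction

S = set(range(1, 13))             # rolls of a 12-sided die
A = {n for n in S if n % 3 == 0}  # multiples of 3
B = {n for n in S if n % 4 == 0}  # multiples of 4
C = {n for n in S if n < 7}       # less than 7

def P(E):
    return Fraction(len(E), len(S))

for name, X, Y in [("A, B", A, B), ("A, C", A, C), ("B, C", B, C)]:
    print(name, "independent?", P(X & Y) == P(X) * P(Y))
```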
When events are not independent, we get a new interesting question we can ask: What is the probability of one event given that another event has occurred? This is called ...
Conditional Probability
The famous probability problem, known as the Monty Hall problem, presents the following conundrum. You are on the game show Let’s Make a Deal and will win whatever is behind one of three doors you decide to open. Behind one door is a car; behind the other two are goats. You pick a door, but before opening it, the host (Monty Hall) reveals one of the other doors that has a goat behind it. You then have the opportunity to switch doors. Should you switch? What is the probability of getting the car if you do?
You might be tempted to say that the probability of getting the car when you switch is \(\frac{1}{2}\text{.}\) After all, there are two doors left, and the car is behind one of them. However, we must ask what the probability of getting the car is given that Monty has revealed a goat behind another, unpicked door.
Definition 3.7.14.
Given two events \(A\) and \(B\text{,}\) the conditional probability of \(A\) given \(B\) is,
\begin{equation*}
P(A | B) = \frac{P(A \cap B)}{P(B)}\text{.}
\end{equation*}
Does this definition agree with our intuition for what conditional probability should mean? Let’s think about the sample space. We want to know the chances of \(A\) occurring under the assumption that \(B\) has already occurred. In other words, we only care about the elements of the sample space that belong to \(B\text{.}\)
If \(B\) becomes the sample space, then the only outcomes from \(A\) that can possibly occur are the outcomes that are in both \(A\) and \(B\text{.}\) So perhaps the definition of conditional probability really should be,
\begin{equation*}
P(A | B) = \frac{\card{A \cap B}}{\card{B}}\text{.}
\end{equation*}
Unfortunately, I’m not in charge of probability definitions. It turns out that the standard definition is just as good though. This is because,
\begin{equation*}
\frac{P(A \cap B)}{P(B)} = \frac{\card{A \cap B}/\card{S}}{\card{B}/\card{S}} = \frac{\card{A \cap B}}{\card{B}}\text{.}
\end{equation*}
Example 3.7.15.
Suppose you roll two 6-sided dice with your eyes closed. Your friend says, “Hey look, at least one of your dice is a 4.” What is the probability that you rolled a sum of 7?
Solution.
First note that the probability of rolling a sum of 7 is \(\frac{6}{36} = \frac{1}{6}\text{,}\) since of the 36 pairs of numbers that can appear, there are six pairs that sum to 7: \(\{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}\text{.}\) However, we are now in a situation where at least one die is a 4. This limits the sample space to the 11 pairs that contain a 4 (we can count this using PIE as \(6 + 6 - 1\)). Of these, only two sum to 7: \(\{(3,4), (4,3)\}\text{.}\) So the probability of rolling a sum of 7 given that one die is a 4 is \(\frac{2}{11}\text{.}\)
If we use the definition of conditional probability, we would compute this slightly differently but arrive at the same answer. We have events \(A\) (the sum is 7) and \(B\) (at least one die is a 4). Then \(P(B) = \frac{11}{36}\) and \(P(A \cap B) = \frac{2}{36}\text{.}\) So \(P(A|B) = \frac{2}{11}\text{.}\)
Notice that the probability of rolling a sum of 7 given that the red die is a 4 (say they are different colors) will be different! That would be \(\frac{1}{6}\text{,}\) since we are really just asking for the probability that the other die is a 3.
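A brute-force sketch over the 36 ordered pairs confirms both conditional probabilities (the first coordinate plays the role of the red die).

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # 36 ordered (red, green) pairs

has_a_four = [r for r in rolls if 4 in r]     # the 11 pairs containing a 4
print(Fraction(sum(sum(r) == 7 for r in has_a_four), len(has_a_four)))    # 2/11

red_is_four = [r for r in rolls if r[0] == 4]  # the 6 pairs where the red die is 4
print(Fraction(sum(sum(r) == 7 for r in red_is_four), len(red_is_four)))  # 1/6
```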
Example 3.7.16.
Suppose you draw two cards from a standard deck of 52 cards. What is the probability that the second card is a face card given that the first card is red?
What is the probability that the first card is red given that the second card is a face card?
Solution.
We have a sample space consisting of the \(52\cdot 51\) sequences of two cards. Event \(A\) will be those pairs that have a red card as the first in the sequence. Event \(B\) will be those pairs that have a face card second in the sequence.
We are looking for both \(P(A | B)\) and \(P(B | A)\text{,}\) so we will need to find \(P(A)\text{,}\) \(P(B)\text{,}\) and \(P(A \cap B)\text{.}\) There are \(26 \cdot 51\) pairs in \(A\) (select one of the 26 red cards, and then any of the remaining 51 cards), so
\begin{equation*}
P(A) = \frac{26 \cdot 51}{52 \cdot 51} = \frac{1}{2}\text{.}
\end{equation*}
Similarly, there are \(12 \cdot 51\) pairs in \(B\) (select one of the 12 face cards for the second position, and any of the remaining 51 cards for the first), so
\begin{equation*}
P(B) = \frac{12 \cdot 51}{52 \cdot 51} = \frac{12}{52} = \frac{3}{13}\text{.}
\end{equation*}
Finding the size of the intersection is a little more challenging (we did so in the subsection Combining Principles). There are \(20\cdot 12\) pairs that start with a red, non-face card and end with a face card, and another \(6 \cdot 11\) pairs that start with a red face card and end with a face card. So
\begin{equation*}
P(A \cap B) = \frac{20\cdot 12 + 6\cdot 11}{52 \cdot 51} = \frac{306}{2652} = \frac{3}{26}\text{.}
\end{equation*}
This gives
\begin{equation*}
P(A | B) = \frac{P(A \cap B)}{P(B)} = \frac{3/26}{3/13} = \frac{1}{2} \qquad \text{and} \qquad P(B | A) = \frac{P(A \cap B)}{P(A)} = \frac{3/26}{1/2} = \frac{3}{13}\text{.}
\end{equation*}
Wait a second! What? The probability that the first card is red given that the second card is a face card is the same as the probability that the first card is red?? It seems that \(P(A | B) = P(A)\text{,}\) and that \(P(B | A) = P(B)\text{.}\) What could that mean?
Look what happens when you clear the denominator in the definition of conditional probability
\begin{equation*}
P(A | B) = \frac{P(A \cap B)}{P(B)}
\end{equation*}
becomes
\begin{equation*}
P(A \cap B) = P(B)P(A | B)\text{.}
\end{equation*}
This looks almost like the definition of events being independent, except that instead of \(P(A)\) in the product we now have \(P(A | B)\text{.}\) But what does \(P(A | B)\) even mean if \(A\) and \(B\) are independent? If the events are independent, then it should be no more or less likely that \(A\) occurs given that \(B\) has occurred. So we should have \(P(A | B) = P(A)\text{.}\) This is exactly what we saw in the last example.
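As a sanity check on the card example, the sketch below enumerates all \(52 \cdot 51\) ordered two-card draws and verifies that \(P(A | B) = P(A)\) and \(P(B | A) = P(B)\text{;}\) the rank and suit encodings are just one convenient choice.

```python
from fractions import Fraction
from itertools import permutations

deck = [(rank, suit) for rank in range(1, 14)        # 11, 12, 13 stand for J, Q, K
        for suit in ("hearts", "diamonds", "clubs", "spades")]
pairs = list(permutations(deck, 2))                  # 52 * 51 ordered draws

def P(event):
    return Fraction(len(event), len(pairs))

A = [p for p in pairs if p[0][1] in ("hearts", "diamonds")]  # first card is red
B = [p for p in pairs if p[1][0] >= 11]                      # second card is a face card
A_and_B = [p for p in pairs
           if p[0][1] in ("hearts", "diamonds") and p[1][0] >= 11]

print(P(A), P(B), P(A_and_B))      # 1/2, 3/13, 3/26
print(P(A_and_B) / P(B) == P(A))   # True: P(A | B) = P(A)
print(P(A_and_B) / P(A) == P(B))   # True: P(B | A) = P(B)
```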
Reading Questions
1.
Which of the following are true about the equation \(P(A \cup B) = P(A) + P(B)\text{?}\)
This is true as long as the events \(A\) and \(B\) are disjoint.
This is true as long as the events \(A\) and \(B\) are independent.
This is always true.
This is never true.
2.
Which of the following relationships hold for any two events \(A\) and \(B\text{?}\)
\(P(A\cap B) = P(A)P(B)\text{.}\)
\(P(A \cap B) = P(A)P(B|A)\)
\(P(A \cap B) = P(B)P(A|B)\)
\(P(A \cap B) = P(A)P(A|B)\)
3.
What questions do you have after reading this section? Ask at least one question about the material that you are curious about.
Practice Problems
1.
You flip a fair coin three times and record whether it lands heads (H) or tails (T).
(a)
List all the elements in the sample space. For example, one outcome is HHT.
(b)
Suppose you bet your friend that you would get more heads than tails. List all elements in this event.
(c)
What is the probability of getting more heads than tails?
(d)
What is the probability of getting exactly two heads?
What is the probability of getting exactly three heads?
2.
Suppose you flip a fair coin 18 times.
(a)
What is the size of the sample space?
(b)
What is the size of the event that you get exactly 8 heads?
So then what is the probability that you get exactly 8 heads?
(c)
What is the size of the event that you do NOT get exactly 8 heads?
What is the probability that you do NOT get exactly 8 heads? Use the definition of probability and the previous answer.
What probability do you get for this same event if you use the fact that the probability of an event is 1 minus the probability of the complement of the event?
3.
Suppose you flip a fair coin 19 times.
(a)
What is the size of the sample space?
(b)
What is the size of the event that you get exactly 3 heads?
What is the size of the event that you get exactly 15 heads?
What is the size of the event that you get exactly 3 heads OR exactly 15 heads?
(c)
What is the sum of the probabilities of getting exactly 3 heads and getting exactly 15 heads?
What is the probability of getting exactly 3 heads OR getting exactly 15 heads? Compute this by finding the size of the event, divided by the size of the sample space.
4.
You have a bag of special math-themed M&M’s. The bag promises that inside there are 1 blue, 4 red, 2 orange, 1 green, 1 brown, and 5 yellow M&M’s.
Find the probabilities of the following events.
(a)
What is the probability that if you pick a single M&M, it is blue or yellow?
(b)
What is the probability that if you pick two M&M’s at the same time, you will get a blue and yellow?
(c)
What is the probability that if you pick two M&M’s one at a time, you will get a blue first and a yellow second?
5.
In a standard deck of 52 cards, 26 cards are red and 26 cards are black. Thus the probability of drawing a red card is \(0.5\text{.}\)
(a)
What is the probability that when flipping a coin twice, you get tails both times?
(b)
What is the probability that if you are dealt two cards from a standard deck, both cards are red?
6.
Suppose you take just eight playing cards, four red and four black. You also have a fair coin that you can flip as many times as you want.
(a)
Compare the probability of getting tails when flipping a coin once to the probability of drawing a single red card.
7.
Suppose you have three dice: a 4-sided die, a 6-sided die, and an 8-sided die. Each die is fair and numbered from 1 to the number of sides it has. You roll all three dice.
(a)
What is the probability that the sum of the dice is 3?
(b)
What is the probability that the sum of the dice is 6?
Hint.
You could list out all the ways you can get a 6, or use sticks and stones.
(c)
What is the probability all three dice have the same number when rolled?
(d)
What is the probability all three dice have the same number when rolled given that the sum of the dice is 6?
8.
You roll 3 fair 6-sided dice. What is the probability that all three dice show different numbers?
What is the probability that all three dice show the same number?
Is it possible for the dice to show neither all the same nor all different numbers? If not, the probability of this happening would be 0. What is the probability?
Additional Exercises
1.
When playing 5-card poker, a full house is a hand that contains three cards of one rank and two cards of another rank. For example, you could have three 7s and two 4s.
Find the probability of being dealt a full house in two different ways:
Assume that all five cards are dealt at once, so that the sample space has size \(\binom{52}{5}\text{.}\)
Assume that the cards are dealt one at a time, so that the sample space has size \(P(52,5)\text{.}\)
Are the two answers the same? Why does this make sense?
2.
A random number generator selects single-digit numbers (0 through 9) with equal probability. Suppose the generator produces five numbers.
What is the probability that the five numbers will all be different? Answer this question in two ways:
Assume the numbers come out of the generator in a sequence, so that the sample space has size \(10^5\text{.}\)
Assume the numbers come out as a multiset, or equivalently, that the numbers must appear in non-decreasing order. You will want to use sticks and stones to count the size of the sample space.
Are the two answers the same? Why does this make sense?
Each of 10 friends has a deck of cards that they shuffle thoroughly. Each friend draws a card from their deck. What is the probability that at least one pair of friends draw a matching card?
Hint.
Use complementary probabilities. And don’t be surprised if your answer is larger than you would have expected.
5.
How many people do you need to have in a room to have a 50% chance that at least two people share the same birthday (day of the year)? Assume that all birthdays are equally likely, and that nobody is born on Leap Day (February 29th).
6.
At your 20th high school reunion, you meet an old friend you hadn’t heard from in years. You talk about pets, specifically cats and dogs. She tells you that she has two pets, and that at least one of them is a cat. What is the probability that she has two cats? (Assume that having a cat or a dog is equally likely.)
7.
Another old friend overhears your pet conversation and says that he also has two pets, and that the one he has had the longest is a cat. What is the probability that he has two cats? And why is this answer different from the previous question?
8.
You are playing a shell game with three cups. Under one cup are two green balls, under another cup are two red balls, and under the third cup are one green and one red ball. You close your eyes, and your friend rearranges the cups. You then open your eyes and pick a cup at random. You see that it contains a green ball. What is the probability that the other ball under that cup is also green? Explain your answer in terms of conditional probability.