Probability problem from Star Trek: The Next Generation

In the first episode of season 7 of Star Trek: TNG, “Descent, part II”, a certain character (no spoilers from me!) tells another character that a medical experiment has a 60% chance of failing, meaning it will kill the subject. But, this evil character says, since he has three captives to perform the experiment on, “the odds are that at least one of the procedures will be successful.”

Is he right? Is there a >50% chance that at least one of the procedures will be successful? With a 40% chance of success and three tries to get it right, intuition says it's likely that at least one of them will succeed. But because I last watched this episode shortly after my first semester of Statistics, I thought it'd be fun to calculate the exact probability that at least one of the procedures will be successful.

From introductory Statistics, we can see that this is a relatively simple binomial experiment, with \(p\) (the probability of success) \(= .4\) and \(n = 3\). As is often the case when you need to calculate the probability that something will happen at least once, it is easiest to calculate the probability that it won’t happen, and subtract that from \(1\).

So,

\(P(all~three~procedures~fail) = .6^3 = .216 \\
P(at~least~one~procedure~succeeds) = 1 - .216 = .784\).
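If you'd rather let a computer do the arithmetic, here is a minimal Python sketch of the same complement calculation (the variable names are my own):

```python
# A single procedure fails with probability 0.6
p_fail = 0.6

# All three procedures fail only if each (independent) procedure fails
p_all_fail = p_fail ** 3            # ~0.216

# "At least one succeeds" is the complement of "all fail"
print(1 - p_all_fail)               # ~0.784
```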

There are two other ways to compute this probability. Hopefully, they yield the same result!

From the binomial probability formula,

\(b(x; n, p) = {n \choose x} p^x (1 - p)^{n-x}\)

where \(b\) is the probability mass function (pmf) of a binomial experiment, meaning the probability of exactly one outcome (as opposed to the cumulative distribution function, which measures the collective probability of a range of outcomes), \(x\) is the number of successes, \(n\) is the total number of trials, and \(p\) is the probability of success. The notation \({n \choose x}\) is pronounced “n choose x” and means the total number of ways to choose \(x\) of the \(n\) trials. This is a good introduction to combinations (and permutations).
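As a sanity check, the formula above translates directly into a few lines of Python (the function name binom_pmf is my own, not a standard library routine):

```python
from math import comb

def binom_pmf(x: int, n: int, p: float) -> float:
    """P(exactly x successes in n independent trials): C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

print(binom_pmf(0, 3, 0.4))  # ~0.216, the probability that all three procedures fail
```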

First, let’s use the binomial pmf to calculate the probability of zero survivors among the three procedures:

\(b(0; 3, .4) = {3 \choose 0} (.4)^0 (1 - .4)^3 = .216\)

As it turns out in this simple example, the above computation is just \(1\cdot 1\cdot .6^3\), so basically the same as the original high-school-level computation we did first. I’ll go out on a limb and assume that subtracting this from \(1\) will give the same result as it did above.

We can also use that binomial pmf to add up the probability that exactly one procedure succeeds, the probability that exactly two succeed, and the probability that all three succeed. This ignores the reality that the evil experimenter would stop after the first success, but summing all three terms is exactly what "at least one procedure succeeds" means.

\(b(1; 3, .4) + b(2; 3, .4) + b(3; 3, .4) \\
= {3 \choose 1} (.4)^1 (1 - .4)^2 + {3 \choose 2} (.4)^2 (1 - .4)^1 + {3 \choose 3} (.4)^3 (1 - .4)^0 \\
= .432 + .288 + .064 = .784\)
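The same three-term sum is easy to check in Python; this is just a sketch of the arithmetic, written so it stands on its own:

```python
from math import comb

n, p = 3, 0.4

# P(exactly 1) + P(exactly 2) + P(exactly 3) successes out of 3 trials
total = sum(comb(n, x) * p ** x * (1 - p) ** (n - x) for x in (1, 2, 3))
print(total)  # ~0.784
```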

I know of one final way to calculate the probability that at least one procedure will succeed: use the TI-83's binomcdf function. It is located under the DISTR menu, which is the 2nd function of the VARS key. The syntax is

\(binomcdf(n,p,x)\)

and this tells you the cumulative probability of all outcomes in a binomial experiment from \(0\) to \(x\) successes. In this case, we are interested in the cumulative probability from \(x=1\) to \(x=3\), not \(x=0\) to \(x=3\). Therefore, in the TI-83 we can type either

\(binomcdf(3,.4,3) - binomcdf(3,.4,0)\)
or
\(binomcdf(3,.4,3) - binompdf(3,.4,0)\)

Both commands tell us the cumulative probability of zero successes through three successes minus the probability of zero successes, and both give \(.784\).
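If a TI-83 isn't handy, the same two differences can be reproduced with SciPy's binom distribution (this sketch assumes SciPy is installed; note its argument order is \((x, n, p)\) rather than the calculator's \((n, p, x)\)):

```python
from scipy.stats import binom

# Same idea as binomcdf(3,.4,3) - binomcdf(3,.4,0) on the TI-83
print(binom.cdf(3, 3, 0.4) - binom.cdf(0, 3, 0.4))  # ~0.784

# Equivalent, subtracting the single "zero successes" outcome via the pmf
print(binom.cdf(3, 3, 0.4) - binom.pmf(0, 3, 0.4))  # ~0.784
```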

So we can see that our common-sense intuition was right: with a 40% chance of success, the chances are very favorable that at least one of the first three trials will produce a success.

At what point does the probability of at least one success surpass 50%? My guess is two trials. This is easily confirmed by changing \(n\) from \(3\) to \(2\) and calculating the binomial probability:

\(P(getting~at~least~one~success~out~of~the~first~two~trials) \\
= b(2; 2, .4) + b(1; 2, .4) = {2 \choose 2} .4^2 .6^0 + {2 \choose 1} .4^1 .6^1 \\
= .16 + .48 = .64 \\
(= 1 - b(0; 2, .4) = 1 - .36 = .64)\)

Another, more high-school-ish way to verify the probability of succeeding within the first two trials is to realize there are only two ways this could happen: succeed on the first trial, or fail on the first trial and succeed on the second:

\(P(succeed~on~the~first~trial) + P(fail~first~and~then~succeed)\\
= .4 + .6\cdot .4\\
= .64\)
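Both two-trial calculations can be mirrored in a couple of lines of Python (again, just a sketch of the arithmetic):

```python
p = 0.4

# Complement: 1 - P(both of the first two trials fail)
print(1 - (1 - p) ** 2)    # ~0.64

# Sequential: succeed right away, or fail once and then succeed
print(p + (1 - p) * p)     # ~0.64
```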

Another thing our evil experimenter might be interested in is the expected value of the number of captives he will need to achieve success. Expected value is basically a weighted average. This is a good beginner’s summary of expected value. One of the first things that strikes any Statistics/Probability student about expected value is that you should hardly ever actually expect to get the expected value in an experiment, because often the expected value is impossible to achieve. For instance, your experiment only produces integer outcomes, but the expected value, being a (weighted) average, is a decimal. This is the case with many binomial experiments. The number of captives our evil experimenter will perform the procedure on is \(1\), \(2\), or \(3\), but I bet the expected value of this binomial experiment will be between \(1\) and \(2\).

The definition of expected value as a weighted average is usually stated for general discrete random variables, but you can still calculate expected value for binomial distributions. In fact, in this case we can calculate two different expected values.

First, the simple, standard expected value of a binomial distribution: \(E(X) = np\). That is, the expected number of successes from \(n\) trials is \(n\) times the probability of success. Pretty simple, huh? So

\(E(X) = np = 3\cdot .4 = 1.2\)

So if he performed the procedure on all three captives, he should expect \(1.2\) successes. Similarly, the expected number of successes after the first two trials is \(.8\), and the expected number of successes after the first trial is \(.4\).
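A quick simulation sketch (variable names and seed are my own) agrees with \(E(X) = np\):

```python
import random

random.seed(42)          # fixed seed, only for reproducibility of this sketch
runs = 100_000

# For each run, count how many of the 3 procedures (each with a 40% chance) succeed
successes = [sum(random.random() < 0.4 for _ in range(3)) for _ in range(runs)]
print(sum(successes) / runs)   # ~1.2, matching E(X) = np = 3 * 0.4
```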

But that’s not the expected value I originally referred to. I said the experimenter might be interested in the expected number of procedures he’d have to perform to reach one successful procedure. I can’t find a definitive theorem or formula for this kind of expected value in my Statistics textbook or the few places I’ve looked online, but I think it’s this:

\(1 = n(.4) \\
1/.4 = n\\
2.5 = n\)

In other words, since each experiment has a \(.4\) chance of succeeding, how many experiments do you expect to need to reach \(1\) success? What times \(.4\) equals \(1\)? It’s \(2.5\).
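For what it's worth, the number of trials up to and including the first success is what textbooks call a geometric distribution, whose mean is \(1/p\), which matches the \(2.5\) above. A small Python simulation sketch (the function name and seed are my own) agrees:

```python
import random

random.seed(0)           # fixed seed, only for reproducibility of this sketch
p = 0.4

def trials_until_first_success() -> int:
    """Keep performing procedures until one succeeds; return how many were needed."""
    count = 1
    while random.random() >= p:
        count += 1
    return count

runs = 100_000
print(sum(trials_until_first_success() for _ in range(runs)) / runs)  # ~2.5 = 1/p
```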

That’s higher than I expected. That was the number I expected to be between \(1\) and \(2\). It also seems at odds with our result above that the probability of at least one success surpasses 50% after two trials. If the probability of success becomes better than even after two trials, shouldn’t you expect to reach one success in \(\leq 2\) trials? And shouldn’t the expected number of successes after two trials be something greater than \(1\), instead of \(.8\), then? I know both sets of calculations are correct, so this is either one of those counterintuitive results you often get in probability, or I’m framing one of the questions wrong…


One Response to Probability problem from Star Trek: The Next Generation

  1. Shay says:

Simpler to calculate the chance that all experiments fail, which is just the probabilities multiplied. In the first, 60% × 60% × 60% = 21.6%. So the chance that at least one won’t fail is just 100% - 21.6% = 78.4%.