Guide to Chapter 6
Copyright © 2008–2013 by Stan Brown, Oak Road Systems
This is your guide to what’s important in the chapter, with comments on some things that the chapter leaves out or doesn’t explain well. Page numbers refer to Sullivan, Michael, Fundamentals of Statistics 3/e (Pearson Prentice Hall, 2011), which is equivalent to the “second custom edition” for TC3.
Always check Corrections to Sullivan’s Fundamentals of Statistics, 3rd Edition for known mistakes in the textbook.
Overview: In Chapter 5 we looked at probabilities of specific outcomes. In Chapters 6 and 7 we’ll look at the distribution of probability for all possible outcomes.
292 “Random variable” is one of the recurring concepts of the course. For example, the mean of a random sample is a random variable because taking the sample is a probability experiment. We’ll look at that particular type of random variable in Chapter 8.
292 notation: X is the variable, x is a particular value (data point)
Distinguish discrete and continuous random variables: only certain points on the number line versus all values (perhaps limited to a range)
This is nothing new: you’ve already learned the difference between discrete and continuous in Chapter 1.
293 discrete probability distribution (DPD) = probability model from Chapter 5 pg 225, and the rules are the same.
294–5 probability histogram = rel. freq. histogram. Bars are labeled at the middle because they are ungrouped discrete values.
probability represented by area
optional: Use MATH200A part 1 to help make the histogram.
296 mean of DPD need not be a possible value of X. Note symbol: μ or μx, not x̅.
1-VarStats L1,L2 (pg 303)
rather than the formula. Check that the calculator reports n=1: since L2 holds the probabilities, they must add to 1, and seeing n=1 confirms you entered them correctly.
This is just the weighted mean we saw in Chapter 3.
Interpret the mean value as the average outcome of a zillion trials.
297–8 The mean value of a probability distribution is sometimes called expected value. Know both terms. You can think of expected value as the average outcome if you tried the experiment many times. For instance, if the expected value of a $1 slot machine is −$0.08, then in the long run you expect to lose an average of 8 cents for every dollar bet.
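If you want to double-check an expected-value computation, the weighted mean is a one-line sum. Here's a minimal Python sketch; the slot machine's net outcomes and probabilities are invented for illustration (chosen so the expected value comes out to −$0.08, matching the figure above):

```python
# Expected value of a discrete probability distribution: sum of x*P(x).
# The outcomes and probabilities below are hypothetical, chosen so that
# the expected value works out to -$0.08 per $1 bet.
dist = {-1: 0.57, 0: 0.21, 1: 0.12, 3: 0.09, 10: 0.01}  # net win : probability

assert abs(sum(dist.values()) - 1) < 1e-9  # probabilities must total 1

expected_value = sum(x * p for x, p in dist.items())
print(round(expected_value, 2))  # -0.08: lose 8 cents per dollar, on average
```

Notice this is exactly the weighted mean from Chapter 3, with probabilities as the weights.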
299 s.d. of a DPD — compare to s.d. of a relative frequency distribution.
note: σ not s
1-VarStats L1,L2 (pg 303)
interpretation (not in book): variability. We can’t be too precise in interpretation at this point, but consider σ in relation to the size of μ.
Example: should you park in a lot or on the street? If you park in a lot, it’s $10 for less than an hour (p = 25%) and $14 for more than an hour (p = 75%). If you park on the street, you might receive a simple $30 parking ticket (p = 20%), or a $100 citation for obstruction of traffic (p = 5%), but of course you might get neither (p = 75%). Which should you do?
See answer below.
(adapted from John Allen Paulos, A Mathematician Plays the Stock Market)
300 try review #1–6
323 Can you see the flaw in the book’s reasoning in “Should we convict?” If not, please read the note for this page in Corrections to Sullivan’s Fundamentals of Statistics, 3rd Edition.
Answer to parking problem:
Parking in a lot has μ = $13, σ = $1.73. Parking on the street has μ = $11, σ = $23.64. Street parking represents a slightly lower expected value (average cost), but very much greater uncertainty. Now, do you feel lucky?
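The parking figures can be double-checked in a few lines of Python; this is just the DPD mean and s.d. formulas, not any calculator procedure:

```python
import math

def dpd_mean_sd(dist):
    """Mean and standard deviation of a discrete probability distribution,
    given as a dict mapping value x -> P(x)."""
    mu = sum(x * p for x, p in dist.items())
    var = sum(p * (x - mu) ** 2 for x, p in dist.items())
    return mu, math.sqrt(var)

lot    = {10: 0.25, 14: 0.75}
street = {0: 0.75, 30: 0.20, 100: 0.05}

print(dpd_mean_sd(lot))     # about (13.0, 1.73)
print(dpd_mean_sd(street))  # about (11.0, 23.64)
```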
304 Know the criteria for binomial experiment. (Criteria 2 and 4 are the same.)
305–6 success and failure don’t mean good and bad. They mean “the thing you’re counting” and “the other thing”.
306–8 This section explains why the formulas are what they are. Skim-read it but don’t obsess over it because you will use MATH200A part 3 for all calculations. (Binomial Probability Distribution on TI-83/84 gives procedure if you don’t have the program.)
Try Example 3 using calculator.
309 There’s no need to use tables. But try Example 4 using the program and verify that you get about the same answers. (Answers from table lookups are rounded, and your calculator is more accurate.)
312 You need the formulas on this page.
312–3 If you wish, use MATH200A part 1 to display a binomial probability histogram. However, constructing binomial probability histograms is not as important as reading them.
313 Figure 11 is a 10-question true/false where you know nothing about the subject and you answer by pure guessing. What is your chance of passing (60% or better)? Implications?
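If you'd like to check your answer without tables or the program, the binomial formula is easy to evaluate directly. A sketch in Python, using only the standard library (math.comb needs Python 3.8+):

```python
from math import comb

def binom_pmf(n, p, x):
    """P(X = x) for a binomial with n trials and success probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Pure guessing on a 10-question true/false quiz: n = 10, p = 0.5.
# Passing means 6 or more correct.
p_pass = sum(binom_pmf(10, 0.5, x) for x in range(6, 11))
print(round(p_pass, 3))  # 0.377 -- better than a 1-in-3 chance by luck alone
```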
314–5 Know the rule of thumb for normal approximation: variance of X or VAR(X) = np(1−p) ≥ about 10
Use Empirical Rule (when it applies) to identify unusual results. Figure the values of μ±2σ; the Empirical Rule says 95% of the cases fall within those bounds. Anything outside those bounds is unusual (probability < 5%).
Example 8 is preview of inferential stats.
What if np(1−p) < about 10? Then the histogram is too different from a bell curve, so you can’t use the Empirical Rule and must use exact computations with MATH200A part 3, and your criterion for “unusual” is probability under .025, not .05. See example below.
315 try review #1,3–6
(See the Web page; this isn’t in your book.) Briefly, in Talladega County in 1965, 26% of eligible jurors were black, but the 100-man jury pool in Swain’s case included only 8 blacks. Racial bias, or just the luck of the draw in random selection?
Solution: First note that this is a binomial distribution: a pool member is either black (success) or not black (failure), there is a fixed number of trials (100 in the pool), and whether one is black should have nothing to do with whether another is black. So we have a BPD with n = 100, p = 0.26, x = 8. Is this black/white mix unusual? And is it so unusual that it calls the state’s claim of random selection into question and suggests that the selection was racially biased?
Method 1: Check whether the BPD can be approximated by a normal distribution:
np(1−p) = 100×.26×(1−.26) ≅ 19.2
This is >10, so we can use the normal approximation.
μ = np = 100×.26 = 26
σ = √[np(1−p)] = √[100×.26×(1−.26)] ≅ 4.4
μ−2σ = 26−2×4.38634 ≅ 17.2 and μ+2σ = 26+2×4.38634 ≅ 34.8
Any 100-man jury pool with fewer than 17.2 blacks or more than 34.8 blacks would be unusual. (We can say that better as “fewer than 18 or more than 34”.) The jury pool in Swain’s case had only 8 blacks, so this is indeed unusual. How unusual? You’ll learn to compute a p-value in Chapter 10, but for now look at the z-score:
z = (x−μ)/σ = (8−26)/4.38634 ≅ -4.10
You already know that z-scores outside ±2 are unusual (5% likely), and outside ±3 are quite unusual (0.3% chance). So a z-score below −4 is very, very unusual. We have a lot of trouble believing this was just random chance, particularly if we found a similar pattern with other cases involving black defendants and white victims.
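Method 1's arithmetic can be sketched in a few lines of Python:

```python
import math

n, p, x = 100, 0.26, 8

mu = n * p                          # 26
sigma = math.sqrt(n * p * (1 - p))  # about 4.386; np(1-p) = 19.24 >= 10,
                                    # so the normal approximation applies
low, high = mu - 2 * sigma, mu + 2 * sigma
print(round(low, 1), round(high, 1))  # about 17.2 and 34.8

z = (x - mu) / sigma
print(round(z, 2))  # about -4.1, far outside the usual +/-2 range
```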
Method 2: MATH200A Program part 3 lets you compute the likelihood directly, whether or not the normal approximation is valid in a particular problem.
n=100, p=.26, x from 0 to 8. P(x≤8) = 4.73×10⁻⁶ ≅ 0.000 005
In other words, in a 100-man jury pool assembled by random selection, there are only 5 chances in a million of getting as few as eight blacks. This is far below the “unusual” threshold of 0.025.
Why did we test 0 to 8, not just 8? When deciding whether a particular result is unusual or surprising, you always compute the probability of getting the number you’re interested in, or a number further from the expected value. The mean or expected value is np = 100×.26 = 26, and 8 is below that so you find the probability of x from 0 to 8. If you wanted to test whether 35 in the jury pool was unusual, 35 is greater than the mean so you compute the probability of x from 35 to 100.
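If you don't have the MATH200A program handy, the same cumulative probability is just a sum of binomial terms. A Python sketch of Method 2 (the program itself is a TI-83/84 tool; this computes the equivalent sum):

```python
from math import comb

def binom_cdf_low(n, p, x_max):
    """P(X <= x_max) for a binomial: sum the pmf from 0 up to x_max."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(x_max + 1))

# Swain jury pool: n = 100, p = 0.26, eight or fewer blacks.
p_low = binom_cdf_low(100, 0.26, 8)
print(p_low)  # about 4.7e-06 -- far below the "unusual" threshold of 0.025
```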
With this method, why is “unusual” 0.025 instead of 0.05? When you use the 68–95–99.7 Rule to test whether something is unusual, you’re testing simultaneously two possibilities: that it’s too far above the mean, and that it’s too far below. When you compute the binomial probability, you’re testing only one of those two directions, and therefore you allow only half the probability as your criterion. But, having the actual probability, you’re not dependent on words like “unusual”.