
Seeing Stars with Random Sampling

Adapted from Introduction to Statistical Investigations, AP Version, by Tintle, Chance, Cobb, Rossman, Roy, Swanson and VanderStoep

Before the Thanksgiving break, I started the sampling chapter in AP Statistics.  This is a unit filled with new vocabulary and many, many class activities.  To get students thinking about random sampling, I have used the “famous” Random Rectangles activity (Google it…you’ll find it) and it’s cousin – Jelly Blubbers. These activities are effective in causing students to think about the importance of choosing a random sample from a population, and considering communication of procedures. But a new activity I first heard about at a summer session on simulation-based inference, and later explained by Ruth Carver at a recent PASTA meeting, has added some welcome wrinkles to this unit.  The unit uses the one-variable sampling applet from the Rossman-Chance applet collection, and is ideal for 1-1 classrooms, or even students working in tech teams.  Also, Beth Chance is wonderful…and you should all know that!

In my classroom notes, students first encounter the “sky”, which has been broken into 100 squares. To start, teams work to define procedures for selecting a random sample of 10 squares, using both the “hat” (non-technology) method and a method using technology (usually a graphing calculator). Before we draw the samples, however, I want students to think about the population – specifically, will a random sample do a “good job” of providing estimates? Groups were asked to discuss what they notice about the sky. My classes immediately sensed something worth noting:

There are some squares where there are many stars (we end up calling these “dense” squares) and some where there are not so many.

Before we even drew our first sample, we were talking about the need to consider both dense and non-dense areas in our sample, and the possibility that our sample will overestimate or underestimate the population mean, even with random sampling. There’s a lot of stats goodness in all of this, and the conversation felt natural and accessible to the students.

Students then used their technology-based procedure to actually draw a random sample of 10 squares, marking off the squares. But counting the actual stars is not reasonable, given their quantity – so it’s Beth Chance to the rescue! Make sure you click the “stars” population to get started. Beth has provided the number of stars in each square, along with information regarding density, row, and column to think about later.

But before we start clicking blindly, let’s describe that population. The class quickly agrees that we have a skewed-right distribution, and takes note of the population mean – we’ll need it to discuss bias later.

Click “show sampling options” on the top of the screen and we can now simulate random samples.  First, students each drew a sample of size 10 – the bottom of the screen shows the sample, summary statistics, and a visual of the 10 squares chosen from the population.


Groups were asked to look at their sample means, share them with neighbors, and think about how close these samples generally come to hitting their target. Find a neighbor whose sample included few “dense” squares, or one where many “dense” squares made the cut – how much confidence do we have in using this procedure to estimate the population mean?

Eventually I unleashed the sampling power of the applet and let students draw more and more samples.  And while a formal discussion of sampling distributions is a few chapters away, we can make observations about the distributions of these sample means.
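If you want to tinker with this idea outside the applet, here is a minimal sketch in Python. The 100 star counts below are invented placeholders (skewed right, like the applet’s sky), not Beth’s actual data, so the numbers will differ from what your class sees.

```python
# Sketch: draw many random samples of 10 squares from a made-up, skewed-right
# "star count" population and look at the behavior of the sample means.
import random
import statistics

random.seed(1)

# Hypothetical population of 100 squares: mostly sparse squares, a few dense ones.
population = [random.randint(0, 8) for _ in range(80)] + \
             [random.randint(20, 60) for _ in range(20)]
pop_mean = statistics.mean(population)

sample_means = []
for _ in range(1000):
    sample = random.sample(population, 10)   # simple random sample of 10 squares
    sample_means.append(statistics.mean(sample))

print(f"population mean:           {pop_mean:.2f}")
print(f"mean of 1000 sample means: {statistics.mean(sample_means):.2f}")
print(f"SD of the sample means:    {statistics.stdev(sample_means):.2f}")
```

Even with this toy population, the sample means center near the population mean while individual samples wander above and below it – exactly the conversation we want students to have.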


And I knew the discussion was heading in the right direction when a student observed:

Hey, the population is definitely skewed, but the means are approximately normal.  That’s odd…

Yep, it sure is…and more seeds have been planted for later sampling distribution discussions. But what about those dense and non-dense areas the students noticed earlier?  Sure, our random samples seem to provide an unbiased estimator of the population mean, but can we do better?  This is where Beth’s applet is so wonderful, and where this activity separates itself from Random Rectangles.  On the top of the applet, we can stratify our sample by density, ensuring that an appropriate ratio of dense / non-dense areas (here, 20%) is maintained in the sample.  The applet then uses color to make this distinction clear: here, green dots represent dense-area squares.


Finally, note the reduced variability in the distribution from stratified samples, as opposed to random samples. The payoff is here!
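Here is a rough sketch of that comparison, reusing the same kind of invented population as the earlier snippet: every stratified sample is forced to contain 2 dense and 8 non-dense squares (the 20% ratio), and we compare the spread of the two collections of sample means.

```python
# Sketch: compare simple random sampling to sampling stratified by density.
import random
import statistics

random.seed(2)
dense     = [random.randint(20, 60) for _ in range(20)]   # hypothetical dense squares
non_dense = [random.randint(0, 8) for _ in range(80)]     # hypothetical sparse squares
population = dense + non_dense

def srs_mean():
    return statistics.mean(random.sample(population, 10))

def stratified_mean():
    # force the 20% dense / 80% non-dense ratio into every sample
    squares = random.sample(dense, 2) + random.sample(non_dense, 8)
    return statistics.mean(squares)

srs_means        = [srs_mean() for _ in range(1000)]
stratified_means = [stratified_mean() for _ in range(1000)]

print(f"SD of simple random sample means: {statistics.stdev(srs_means):.2f}")
print(f"SD of stratified sample means:    {statistics.stdev(stratified_means):.2f}")
```

The stratified means should show noticeably less spread – the same reduced variability the applet makes visible.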

Later, we will look at samples stratified by row and/or column.  And cluster samples by row or column will also make an appearance.  There’s so much to talk about with this one activity, and I appreciate Ruth and Beth for sharing!

Compute Expected Value, Pass GO, Collect $200

Expected Value – such a great time to talk about games, probability, and decision making!  Today’s lesson started with a Monopoly board in the center of the room. I had populated the “high end” and brown properties with houses and hotels.  Here’s the challenge:

When I play Monopoly, my strategy is often to buy and build on the cheaper properties.  This leaves me somewhat scared when I head towards the “high rent” area if my opponents have built there.  It is now my turn to roll the dice.  Taking a look at the board, and assuming that my opponents own all of the houses and hotels you see, what would be the WORST square for me to be on right now?  What would be the BEST square?

For this question, we assumed that my current location is between the B&O and the Short Line Railroads.  The conversation quickly went into overdrive – students debating their ideas, talking about strategy, and also helping explain the scenario to students not as familiar with the game (thankfully, it seems our tech-savvy kids still play Monopoly!).  Many students noted not only the awfulness of landing on Park Place or Boardwalk, but also how some common sums with two dice would make landing on undesirable squares more likely.

ANALYZING THE GAME

After our initial debates, I led students through an analysis, which eventually led to the introduction of Expected Value as a useful statistic to summarize the game.  Students could start on any square they wanted, and I challenged groups to each select a different square to analyze.  Here are the steps we followed.


First, we listed all the possible sums with 2 dice, from 2 to 12.

Next, we listed the Monopoly board space each roll would cause us to land on (abbreviated to make it easier).

Next, we looked at the dollar “value” of each space.  For example, landing on Boardwalk with a hotel has a value of -$2,000.  For convenience, we made squares like Chance worth $0.  Luxury Tax is worth -$100.  We agreed to make Railroads worth -$100 as an average.  Landing on Go was our only profitable outcome, worth +$200. Finally, “Go to Jail” was deemed worth $0, mostly out of convenience.

Finally, we listed the probability of each roll from 2 to 12.

Now for the tricky computations.  I moved away from Monopoly for a moment to introduce a basic example to support the computation of expected value.

I roll a die – if it comes out “6” you get 10 Jolly Ranchers, otherwise, you get 1.  What’s the average number of candies I give out each roll?
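For the record, the arithmetic behind that candy example is just a weighted average of the two outcomes:

```latex
E(\text{candies}) = 10 \cdot \tfrac{1}{6} + 1 \cdot \tfrac{5}{6} = \tfrac{15}{6} = 2.5
```

So, on average, about two and a half Jolly Ranchers leave my candy jar per roll.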

This was sufficient to develop the need for multiplying in our Monopoly table – multiply each value by its probability, find the sum of these products, and we’ll have something called Expected Value.  For each starting square, students verified their solutions and we shared them on a class Monopoly board.
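If groups want to check their work, or analyze a square we never got to, here is a minimal sketch of the computation in Python. The dollar values in the table below are hypothetical stand-ins – plug in the values your group agreed on for its starting square; these made-up numbers will not reproduce the class results.

```python
# Sketch: expected value of one roll from a chosen starting square.
from fractions import Fraction

# Probability of each dice sum (2 through 12) with two fair dice.
ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}
prob = {total: Fraction(count, 36) for total, count in ways.items()}

# Hypothetical dollar value of the space reached by each dice sum.
# Replace these with the board values for your own starting square.
value = {2: 0, 3: 0, 4: -100, 5: 0, 6: -200, 7: 0,
         8: -1500, 9: -100, 10: -2000, 11: 200, 12: 0}

expected = sum(prob[total] * value[total] for total in range(2, 13))
print(f"expected value from this square: {float(expected):.2f} dollars per roll")
```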


These numbers then took on meaning in the context of the problem – “I may land on Park Place, I may roll and hit nothing, but on average I will lose $588 from this position”.

HOMEWORK CHALLENGE: since this went so well as a lesson today, I kept to the theme with an additional assignment:

Imagine my opponent starts on Free Parking.  I own all 3 yellow properties, but can only afford to purchase 8 houses total.  How should I arrange the houses in order to inflict the highest potential damage to my opponent?


I’m looking forward to interesting work when we get back to school!

Note: I discussed my ideas about this topic in a previous post.  Enjoy!

Who Assessed it Better? AP Stats Inference Edition

Free-response questions and exam information in this post freely available on the College Board – AP Statistics website

Today I am stealing a concept from Dan Meyer’s task comparison series “Who Wore it Best”, and bringing it to the AP Statistics exam world. In the series of 6 free-response questions on the AP Stats exam, it is not unusual for one question to focus solely on inference. Compare these two questions, which each deal with inference for proportions.

From 2012:


From 2016:


I read (graded) the 2012 question as an AP Exam reader, and observed a variety of approaches. I find that while many students understand the structure of a hypothesis test, it’s the nuances – the rationales for the steps – which are often lost in the communication. In the 2012 question, students were expected to do the following:

  1. Identify appropriate parameters
  2. State null and alternate hypotheses
  3. Identify conditions
    • Independent, random samples and normality of sampling distribution
  4. Name the correct hypothesis procedure
  5. Compute / communicate test statistic and p-value
  6. Compare the p-value to an alpha level
  7. Make an appropriate conclusion in context of the problem

It’s quite a list.  And given that individual AP exam problems are worth a total of 4 points, steps are often combined into one scoring element.  Here, naming the test and checking conditions were bundled – as such, precision in providing a rationale for conditions was often forgiven.  For example, if students identified the large sample sizes as a necessary condition, this was sufficient, even if there was no recognition of a link to normality of sampling distributions.  Understanding the structure of a hypothesis test – with appropriate communication – was clearly the star of the show.  While inference is one of the “big ideas” in AP Statistics, my view is that questions like this one from 2012 encourage cookbook statistics, where memorized structure takes the place of deeper understanding.

So it was with much excitement that I saw question 5 from 2016.  Here, the interpretation of a confidence interval was preserved. But I appreciate the work of the test development committee in parts b and c; rather than having students simply list and confirm conditions for inference, the exam challenges students to be quite specific about their rationales. With parts b and c, students certainly struggled more than with the conditions in 2012, but I hope their inclusion causes statistics teachers to reconsider their approach to hypothesis-test conditions. The mean scores for each question speak to students’ struggles on this question, compared to the traditional hypothesis-testing structure:

  • 2012: 1.56 (out of 4)
  • 2016: 1.27 (out of 4)

The inclusion of part b of question 5 this year, where students were asked to defend the np > 10 condition, was perfect timing for my classes.  This year, I tried a new approach to help develop student understanding of the binomial distribution / sampling distribution relationship. I found that while many students will continue to resort to the “short cut” – memorizing conditions – a higher proportion of students were able to provide clear communication of this inference condition.

The AP Statistics reading features “Best Practices Night” – where classroom ideas are shared.  You can find resources from the last few years at Jason Molesky’s APStatsMonkey site. I shared my np > 10 ideas with the group, and received many positive comments about it.  Enjoy my slides here, and feel free to contact me with questions regarding this lesson:

Finally, I can’t express how wonderfully rich a professional-development experience the AP reading is.  I always find myself with a basket of new classroom ideas and contacts to share with – it’s stats-geek Christmas.  For me, 2016 is the year the #MTBoS started to make its mark at the Stats reading – I met so many folks from Twitter, and we held our first-ever tweet-up!

Also, the vibrant Philadelphia-area stats community was active as always.  We meet as a group a few times a year to share ideas and lessons; seeing so many from this area participate in the reading makes us all better with what we do for our students.

 

When Binomial Distributions Appear Normal

We’re working through binomial and geometric distributions this week in AP Stats, and there are many, many seeds which get planted in this chapter which we hope will yield bumper crops down the road. In particular, normal estimates of a binomial distribution – which later become conditions in hypothesis testing – are valuable to think about now and tuck firmly into our toolkit.  This year, a Desmos exploration provided rich discussion and hopefully helped students make sense of these “rules of thumb”.

Each group was equipped with a netbook, and some students chose to use their phones. A Desmos binomial distribution explorer I had pre-made was linked on Edmodo. The explorer allows students to set the parameters of a binomial distribution, n and p, and view the resulting probability distribution. After a few minutes of playing, I asked students what they noticed about these distributions.

A lot of them look normal.

Yup. And now the hook has been cast.  Which of these distributions “appear” normal, and under what conditions?  In their teams, students adjusted the parameters and assessed the normality.  In the expressions, the normal overlay provides a theoretical normal curve, based on the binomial mean and standard deviation, along with error dots. This provides more evidence as students debate normal-looking conditions.
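For anyone without the Desmos file handy, here is a rough Python stand-in for the explorer (using matplotlib): it draws the binomial probabilities for a chosen n and p, with the matching normal curve (mean np, standard deviation sqrt(np(1-p))) laid on top.

```python
# Sketch: binomial probability distribution with a normal overlay.
import math
import matplotlib.pyplot as plt

def binomial_pmf(n, p):
    return [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 40, 0.1                                   # try different settings here
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

ks = list(range(n + 1))
plt.bar(ks, binomial_pmf(n, p), label=f"Binomial(n={n}, p={p})")
xs = [i / 10 for i in range(10 * n + 1)]
plt.plot(xs, [normal_pdf(x, mu, sigma) for x in xs], color="red", label="Normal overlay")
plt.legend()
plt.show()
```

Adjusting n and p and re-running gives the same “does this look normal?” debate the applet generates.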


Each group was then asked to summarize their findings:

  • Provide 2 settings (n and p) which yield firm normality.
  • Provide 2 settings (n and p) which yield a clearly non-normal distribution.
  • Optional: provide settings which leave you “on the fence”.

My student volunteer (I pay in Jolly Ranchers) recorded our “yes, it’s normal!” data, using a second Desmos parameter tracker.  What do we see in these results?


Students quickly agreed that higher sample sizes were more likely to be associated with a normal approximation. Now let’s add in some clearly non-normal data dots. After a few dots were contributed, I gave an additional challenge – provide parameters with a larger sample size which still seem anti-normal. Here’s what we saw:

 


The discussion became quite spirited: we want larger sample sizes, but extreme p’s are problematic – we need to consider sample size and probability of success together!  Yes, we are there!  The rules of thumb for a normal approximation to a binomial had been presented in an earlier flipped video lecture – np > 10 and n(1 – p) > 10 – but now the interplay between sample size and probability of success was clear.

And what happens when we overlay these two inequalities over our observations?


Awesomeness!  And having our high sample sizes clearly outside of the solution region made this all the more effective.

Really looking forward to bringing this graph back when we discuss hypothesis testing for proportions.

An Egg-Cellent Simulation

The scenario I used for a fun lesson with my 9th graders this week comes from a talk by James Bush of Waynesburg College, which I attended at the US Conference on Teaching Statistics in May.  James is a master of finding clips from TV and movies to use in his class to encourage discussion, and this clip from Jimmy Fallon’s show features a game called “Egg Russian Roulette”. I have embedded a clip here, but you can search for the many times Jimmy played this game on his show.

For James’ college statistics courses, this clip is a helpful opener to the hypergeometric distribution, where we are interested in multiple successes from draws made without replacement.  While this setting could eventually be presented to my AP Stats students, it lives a bit outside the scope of the AP course. But there are some strong entry points for discussion with my 9th graders, including probability trees, conditional probability, and simulation.

PLAYING THE GAME IN CLASS

Before showing the video, two volunteers were called to participate in a mystery game.  The two student volunteers became a bit nervous over their decision when a carton of eggs was produced, eventually shown to be filled with plastic eggs (awesome idea by James!). My first chance to try this with volunteers on my own was at Twitter Math Camp in June, and lots of fun tweets followed.

Thanks to Richard Villanueva, who recorded many of the My Favorites from Twitter Math Camp, we have coverage of the gameplay.  Check out all of the videos of My Favorites from TMC15 on his YouTube channel.

Next, I asked the class to think of questions they have about the game they saw, or in general about Egg Roulette.  A good starting list developed:

  • How likely is it that Tom Cruise would lose that quickly?
  • Once Tom picks a raw egg, how likely is it that Jimmy is safe on his draw?
  • Is it better to go first or second?

I then challenged my student groups to sketch out the first three rounds of Egg Roulette, and find the probability of Tom losing in 3 rounds. We had worked on trees the day before, and this game presented a good chance to apply what we had discussed earlier.


SIMULATION

After our analysis of the first three rounds, the conversation moved to strategy: is it better to go first or second, or does it not matter?  Our “gut reaction” poll revealed that “it doesn’t matter” was the most common response, with “go first” in second place.  The thought behind going first is that you could easily draw a hard-boiled egg, and thus put pressure on the other player.

To simulate the game, pairs of students were given one suit from a deck of cards.  The ace was set aside, leaving 12 cards (representing the 12 eggs).  The 10, jack, queen, and king represented the raw eggs.  After shuffling the cards, students dealt them into two piles, Tom and Jimmy. When a player was dealt two raw eggs, the game ended and the result was recorded.  We were quickly able to simulate over 50 plays of Egg Roulette, and the class results were tallied.
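If you’d rather let a computer shuffle the cards, here is a minimal sketch of the same simulation in Python: 12 eggs, 4 of them raw, dealt alternately until one player collects 2 raw eggs.

```python
# Sketch: estimate how often the first player loses Egg Roulette.
import random

def first_player_loses():
    eggs = [True] * 4 + [False] * 8        # True = raw, False = hard-boiled
    random.shuffle(eggs)
    raw_count = [0, 0]                     # raw eggs held by player 0 and player 1
    for turn, egg in enumerate(eggs):
        player = turn % 2                  # players alternate picks
        raw_count[player] += egg
        if raw_count[player] == 2:
            return player == 0             # True when the first player just lost
    # never reached: someone must collect 2 raw eggs before the carton empties

games = 100_000
losses = sum(first_player_loses() for _ in range(games))
print(f"first player loses in about {losses / games:.3f} of simulated games")
```

Run enough games and the estimate settles near 5/9 – consistent with the class observation that the first player is at a disadvantage.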


One student quickly identified, and then defended, that the game can NEVER go the full 12 rounds.  Also, some students noted that the second player (here, Tom) has one extra opportunity to lose the game.  A second straw poll revealed that student perceptions on the game had shifted – few thought it was a 50/50 game, and many saw that the first player held a disadvantage.

FOLLOW-UP

In teams, students are now challenged to explore a similar (yet shorter) Egg Roulette game, compute theoretical probabilities, conduct a simulation, and analyze the results.  I’m looking forward to some interesting-looking trees.  The document here shows guidelines for this activity, some assignment ideas, and a full tree for the first 3 rounds of Egg Roulette.

Download the document

Desmos Lessons for AP Statistics

In the past year-plus, Desmos has added useful features to help those of us in the statistics world. The elegant addition of regressions (check out my tutorial video) has been a welcome new feature, and simple stats commands have also been added for lists.  Here are 3 Desmos creations which will become part of my classroom lessons for the 2015-16 AP Stats year.

THE COEFFICIENT OF DETERMINATION

That dreaded r-squared sentence…yep, the kids need to memorize it, but let’s add some meaning behind the “percent of variation due to the linear relationship….blah blah blah…” mantra.  Here’s an activity I do with my classes which has helped flesh out this regression idea.  To start, every student is handed a card face down with a prompt.  On my signal, the students turn over the card and respond to the prompt, with specific instructions not to discuss their response with classmates.  Here’s the prompt:

An adult male enters the room. Estimate his weight.

After some nervous mumbling, I now hand out a second prompt card, and we will repeat the process.  But this time the card looks a little different.

An adult male who is {*see below} tall enters the room. Estimate his weight.

This time, I have 6 different versions of cards, and they are randomly scattered about the room.  Some cards say “5 feet, 6 inches” for the height, with other cards for 5’9″, 6’0″, 6’3″, 6’6″ and 6’9″.

After responses for both cards have been given, the responses are written on the board, along with the associated heights for the 2nd round of cards.  How did the background information given in the 2nd set of cards influence our responses?  Now the bait has been set to look at the Coefficient of Determination on Desmos.

In this Desmos graph, heights and weights of adult males are given in a scatterplot. Activating the first folder – “using the mean of y1 for prediction” – shows us the mean of all weights, and the associated errors if the mean weight were used as the prediction for all men. The folder is activated by clicking the open circle to the left.

Next, we can explore how the regression line helps improve predictions. Click the “LSRL and explained variation” folder and note the reduction in error.  The calculation for r-squared as the reduction of error is given, and can be compared to the calculated r-squared value from the regression.  Also, points in the scatterplot are draggable – so play away!
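In symbols, the comparison the folder is displaying boils down to the following, where SST is the total squared error from predicting every weight with the mean of y, and SSE is the squared error left over after using the LSRL:

```latex
r^2 = \frac{SST - SSE}{SST} = 1 - \frac{SSE}{SST}
```

That “percent of variation” sentence is simply this fraction put into words.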

THE MEAN-MEAN POINT IN REGRESSION

I have done this exploration of regression facts for many years, using worksheets from Daren Starnes along with Fathom. I find this Desmos version to be much easier for kids to handle, and it can be saved for future discussion.  And while in this demonstration I have all of the commands prepared for you, I would walk students through entering the commands themselves in class.

First, we have a scatterplot with its LSRL included.  Activate the “mean of x and y” folder and notice the intersection of these value lines. Here, the points are all draggable, so we can easily generalize that all LSRLs pass through the point (x-bar, y-bar).

The second discovery is a bit more subtle.  Click the next folder, and now we have new lines 1 standard deviation in each direction for x and y.  Clearly, our intersection point is no longer on the LSRL, but what is its significance?  How far do we rise and run to get to this new point on the LSRL?  Some calculation and discussion may help students discover this fact about the slope of an LSRL:

b = r · (s_y / s_x), where b is the slope of the LSRL and s_y, s_x are the standard deviations of y and x.

This is not a fact students need to memorize in AP Stats, but certainly the discussion builds understanding of regression beyond what our calculator provides.

NORMAL APPROXIMATION OF THE BINOMIAL DISTRIBUTION

Lists on Desmos have strong potential for investigating a distribution by using a formula repeatedly.  In this Desmos demonstration, students investigate the behavior of the binomial distribution, using sliders to define values for n and p in the distribution.  Activating the normal curve folder allows us to assess the “fit” of the binomial distribution against a normal curve.  I added the purple dots near the top to make it easier to investigate where the normal approximation is strong or weak in approximating its binomial cousin.

While Desmos has a while to go before it will replace graphing calculators in my AP Stats class, these activities will be part of my classroom this year.  Looking forward to creating and sharing more!

8 Take-Aways From USCOTS

This weekend, I spent two days at USCOTS, the United States Conference on Teaching Statistics, at Penn State University. The opportunity to connect with old friends, share ideas, and reflect on my own practices was exciting. Here is just a sampling of my experiences, many of which could be their own blog post.  You can find many speaker presentations and more resources on the CauseWeb site. Hope you enjoy!

MOVING FROM SANITIZED DATA SETS – a keynote by Shonda Kuiper of Grinnell College noted that while interest in the study of statistics is at an all-time high, are we really preparing students to apply statistical concepts in a realistic manner? Shonda challenged the group to move away from canned textbook data sets and let large, real data sets drive coursework.  Her Stats2Labs website is a treasure chest of activities and data sets, including a rich set of NY stops-and-arrests data, and Shonda shared her methods for using the set to facilitate discussions.  For my high school classes, I am most looking forward to using the Tangram Game applet, which collects many variables on gameplay.

I often tell my students that statistics is about telling a story, and I was thrilled to hear a college professor share this theme as well!

STUDENT POSTERS – the poster sessions were interesting to me as a high school teacher, as my own students are preparing for “Stats Fair” next week.  How awesome to show my students that the presentations they are about to share are not too dissimilar from those they may encounter later in their academic careers, differing mainly in the sophistication of the studies and methods. You can’t beat having a small-group discussion with Beth Chance regarding the Rossman-Chance stats applets, which should be a part of every Stats class.  Posters on HS integrated math programs, flipped learning, and formative assessments provided info to think about for next year.

EGG ROULETTE – a session by James Bush and Jen Bready led to a fun “hook” I hope to try with my 9th graders next year.  James and Jen are masters of using video and pictures from the media to engage learners and grow discussion. Here, James chose Doug Tyson and me as “volunteers” to participate in a game. I quickly became worried when it was advertised as “Egg Russian Roulette”, and a clip from the Jimmy Fallon show was played:

What have I gotten myself into here?

The plot thickens when an egg carton makes an appearance…filled with plastic eggs, some containing packing peanuts. I lost after 3 picks, and the group then ran a simulation of the activity. Is there an advantage to being first? James alleges the person playing first loses 5/9 of the time. Try a simulation with your classes and find out!

CATCHING UP WITH OLD FRIENDS – Ruth Carver teaches high school about 20 minutes from me, and I cherish the times we find to trade stories.  By the time I arrived at the conference, Ruth was already gushing over the many great sessions she had attended, and shared a quote from Dick DeVeaux which applies nicely to all classrooms:

Students like uncomfortable learning less and less.  They like things clear as a bell with no sweat, no thinking, no neurons firing.  They are confusing easy and comfortable with learning. To use a sports analogy: is that what you want from your sports practices – easy, comfortable, they didn’t break a sweat?  Well, it should be the same with your classes; they should be sweating afterwards.  It’s hard stuff; they should be thinking hard.

What are we all doing to make sure our students sweat in math class?

In return, I shared one of my new favorite ways to collect fun data: the website how-old.net. How well does it predict your age? What if you smile? What if you wear a hat? You will be toying with this with your friends at your next gathering.

SIMULATION-BASED INFERENCE – This has become a hot topic in the stats world – I have come to use the StatKey site more often in my classes to have students simulate distributions – and I was eager to learn how to leverage simulations alongside traditional hypothesis-testing methods. Robin Lock and Kari Lock Morgan shared examples where simulations allowed us to compute simulated distributions, and then move those results into traditional distributions and test statistics. My AP classes have generally been “successful” in that my AP passing rate is quite good, so it becomes tricky to want to ditch old methods. But the experiences and communication gained from simulation methods are too rich to be ignored. Infusing my classroom with more simulation-based inference could dominate much of my planning for next year.

CONNECTING – Connection was the theme of the conference, and a part of all of the talks. I strengthened bonds with old friends (many of whom I will see in 2 weeks at the AP Stats reading), and appreciate the many new folks I met for the first time. Doug Tyson’s silly selfie challenge gave me the courage to say hello to many people I wouldn’t normally have approached.  And though Doug won the challenge with a late-Friday “get” of Jessica Utts, the new AP Stats Chief Reader – which I came so close to photo-bombing – I’ll take my photos with Allan Rossman and Roxy Peck as a well-deserved second place.


DISCONNECTING – The Saturday lunchtime talk by Michael Posner of Villanova University inspired the group by sharing the many connections he has made with the stats community over his career. Michael often shares at our local PASTA (Philly-Area Stats Teachers Association) meetings, and I appreciate his desire to connect with high school teachers.

While explaining the power of connections he has made, Michael also challenged the group to disconnect, and reflect upon their teaching.  In particular, are we using our Stats expertise to clearly measure the efficacy of our teaching methods?  And while sharing ideas at conferences can be energizing, how do we personalize what we have gathered to work for our classroom?  Such great themes to consider at the close of a conference.

AFFIRMATION AND REFLECTION – When I first started teaching AP Stats, I was cautioned that stats teachers are often the loneliest people in their departments. Walk into a high school math planning room, bring up methods for solving quadratic equations, and you may soon have a full-group conversation.  But try to start a discussion of two-sample t-tests?  Crickets… This conference was attended by about 450 passionate stats people, with only about 10% being high school folks. But the college crowd could not have been nicer or more accommodating in wanting to share their ideas.  The entire experience left me energized that I am headed in (mostly) the right direction in what I do to encourage stats study, and with plenty of resources and connections for improving my practice.  Looking forward to USCOTS 2017!