
New AP Stats Teacher Moves Using Desmos

Last spring, the awesome folks at Desmos released a slew of slick (but easy-to-use) statistics features. Here is a brief video I made which walks through a few of the new features. With a new academic year beginning, I’m looking forward to changing some of my classroom moves in AP Stats to leverage the new features and build understanding. Here are 3 moves I’m planning to try this year:

ASSESSING NORMALITY (Here is a previous post on this topic)

Pop quiz! Below you see 6 boxplots. Each boxplot represents a random sample of size 20, each drawn from a large population. Which of the underlying populations have an approximately normal shape? Take a moment to think how you…and your students…might answer…

6 random samples (n=20) from “large” populations

Have your answers ready? Here comes the reveal…

Not only does each of the samples above come from a normal population, they each come from the same theoretical population! This year in class I plan to walk students through how to build their own random sampler on Desmos, which takes only a few intuitive commands. When the “random” command is used, we now get a “re-randomize” button which allows students to cycle through many random samples and assess the shapes. You can toy with my graph here.
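If you’d like to poke at the same idea outside of Desmos, here is a minimal Python sketch of the sampler: repeated samples of size 20 drawn from one normal population, each reduced to a five-number summary. The population mean and SD here are placeholder choices of mine, not the values in my graph.

```python
import numpy as np

# Draw several samples of size 20 from the SAME normal population and
# print a five-number summary for each. Population parameters (mean 100,
# sd 15) are placeholders for illustration.
rng = np.random.default_rng()

for i in range(6):
    sample = rng.normal(loc=100, scale=15, size=20)
    q1, med, q3 = np.percentile(sample, [25, 50, 75])
    print(f"Sample {i + 1}: min={sample.min():6.1f}  Q1={q1:6.1f}  "
          f"median={med:6.1f}  Q3={q3:6.1f}  max={sample.max():6.1f}")
```

Run it a few times – the variety in the six summaries mirrors what students see when they hammer the re-randomize button.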

Often students look for strict symmetry or place too much stock in different-sized tails. This is a great opportunity to have students explore and understand sampling variability. Teach your students to widen their nets when trying to assess normality, and remember: our job is usually not to “prove” normality. Instead, these samples show that the assumption of population normality is often safe and reasonable, even when a small sample looks a bit lopsided.

LINEAR TRANSFORMATIONS OF DATA

Analyzing univariate data using Desmos is now quite easy. Let your students build and explore their own data sets. Data can be either typed in as a list or imported from a spreadsheet using copy/paste. The command “Stats” provides the 5-number summary, and commands for mean and standard deviation are also available. You can play around with my dataset here.
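If you want to check Desmos’s output against another tool, here is a rough Python equivalent of those commands, using placeholder scores rather than my actual data. (One caution: quartile conventions differ slightly across tools, so Q1 and Q3 may not match Desmos exactly.)

```python
import numpy as np

# Rough equivalents of the stats, mean, and stdev commands for a typed-in list.
scores = np.array([62, 71, 74, 78, 80, 83, 85, 88, 91, 95])  # placeholder data

print("5-number summary:", np.percentile(scores, [0, 25, 50, 75, 100]))
print("mean:", scores.mean())
print("sample sd:", scores.std(ddof=1))  # ddof=1 gives the sample standard deviation
```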

Next, I want my students to consider transformations to the data set. In my example I have provided a list of test scores along with their summary statistics. Let’s think about a “what if”. In the next lines I provide 2 boxplot commands, but I have intentionally “ruined” each command by placing an apostrophe before it (thanks to Christopher Danielson for this powerful move!). What will happen if every student is given 5 “bonus” points? What if I feel generous and add 10% to everyone’s grade?

What will happen when I remove those apostrophes? Think about the center, shape, and spread of the resulting boxplots. How will these new boxplots be similar to and different from the original?

Compute new summary statistics. Which stats change…by how much…and what stays the same? Why? I’m looking forward to having students build their own linear transformation graphs, investigating and summarizing their findings! Here is a graph you can use with your classes to explore these linear transformations with sliders.
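For anyone who wants a numerical check on the Desmos exploration, here is a short Python sketch of the same “what ifs” (placeholder scores again). Adding a constant shifts the measures of center but leaves the spread alone; multiplying rescales both.

```python
import numpy as np

# Compare summary statistics for the original scores, scores + 5, and scores * 1.10.
scores = np.array([62, 71, 74, 78, 80, 83, 85, 88, 91, 95])  # placeholder data

for label, data in [("original", scores),
                    ("+5 points", scores + 5),
                    ("x 1.10", scores * 1.10)]:
    iqr = np.subtract(*np.percentile(data, [75, 25]))
    print(f"{label:>9}: mean={data.mean():6.2f}  median={np.median(data):6.2f}  "
          f"sd={data.std(ddof=1):5.2f}  IQR={iqr:5.2f}")
```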

COMBINATIONS OF DISTRIBUTIONS

An important topic later in AP Stats – what happens when we combine distributions by adding or subtracting? Often I will use SAT scores as a context to introduce this topic because there are two sections (verbal and math) and a built-in need to add them – what are the total scores? On which section do students tend to do “better”…and by how much? To build a Desmos interactive here, I start with a theoretical normal distribution with mean 500 and standard deviation 100 to represent both the math and verbal score distributions. Next, taking 2 random samples of size 1000 and building commands to add and subtract them allows us to look at distributions of sums and differences and compare their center, shape and spread.

The most important take-away for students here should be that distributions of sums and differences have similar variability. This is a tricky, yet vital, idea for students as they begin to think about hypothesis tests for 2 samples. You can use my graph, or build your own. Note – in my graph the slider is used to generate repeated random samples.
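For the skeptics, here is a minimal Python version of the same demo. The key fact it illustrates: for independent X and Y, Var(X + Y) = Var(X - Y) = Var(X) + Var(Y), so both the sums and the differences end up with a standard deviation near sqrt(100^2 + 100^2), about 141.

```python
import numpy as np

# Two independent "SAT section" samples (mean 500, sd 100), then their
# sums and differences. Both combined distributions show similar spread.
rng = np.random.default_rng()
math_scores = rng.normal(500, 100, size=1000)
verbal_scores = rng.normal(500, 100, size=1000)

for label, data in [("sum", math_scores + verbal_scores),
                    ("difference", math_scores - verbal_scores)]:
    print(f"{label:>10}: mean={data.mean():7.1f}  sd={data.std(ddof=1):6.1f}")
```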


What’s Going On in This Graph?

Today the New York Times Learning Network dropped the first “What’s Going On in This Graph?” (WGOITG) of the new school year. This feature started last year as a monthly piece, but now expands to a weekly release. In WGOITG, an infographic from a previous NYT article is shown with the title, and perhaps some other salient details, stripped away – like this week’s graph…


Challenge your students to list some things they notice and wonder about the graph, and visit the NYT August post to discover how teachers use WGOITG in their classrooms. Here are some ideas I have used before with my 9th graders:

  • Have students work in pairs to write a title and lede (brief introduction) to accompany the graph.
  • Ask tables to develop a short list of bullet-point facts which are supported by the graph, and share out on note cards.
  • Have students consider how color, sizing, and scaling are used in effective ways to support the story (note how the size of the arrows plays a role in the graph shown here). This is a wonderful opportunity to think of statistics beyond traditional graphs and measures.

Invite your students to join in the moderated conversation, which drops on Thursday. Have your own favorite way to use WGOITG? Share it in the comments!


Seeing Stars with Random Sampling

Adapted from Introduction to Statistical Investigations, AP Version, by Tintle, Chance, Cobb, Rossman, Roy, Swanson and VanderStoep

Before the Thanksgiving break, I started the sampling chapter in AP Statistics.  This is a unit filled with new vocabulary and many, many class activities.  To get students thinking about random sampling, I have used the “famous” Random Rectangles activity (Google it…you’ll find it) and its cousin – Jelly Blubbers. These activities are effective in causing students to think about the importance of choosing a random sample from a population, and considering communication of procedures. But a new activity I first heard about at a summer session on simulation-based inference, and later explained by Ruth Carver at a recent PASTA meeting, has added some welcome wrinkles to this unit.  The activity uses the one-variable sampling applet from the Rossman-Chance applet collection, and is ideal for 1:1 classrooms, or even students working in tech teams.  Also, Beth Chance is wonderful…and you should all know that!

In my classroom notes, students first encounter the “sky”, which has been broken into 100 squares. To start, teams work to define procedures for selecting a random sample of 10 squares, using both the “hat” (non-technology) method and a method using technology (usually a graphing calculator). Before we draw the samples, however, I want students to think about the population – specifically, will a random sample do a “good job” of providing estimates? Groups were asked to discuss what they notice about the sky.  My classes immediately sensed something worth noting:

There are some squares where there are many stars (we end up calling these “dense” squares) and some where there are not so many.

Before we even drew our first sample, we were talking about the need to consider both dense and non-dense areas in our sample, and the possibility that our sample would overestimate or underestimate the population, even with random sampling.  There’s a lot of stats goodness in all of this, and the conversation felt natural and accessible to the students.

Students then used their technology-based procedure to actually draw a random sample of 10 squares, marking off the squares.  But counting the actual stars is not reasonable, given their quantity – so it’s Beth Chance to the rescue!  Make sure you click the “stars” population to get started.  Beth has provided the number of stars in each square, along with information regarding density, row and column to think about later.

But before we start clicking blindly, let’s describe that population.  The class quickly agreed that we have a skewed-right distribution, and took note of the population mean – we’ll need it to discuss bias later.

Click “show sampling options” at the top of the screen and we can now simulate random samples.  First, students each drew a sample of size 10 – the bottom of the screen shows the sample, summary statistics, and a visual of the 10 squares chosen from the population.


Groups were asked to look at their sample means, share them with neighbors, and think about how close these samples generally come to hitting their target.  Find a neighbor whose sample included few “dense” squares, or one where many “dense” squares made the cut: how much confidence do we have in using this procedure to estimate the population mean?

Eventually I unleashed the sampling power of the applet and let students draw more and more samples.  And while a formal discussion of sampling distributions is a few chapters away, we can make observations about the distributions of these sample means.
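If you don’t have the applet handy, here is a rough Python stand-in for this part of the activity. The exponential “sky” below is an invented population, not Beth’s actual star counts, but it delivers the same punchline: a clearly skewed population whose sample means pile up around the population mean.

```python
import numpy as np

# A right-skewed "stars" population of 100 squares, repeated random
# samples of 10 squares, and the resulting sample means.
rng = np.random.default_rng()
population = rng.exponential(scale=40, size=100).round()  # invented, skewed right

sample_means = [rng.choice(population, size=10, replace=False).mean()
                for _ in range(1000)]

print("population mean:", round(population.mean(), 1))
print("mean of 1000 sample means:", round(np.mean(sample_means), 1))
print("sd of sample means:", round(np.std(sample_means), 1))
```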


And I knew the discussion was heading in the right direction when a student observed:

Hey, the population is definitely skewed, but the means are approximately normal.  That’s odd…

Yep, it sure is…and more seeds have been planted for later sampling distribution discussions. But what about those dense and non-dense areas the students noticed earlier?  Sure, our random samples seem to provide an unbiased estimator of the population mean, but can we do better?  This is where Beth’s applet is so wonderful, and where this activity separates itself from Random Rectangles.  On the top of the applet, we can stratify our sample by density, ensuring that an appropriate ratio of dense / non-dense areas (here, 20%) is maintained in the sample.  The applet then uses color to make this distinction clear: here, green dots represent dense-area squares.


Finally, note the reduced variability in the distribution from stratified samples, as opposed to random samples. The payoff is here!
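Here is a hedged Python sketch of that comparison, with invented star counts: 20 “dense” squares and 80 sparse ones, matching the 20% ratio. The stratified means should come out with a noticeably smaller standard deviation than the plain random samples.

```python
import numpy as np

# Plain random samples of 10 squares versus samples stratified by
# density (always 2 dense + 8 non-dense). All counts are invented.
rng = np.random.default_rng()
dense = rng.normal(120, 15, size=20).round()   # star-heavy squares
sparse = rng.normal(20, 8, size=80).round()    # everything else
population = np.concatenate([dense, sparse])

srs_means = [rng.choice(population, 10, replace=False).mean()
             for _ in range(1000)]
strat_means = [np.concatenate([rng.choice(dense, 2, replace=False),
                               rng.choice(sparse, 8, replace=False)]).mean()
               for _ in range(1000)]

print("SRS:        sd of sample means =", round(float(np.std(srs_means)), 2))
print("stratified: sd of sample means =", round(float(np.std(strat_means)), 2))
```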

Later, we will look at samples stratified by row and/or column.  And cluster samples by row or column will also make an appearance.  There’s so much to talk about with this one activity, and I appreciate Ruth and Beth for sharing!

Compute Expected Value, Pass GO, Collect $200

Expected Value – such a great time to talk about games, probability, and decision making!  Today’s lesson started with a Monopoly board in the center of the room. I had populated the “high end” and brown properties with houses and hotels.  Here’s the challenge:

When I play Monopoly, my strategy is often to buy and build on the cheaper properties.  This leaves me somewhat scared when I head towards the “high rent” area if my opponents built there.  It is now my turn to roll the dice.  Taking a look at the board, and assuming that my opponents own all of the houses and hotels you see, what would be the WORST square for me to be on right now?  What would be the BEST square?

For this question, we assumed that my current location is between the B&O and the Short Line Railroads.  The conversation quickly went into overdrive – students debating their ideas, talking about strategy, and also helping explain the scenario to students not as familiar with the game (thankfully, it seems our tech-savvy kids still play Monopoly!).  Many students noted not only the awfulness of landing on Park Place or Boardwalk, but also how some common sums with two dice would make landing on undesirable squares more likely.

ANALYZING THE GAME

After our initial debates, I led students through an analysis, which eventually led to the introduction of Expected Value as a useful statistic to summarize the game.  Students could start on any square they wanted, and I challenged groups to each select a different square to analyze.  Here are the steps we followed.


First, we listed all the possible sums with 2 dice, from 2 to 12.

Next, we listed the Monopoly board space each die roll would cause us to land on (abbreviated to make it easier).

Next, we looked at the dollar “value” of each space.  For example, landing on Boardwalk with a hotel has a value of -$2,000.  For convenience, we made squares like Chance worth $0.  Luxury Tax is worth -$100.  We agreed to make Railroads worth -$100 as an average.  Landing on Go was our only profitable outcome, worth +$200. Finally, “Go to Jail” was deemed worth $0, mostly out of convenience.

Finally, we listed the probability of each roll from 2 to 12.

Now for the tricky computations.  I moved away from Monopoly for a moment to introduce a basic example to support the computation of expected value.

I roll a die – if it comes out “6” you get 10 Jolly Ranchers, otherwise, you get 1.  What’s the average number of candies I give out each roll?

This was sufficient to develop the need for multiplying in our Monopoly table – multiply each value by its probability, find the sum of these products, and we’ll have something called Expected Value.  For each initial square, students verified their solutions and we shared them on a class Monopoly board.
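For teachers who want a quick check on the arithmetic, here is a small Python sketch of the same multiply-and-sum computation. The Jolly Rancher warm-up comes out to 10(1/6) + 1(5/6) = 2.5 candies per roll. The Monopoly dollar values below are placeholders except for the ones named above; substitute the rents from your own board setup.

```python
from fractions import Fraction

# Warm-up: 10 candies on a six, otherwise 1 candy.
candies = Fraction(1, 6) * 10 + Fraction(5, 6) * 1
print("expected candies per roll:", candies, "=", float(candies))  # 5/2 = 2.5

# Two-dice sums 2..12 occur with counts 1,2,3,4,5,6,5,4,3,2,1 (out of 36).
counts = {s: 6 - abs(s - 7) for s in range(2, 13)}

# Dollar value of the space reached by each roll from one starting square.
# Placeholder amounts except those given in the post (-2000, -100, 0, +200).
value = {2: 0, 3: -1100, 4: -1150, 5: 0, 6: -1200, 7: -100, 8: 0,
         9: -1500, 10: -100, 11: -2000, 12: 200}

expected = sum(Fraction(counts[s], 36) * value[s] for s in range(2, 13))
print("expected value of this position:", float(expected))
```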


The meaning of these numbers then took on importance in the context of the problem – “I may land on Park Place, I may roll and hit nothing, but on average I will lose $588 from this position”.

HOMEWORK CHALLENGE: since this went so well as a lesson today, I held to the theme in providing an additional assignment:

Imagine my opponent starts on Free Parking.  I own all 3 yellow properties, but can only afford to purchase 8 houses total.  How should I arrange the houses in order to inflict the highest potential damage to my opponent?


I’m looking forward to interesting work when we get back to school!

Note: I discussed my ideas about this topic in a previous post.  Enjoy!

Who Assessed it Better? AP Stats Inference Edition

Free-response questions and exam information in this post freely available on the College Board – AP Statistics website

Today I am stealing a concept from Dan Meyer’s task comparison series “Who Wore it Best”, and bringing it to the AP Statistics exam world. In the series of 6 free-response questions on the AP Stats exam, it is not unusual for one question to focus solely on inference. Compare these two questions, which each deal with inference for proportions.

From 2012:

[2012 question image]

From 2016:

[2016 question image]

I read (graded) the question from 2012 as an AP Exam reader, and observed a variety of approaches. I find that while many students understand the structure of a hypothesis test, it’s the nuance – the rationales for steps – which are often lost in the communication. In the 2012 question, students were expected to do the following:

  1. Identify appropriate parameters
  2. State null and alternate hypotheses
  3. Identify conditions
    • Independent, random samples and normality of sampling distribution
  4. Name the correct hypothesis procedure
  5. Compute / communicate test statistic and p-value
  6. Compare the p-value to an alpha level
  7. Make an appropriate conclusion in context of the problem

It’s quite a list.  And given that individual AP exam problems are worth a total of 4 points, steps are often combined into one scoring element.  Here, naming the test and checking conditions were bundled – as such, precision in providing a rationale for conditions was often forgiven.  For example, if students identified the large sample sizes as a necessary condition, this was sufficient, even if there was no recognition of a link to normality of sampling distributions.  Understanding the structure of a hypothesis test – with appropriate communication – was clearly the star of the show.  While inference is one of the “big ideas” in AP Statistics, my view is that questions like this from 2012 encourage cookbook statistics, where memorized structure takes the place of deeper understanding.

So, it was with much excitement that I saw question 5 from 2016.  Here, the interpretation of a confidence interval was preserved. But I appreciate the work of the test development committee in parts b and c; rather than have students simply list and confirm conditions for inference, the exam challenges students to be quite specific about their rationales. With parts b and c, students certainly struggled more than with the conditions in 2012, but I hope their inclusion causes statistics teachers to consider their approach to hypothesis conditions. The mean scores for each question speak to students’ struggles on this question, compared to the traditional hypothesis-testing structure.

  • 2012: 1.56 (out of 4)
  • 2016: 1.27 (out of 4)

The inclusion of part b of question 5 this year, where students were asked to defend the np > 10 condition, was perfect timing for my classes.  This year, I tried a new approach to help develop student understanding of the binomial distribution / sampling distribution relationship. I found that while many students will continue to resort to the “short cut” – memorizing conditions – a higher proportion of students were able to provide clear communication of this inference condition.

The AP Statistics reading features “Best Practices Night” – where classroom ideas are shared.  You can find resources from the last few years at Jason Molesky’s APStatsMonkey site. I shared my np > 10 ideas with the group, and received many positive comments about it.  Enjoy my slides here, and feel free to contact me with questions regarding this lesson:

Finally, I can’t express how wonderfully rich a professional-development experience the AP reading is.  I always find myself with a basket of new classroom ideas and contacts to share with – it’s stats-geek Christmas.  For me, 2016 is the year the #MTBoS started to make its mark at the Stats reading – I met so many folks from Twitter, and we held our first-ever tweet-up!

Also, the vibrant Philadelphia-area stats community was active as always.  We meet as a group a few times a year to share ideas and lessons; seeing so many from this area participate in the reading makes us all better with what we do for our students.


When Binomial Distributions Appear Normal

We’re working through binomial and geometric distributions this week in AP Stats, and there are many, many seeds which get planted in this chapter which we hope will yield bumper crops down the road. In particular, normal approximations of a binomial distribution – which later become conditions in hypothesis testing – are valuable to think about now and tuck firmly into our toolkit.  This year, a Desmos exploration provided rich discussion and hopefully helped students make sense of these “rules of thumb”.

Each group was equipped with a netbook, and some students chose to use their phones. A Desmos binomial distribution explorer I had pre-made was linked on Edmodo. The explorer allows students to set the parameters of a binomial distribution, n and p, and view the resulting probability distribution. After a few minutes of playing, I asked students what they noticed about these distributions.

A lot of them look normal.

Yup. And now the hook has been cast.  Which of these distributions “appear” normal, and under what conditions?  In their teams, students adjusted the parameters and assessed the normality.  In the expressions, the normal overlay provides a theoretical normal curve, based on the binomial mean and standard deviation, along with error dots. This provides more evidence as students debate normal-looking conditions.
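For anyone without Desmos in front of them, here is a text-only stand-in for the explorer: compare each binomial probability to the normal curve with matching mean np and standard deviation sqrt(np(1-p)). This sketch uses scipy, and the settings are just examples to try; swap in your own n and p.

```python
import numpy as np
from scipy.stats import binom, norm

def compare(n, p):
    """Print binomial probabilities alongside the matching normal curve."""
    mean, sd = n * p, np.sqrt(n * p * (1 - p))
    for k in range(n + 1):
        print(f"k={k:2d}  binomial={binom.pmf(k, n, p):.4f}  "
              f"normal={norm.pdf(k, mean, sd):.4f}")

compare(10, 0.5)   # agrees well: looks normal
compare(10, 0.05)  # agrees poorly: strongly skewed
```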


Each group was then asked to summarize their findings:

  • Provide 2 settings (n and p) which provide firm normality.
  • Provide 2 settings (n and p) which provide a clearly non-normal distribution.
  • Optional: provide settings which have you “on the fence”

My student volunteer (I pay in Jolly Ranchers) recorded our “yes, it’s normal!” data, using a second Desmos parameter tracker.  What do we see in these results?


Students quickly agreed that higher sample sizes were more likely to associate with a normal approximation. Now let’s add in some clearly non-normal data dots. After a few dots were contributed, I gave an additional challenge – provide parameters with a larger sample size which seem anti-normal. Here’s what we saw:



The discussion became quite spirited: we want larger sample sizes, but extreme p’s are problematic – we need to consider sample size and probability of success together!  Yes, we are there!  The rules of thumb for a normal approximation to a binomial had been presented in an earlier flipped video lecture, but now the interplay between sample size and probability of success was clear: we need both np > 10 and n(1-p) > 10.
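A few lines of Python make the rule of thumb concrete. The (n, p) settings below are arbitrary examples, including a couple of large-n, extreme-p cases that fail:

```python
# Check np > 10 and n(1-p) > 10 for a batch of settings.
settings = [(40, 0.5), (100, 0.5), (200, 0.02), (50, 0.9), (1000, 0.005)]

for n, p in settings:
    ok = n * p > 10 and n * (1 - p) > 10
    print(f"n={n:5d}, p={p:5.3f}: np={n * p:7.1f}, n(1-p)={n * (1 - p):7.1f}"
          f" -> {'plausibly normal' if ok else 'not normal'}")
```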

And what happens when we overlay these two inequalities over our observations?


Awesomeness!  And having our large-sample, extreme-p settings land clearly outside of the solution region made this all the more effective.

Really looking forward to bringing this graph back when we discuss hypothesis testing for proportions.

An Egg-Cellent Simulation

The scenario I used for a fun lesson with my 9th graders this week comes from a talk by James Bush from Waynesburg College, which I attended at the US Conference on Teaching Statistics in May.  James is a master of finding clips from TV and movies to use in his class to encourage discussion, and this clip from The Tonight Show Starring Jimmy Fallon features a game called “Egg Russian Roulette”. I have embedded a clip here, but you can search for the many times Jimmy has played this game on his show.

For James’ college statistics courses, this clip is a helpful opener to the hypergeometric distribution, where we are interested in multiple successes from draws done without replacement.  While this setting could eventually be presented to my AP Stats students, it lives a bit outside the scope of the AP course. But there are some strong entry points for discussion with my 9th graders, including probability trees, conditional probability, and simulation.

PLAYING THE GAME IN CLASS

Before showing the video, I called two volunteers up to participate in a mystery game.  The volunteers became a bit nervous over their decision when a carton of eggs was produced, eventually shown to be filled with plastic eggs (awesome idea by James!). My first chance to try this with volunteers on my own was at Twitter Math Camp in June, and lots of fun tweets followed.

Thanks to Richard Villanueva, who recorded many of the My Favorites from Twitter Math Camp, we have coverage of the gameplay.  Check out all of the videos of My Favorites from TMC15 on his YouTube Channel.

Next, I asked the class to think of questions they have about the game they saw, or in general about Egg Roulette.  A good starting list developed:

  • How likely is it that Tom Cruise would lose that quickly?
  • Once Tom picks a raw egg, how likely is it that Jimmy is safe on his draw?
  • Is it better to go first or second?

I then challenged my student groups to sketch out the first three rounds of Egg Roulette, and find the probability of Tom losing in 3 rounds. We had worked on trees the day before, and this game presented a good chance to apply what we had discussed earlier.
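If you’d like to check the students’ trees against an exact answer, here is a brute-force Python walk of the same probability tree. I’m assuming the usual setup of 12 eggs with 4 raw, that a “round” means one draw by each player, and that Tom draws first:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_tom_loses(raw, total, tom_raw, jimmy_raw, tom_turn, tom_draws_left):
    """Exact P(Tom collects his 2nd raw egg within his first 3 draws)."""
    if tom_raw == 2:
        return Fraction(1)   # Tom has lost
    if jimmy_raw == 2 or tom_draws_left == 0:
        return Fraction(0)   # Jimmy lost first, or Tom's 3 rounds are up
    p_raw = Fraction(raw, total)
    if tom_turn:
        return (p_raw * p_tom_loses(raw - 1, total - 1, tom_raw + 1,
                                    jimmy_raw, False, tom_draws_left - 1)
                + (1 - p_raw) * p_tom_loses(raw, total - 1, tom_raw,
                                            jimmy_raw, False, tom_draws_left - 1))
    return (p_raw * p_tom_loses(raw - 1, total - 1, tom_raw,
                                jimmy_raw + 1, True, tom_draws_left)
            + (1 - p_raw) * p_tom_loses(raw, total - 1, tom_raw,
                                        jimmy_raw, True, tom_draws_left))

prob = p_tom_loses(4, 12, 0, 0, True, 3)
print(prob, "=", round(float(prob), 4))
```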


SIMULATION

After our analysis of the first three rounds, the conversation then moved to strategy: is it better to go first or second, or does it not matter?  Our “gut reaction” poll revealed that “it doesn’t matter” was the most common response, with “go first” in second place.  The thought behind going first is that you could easily draw a hard-boiled egg, and thus put pressure on the other player.

To simulate the game, pairs of students were given one suit from a deck of cards.  The ace was moved aside, leaving 12 cards (representing the 12 eggs).  The 10, jack, queen and king then represented raw eggs.  After shuffling the cards, students dealt the cards into two piles, Tom and Jimmy. When a player was dealt two raw eggs, the game ended and the result was recorded.  We were quickly able to simulate over 50 plays of Egg Roulette, and the class results were tallied.

Egg Simulation
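A computer version lets us push well past 50 plays. Here is a Python sketch of the same card scheme – 12 eggs, 4 raw, alternating draws, first player to two raw eggs loses:

```python
import random

def play():
    """Simulate one game; return 0 if the first player loses, 1 otherwise."""
    eggs = ["raw"] * 4 + ["boiled"] * 8
    random.shuffle(eggs)
    raw_counts = [0, 0]            # player 0 draws first
    for turn, egg in enumerate(eggs):
        player = turn % 2
        if egg == "raw":
            raw_counts[player] += 1
            if raw_counts[player] == 2:
                return player      # this player loses
    # Unreachable: with 4 raw eggs, someone always reaches two.

trials = 10_000
losses = [0, 0]
for _ in range(trials):
    losses[play()] += 1
print(f"first player loses:  {losses[0] / trials:.3f}")
print(f"second player loses: {losses[1] / trials:.3f}")
```

The long-run proportions echo what the class saw: the first player loses somewhat more often than the second.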

One student quickly identified, and then defended, that the game can NEVER go the full 12 rounds.  Also, some students noted that the first player (here, Tom) has one extra opportunity to lose the game.  A second straw poll revealed that student perceptions on the game had shifted – few thought it was a 50/50 game, and many saw that the first player held a disadvantage.

FOLLOW-UP

In teams, students are now challenged to explore a similar (yet shorter) Egg Roulette game, compute theoretical probabilities, conduct a simulation, and analyze the results.  I’m looking forward to some interesting-looking trees.  The document here shows guidelines for this activity, some assignment ideas, and a full tree for the first 3 rounds of Egg Roulette.

Download the document