Categories
Middle School Statistics

How to Quantify Opinions

A recent Twitter post by Math Coach Geoff (@emergentmath) from New York asked for help quantifying a poster that lists the pros and cons of using paper towels versus electric hand dryers.  The task reminded me of a quantitative data collection method I have used with Advanced Placement Statistics students.  After the AP exams in May, my students were required to develop and complete a project demonstrating some application of what they had learned during the year; the projects were then presented at a Stats Fair at the beginning of June.

Without fail, a few groups would propose some sort of taste test: cookies, ice cream, soda, coffee.  Ignoring the fact that taste tests always take 10 times longer to carry out than students think, the inherent problem with taste tests is that the data you get isn’t all that exciting:  “we did a taste test…39 people liked Coke, 45 people liked Pepsi”…yawn.  We can do some hypothesis tests for proportions, maybe break it up by gender, and that’s about it.

Using number lines, and having volunteers record their opinions by placing a dot on the lines, allows for more interesting data.  Consider a test between two colas, Fizz and Shanta.  Make a 10cm number line for each, and let each end represent the extreme opinions:

[Image: blank 10 cm number lines for Fizz and Shanta]

Invite volunteers to record their opinions by placing a mark anywhere on the number line:

[Image: number lines with volunteers’ marks]

Then, measure the distance from the left end in centimeters.  This distance represents the taste score for each cola.

[Image: measuring the distance from the left end to each mark]

Think about how much richer the data from this method is.  Not only do I have a record of which cola each volunteer prefers, but also a measure of the magnitude of the preference.  We could run a two-sample t-test of the means, a one-sample t-test on the paired differences, and even look at a scatterplot of the score pairs.
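As a rough sketch of what that analysis could look like, here is a short Python example.  Every score, the number of volunteers, and the variable names are made up for illustration; it assumes numpy, scipy, and matplotlib are available.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical taste scores (cm from the left end of a 10 cm line),
# one pair of scores per volunteer.
fizz   = np.array([6.2, 7.5, 3.1, 8.0, 5.4, 6.8, 4.9, 7.2])
shanta = np.array([5.0, 6.1, 4.2, 6.5, 5.9, 5.2, 4.4, 6.9])

# Two-sample t-test comparing the mean scores of the two colas.
ind = stats.ttest_ind(fizz, shanta)

# Paired view: one-sample t-test on the within-volunteer differences.
diffs = fizz - shanta
paired = stats.ttest_1samp(diffs, 0)

print(f"Two-sample t = {ind.statistic:.2f}, p = {ind.pvalue:.3f}")
print(f"Paired t     = {paired.statistic:.2f}, p = {paired.pvalue:.3f}")

# Scatterplot of the score pairs, one point per volunteer.
plt.scatter(fizz, shanta)
plt.xlabel("Fizz score (cm)")
plt.ylabel("Shanta score (cm)")
plt.title("Taste scores by volunteer")
plt.show()
```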

We can use a similar approach to address Geoff’s request.  In the picture he shared, the pros and cons of choosing between paper towels and electric hand dryers are listed:

[Image: poster listing pros and cons of paper towels vs. electric hand dryers]

The electric dryer seems to have many more pros than paper towels, but just how compelling are these pros?  Perhaps one of the cons outweighs all of the pros?  How can we measure the magnitude of each argument?  The number line method provides a means for us to quantify each of these items.  Create number lines from -10 to 10, attaching each to a statement from the poster.  Have students not only identify the pros and cons, but also judge how large a pro or con each feature is by placing a dot on the number line.

[Image: -10 to 10 number lines attached to the pro and con statements]

In a pre-algebra class, students can discuss how to combine all of the data points, which gives us a natural “in” for adding positive and negative integers: “What’s the total con score?”, “What’s the total pro score?”, and “What’s the overall score?”  Have students use the data and measurements they collect to defend their opinions, and use the natural opening to encourage writing in math class.
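As a minimal sketch of that bookkeeping, here is how the totals might be computed in Python; every statement label and score below is invented, not taken from Geoff’s poster.

```python
# Hypothetical class results: each statement's mean dot placement
# on its -10 (strong con) to +10 (strong pro) number line.
scores = {
    "dries hands quickly": 6.5,
    "no paper waste": 8.0,
    "loud noise": -4.5,
    "long wait in line": -7.0,
    "cheap to operate": 3.5,
}

pro_total = sum(v for v in scores.values() if v > 0)
con_total = sum(v for v in scores.values() if v < 0)
overall   = pro_total + con_total   # same as sum(scores.values())

print(f"Total pro score: {pro_total}")
print(f"Total con score: {con_total}")
print(f"Overall score:   {overall}")
```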

Categories
Statistics

Statistical Illiteracy, Education Policy, and Home Runs

Bob, they printed teacher salaries in the newspaper this week, and it said that the median salary for your school district is $65,000.  That’s great.  How long until you make $130,000?

Yep, that’s my mother.  A good cook, but a bit misguided statistically.  A recent CNN interview of Diane Ravitch reminded me of the need for statistical literacy, and the danger of inappropriate statistical conclusions.

It has been my intent to keep this blog non-political.  Of course, I have my opinions on educational issues at both the national and local levels, but my mission here has been to provide ideas and resources for my math teacher friends.  So now I will try to tiptoe through a recent CNN interview between Diane Ravitch, the former Assistant Secretary of Education, and CNN reporter Randi Kaye.  In a recent blog post, Diane expressed her concerns over the statistical misuse in the interview:

Randi Kaye asked me about NAEP scale scores, which was technically a very dumb question, and I was stunned.  She thinks that a scale score of 250 on a 500 point scale is a failing grade, but a scale score is not a grade at all.  It’s a trend line.  She asserted that the scale scores are a failing grade for the nation.

Fortunately, CNN chose to cut this portion from the aired interview.  But the statistical arguments are still worth exploring and learning from.  Consider how a parent would feel if their child were to score 600 on the math portion of the SAT.  What does this number represent, and how should it be applied?  On Diane Ravitch’s blog, this SAT example is used to illustrate the misuse of the NAEP statistic:

That is like saying that someone who scores a 600 on the SAT is a C student, because it is only 75% of 800. But that’s wrong.  The scale is a technical measure. It is not a grade, period.

And while the SAT analogy helps in explaining the misuse of scale scores, I feel it only tells part of the stats-abuse story.  Using a group average to reach a conclusion about every member of the population is simply inappropriate.  Finland is often cited as having top mean scores on international assessments such as PISA; does this imply that all of its schools are passing?  Of course not, and it would be foolish for its schools not to work to improve.  If, instead of country mean scores, we plotted individual student scores, what would we see?  I suspect we would see each country represented with sprinklings of students at the top, many students in the middle, and sprinklings toward the ends.  Perhaps we would find that the differences so often cited as evidence become nearly indistinguishable.
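To make that intuition concrete, here is a small simulation sketch in Python.  The group means, spread, and sample sizes are invented purely for illustration and are not drawn from any real assessment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical countries whose mean scores differ modestly
# but whose individual students vary a great deal.
country_a = rng.normal(loc=520, scale=90, size=5000)
country_b = rng.normal(loc=500, scale=90, size=5000)

print(f"Mean A: {country_a.mean():.1f}   Mean B: {country_b.mean():.1f}")

# What fraction of Country B's students outscore Country A's average student?
frac = np.mean(country_b > country_a.mean())
print(f"Share of B's students above A's mean: {frac:.0%}")
```

Even with a clear gap between the means, a large share of the “lower” country’s students score above the “higher” country’s average, which is exactly the point about individuals versus group summaries.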

Let’s move away from the debate and politics of education and toward a classroom lesson.  Consider Major League Baseball home runs.  The dot plot below shows the mean number of home runs hit by players on each National League team’s roster at the end of last year, made using TinkerPlots, the middle-school cousin of Fathom.

[Image: dot plot of mean home runs per player, by National League team]

The dot at the far right represents the Milwaukee Brewers, with the LA Dodgers just behind at 11.2.  What conclusions can be made about the Brewers, based on this data point?  How many of the conclusions below are appropriate, based on the means?

  • The Brewers are the best home-run hitting team in the National League.
  • The Brewers have all of the best home run hitters.
  • The Brewers score the most runs.
  • The Brewers are the best team in baseball.

What do we know about the individual home run hitters on the Brewers, or any other team?  Not much.  If we plotted all of their individual home runs, what should we expect to see?  Will half be above 12 and half below 12, or can something else occur?  The plot below shows all hitters’ home runs from last year, with the Dodgers and Braves pulled out to demonstrate individual team distributions.

[Image: individual home run totals for all hitters, with the Dodgers and Braves separated]

The Dodgers’ team average of 11.2 gives us a nice summary of the team’s performance, but how well does it describe individual performance?  Only 3 of the Dodger players hit near 11 home runs, while the others are distributed away from that number.  We also gain an appreciation for the right-skewness of the distribution, which the means did not reveal.  This is a vital lesson: while the mean provides one overall summary of a distribution, it is not the only summary we should consider, and it tells us little about individuals.
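Here is a rough Python sketch of how you might build both views (team means and individual dots) from raw player data.  The rosters and home run counts below are placeholders invented for illustration, not last season’s actual numbers.

```python
import matplotlib.pyplot as plt

# Hypothetical home run totals for two teams' hitters.
team_hrs = {
    "Dodgers": [0, 1, 2, 3, 4, 6, 8, 10, 11, 12, 15, 22, 30, 33],
    "Braves":  [0, 0, 1, 2, 3, 5, 7, 9, 10, 13, 18, 20, 25, 28],
}

# Team means: one summary number per team.
means = {team: sum(hrs) / len(hrs) for team, hrs in team_hrs.items()}
print(means)

# Individual distributions: one dot per player, with the same x-axis
# scale for every team so the comparison is fair.
fig, axes = plt.subplots(len(team_hrs), 1, sharex=True)
for ax, (team, hrs) in zip(axes, team_hrs.items()):
    ax.plot(hrs, [1] * len(hrs), "o", alpha=0.6)
    ax.axvline(means[team], linestyle="--")   # mark the team mean
    ax.set_yticks([])
    ax.set_ylabel(team)
axes[-1].set_xlabel("Home runs per player")
plt.show()
```

Plotting the dashed team-mean line on top of the individual dots makes the skewness, and how little the mean says about any one hitter, immediately visible.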

Finally, we may also become impressed by high means in our first plot, as the scale of that plot shows clear “leaders”.  But what happens when we use similar scales for both means and individuals?

[Image: means and individual home run counts plotted on similar scales]

Are we as impressed with our means leaders as we once were?  There are certainly teams that are ahead, but by how much?  What are the similarities and differences between the distributions of means and individuals?

I hope this demonstrates the rich discussions you can have with your students regarding the application of statistics.  Get your students writing about statistics, sharing conclusions, and discussing ideas.  The statistical future of our CNN anchors, government officials, and mothers depends upon it!

Categories
Middle School

Putting Theory to the Test – The Classroom Experiment

One of the most intriguing educational videos I’ve come across in the last year is a two-part BBC series called “The Classroom Experiment” (2010).  In this series, a middle-school class sees educational traditions and protocols changed by Dylan Wiliam, an expert on formative assessment from the University of London.  As we consider the start of a new school year, I strongly suggest this series as an example of educational theory in practice, and a great way to become energized for your new school year resolutions.  Questioning traditions and establishing new ideas can be a tough fight, and I’m appreciative that this series shows the effects, warts and all, as the students and teachers struggle to adapt to Mr. Wiliam’s suggestions.

The series features characters and plot lines as intriguing as any soap opera.  We get to know the students, and their candor in expressing their feelings over the changes is refreshing.  The teachers embrace change, but we learn that hard work, honest reflection, teamwork and adaptation are all needed to produce growth.  Parents come away with a new-found appreciation of the rigor of today’s schools.  And the plot twist between episodes 1 and 2 had me yelling across my office to my coaching colleagues.

HOW TO WATCH “THE CLASSROOM EXPERIMENT”

Finding TV show episodes online can be a tricky proposition.  But if you go to our friend YouTube and search for “The Classroom Experiment episode 1”, you might just stumble upon the show.  Each episode is about 60 minutes in length.

Below are links and resources for a number of the classroom changes featured in the video:

“No more hands-up”:  choosing students randomly to respond in class.

  • Choosing Students to Answer Questions – from Suite101
  • Random Reporter – linked from PA Dept of Education
  • Random Name and Group Generator – from SMART Exchange

Colored Cups and Markerboards

  • Classroom Student Engagement Tips – from Edutopia
  • Whiteboarding in the Classroom – Dr. Dan MacIsaac, SUNY Buffalo
  • Classroom Anecdote on Colored Cups – from “Irrational Cube” blog
  • Make Your Own Mini-Whiteboards! – from theartofeducation.com

Feedback, not grades

  • Feedback on Learning – Dylan Wiliam video clip
  • Formative Assessment in Mathematics – Dylan Wiliam

Secret Student

  • Successes of the Secret Student – MissSMitch’s blog

Feel free to comment and share your favorite formative assessment resource!  Happy new (school) year!