
## The Common Core and Simulation Models

The Common Core Standards provide an exciting opportunity for statistics education, with inference concepts starting informally in middle school and sampling distributions with inference moving into the high school mainstream. Under the “Probability and Statistics” strand, we find the following:

#### Make inferences and justify conclusions from sample surveys, experiments, and observational studies

CCSS.MATH.CONTENT.HSS.IC.B.4
Use data from a sample survey to estimate a population mean or proportion; develop a margin of error through the use of simulation models for random sampling.

But many of our high school teacher colleagues will need support to help their students think statistically.  AP Stats teachers, often the loneliest folks in their departments, will now need to share their expertise with non-stats-inclined colleagues.  Below are snapshots from a lesson on sampling and margin of error I used with my 9th grade classes, with many ideas adapted from my AP Stats teaching experiences.

STARTING WITH SAMPLING DISTRIBUTIONS

Students in this class had already been exposed to a probability unit, which included conditional probability, binomial settings, and the normal distribution.  After a discussion of sampling techniques, I wanted to conceptualize margin of error before diving into formulas.  Here’s what I did:

• The class was broken into two large groups
• Each side of the room was given the task of flipping virtual coins, using a graphing calculator
• For one side of the room, 25 coins were flipped.  For the other side, 100 coins were flipped.
• “Coins” were sorted and counted, and the proportion of heads was communicated to a team captain, who recorded the result on a group graph.

The results for the two groups are shown here.
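For classes without a set of graphing calculators, the same activity can be sketched in a few lines of Python (a simulation sketch with hypothetical helper names, not the calculator procedure used in class):

```python
import random

def sample_proportions(n_coins, n_samples, p=0.5):
    # Each "sample" is one group flipping n_coins virtual coins;
    # record the proportion of heads in each sample.
    return [sum(random.random() < p for _ in range(n_coins)) / n_coins
            for _ in range(n_samples)]

random.seed(1)  # reproducible classroom demo
side_a = sample_proportions(25, 50)   # the 25-coin side of the room
side_b = sample_proportions(100, 50)  # the 100-coin side

# Both lists of proportions center near 0.5,
# but the 25-coin side spreads out more.
print(min(side_a), max(side_a))
print(min(side_b), max(side_b))
```

Plotting each list as a dotplot reproduces the two group graphs from the activity.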

The class discussion then moves to a comparison of the graphs.  How are they similar?  How are they different?  Clearly, both graphs center about the true proportion (.5).  The shapes also seem similar.  But the variability seems to be different.  How likely is it to have 60% of coins show heads, if 25 coins are flipped?  Does this likelihood change with 100 coins?  We’re moving towards sampling distributions and inference topics, without all the scary-sounding language.
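Those likelihood questions can also be answered exactly with the binomial distribution the class had already seen.  A quick sketch:

```python
from math import comb

def prob_at_least(n, k, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of seeing at least 60% heads:
print(prob_at_least(25, 15))    # 15 or more heads out of 25
print(prob_at_least(100, 60))   # 60 or more heads out of 100
```

With 25 coins, 60% heads is fairly routine (better than a 1-in-5 chance); with 100 coins it becomes rare (under 3%).  That is exactly the difference in variability the two group graphs show.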

LINKING SAMPLING TO PUBLIC-OPINION POLLS

In the next part of the lesson, I wanted students to explore public-opinion polls, choose a topic of interest to them, and think about the associated language and structure. The site pollingreport.com is ideal for this: it collects and summarizes poll results from many sources, categorized by topic.

Student pairs were asked to find one poll of interest to them, record the source and the number of people surveyed, and record it on the back board.  We then looked for common threads:

• Sources: we found many repeats in the sources chosen (Rasmussen, Gallup, Washington Post, FoxNews) and discussed reliable sources of information.
• Sample sizes: most surveys had about 1,000 participants, with one poll using 2,000.

Can 1,000 people possibly allow us to represent a population?  How reliable can we expect a poll to be if just 1,000 people are surveyed?  Time to move on to the last stage of the lesson – simulation.
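One reason 1,000 keeps showing up: the familiar quick estimate of a 95% margin of error for a proportion, roughly 1/√n, already lands near 3% at that sample size.  A sketch:

```python
from math import sqrt

def quick_margin_of_error(n):
    # Rough 95% margin of error for a sample proportion: 1 / sqrt(n)
    return 1 / sqrt(n)

for n in (100, 1000, 2000):
    print(n, round(quick_margin_of_error(n), 3))
# 100 -> 0.1, 1000 -> 0.032, 2000 -> 0.022
```

Quadrupling the sample size only halves the margin of error, which is why pollsters rarely pay for samples much beyond 1,000.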

SAMPLING WITH STATKEY

The great free site StatKey allows us to look at sampling distributions easily and discuss our observations.  In my class, pairs had a netbook for exploring the site.  We started with the poll about college football above, where 47% of those surveyed registered “support”.  If 47% is the true proportion, and we survey 1,000 people, how close to 47% should we expect our survey to come?  Is 46% plausible?  How about 44%, or even as low as 40%?  We can set the population parameter and the sample size; then it’s time to draw some samples:

Most of the possible samples center about 47%, but here’s the follow-up for class:

A large majority of the possible samples seem to be within ___ % of 47%.

Certainly, we see that ALL of the surveys are within 10%, but can we narrow that window?  After some discussion, the class agreed that 3% was a reasonable window for capturing a large portion of the surveys.  How many of the samples here are within 3% of the “true” proportion of 47%?  StatKey can take care of that work for us:
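The same count StatKey performs can be checked with a quick simulation (a sketch, not StatKey’s internals):

```python
import random

random.seed(42)
p, n, trials = 0.47, 1000, 1000

# Each trial simulates one survey of n people with true support p
props = [sum(random.random() < p for _ in range(n)) / n for _ in range(trials)]

within_3 = sum(abs(phat - p) <= 0.03 for phat in props) / trials
print(within_3)  # a large majority of simulated surveys land within 3%
```

Theory agrees: the standard deviation of the sample proportion is about √(.47 × .53 / 1000) ≈ 0.016, so a 3% window is about 1.9 standard deviations wide, capturing roughly 94% of samples.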

As the unit progressed, StatKey was always up on the board to verify our margin of error calculations, and to provide a link back to the sampling distribution ideas.


## When Subtraction Problems Aren’t Really About Subtraction

There’s a subtraction problem making its way around the internet, supposedly authored by an electronics engineer / frustrated parent (but who knows who the real author is…), which rails against the “Common Core Mathematics approach” to subtraction:

Online debate with math folks I follow on Twitter has centered on this parent’s misunderstanding of Common Core: while the standards describe skills students should master, they do not prescribe methods for reaching those standards.  My friend Justin Aion has done a fantastic job of summarizing the disconnect on his blog.  Feel free to leave here for a few moments and check out his wonderful summary.  Meanwhile, Christopher Danielson has provided an annotated version, which lends some clarity to the intent of the given question.  His blog provides additional clarifying examples.  It’s another great read!

To this engineer / parent, I would ask: if time efficiency is your primary motivation, why bother even writing down the problem?  If it takes you 5 seconds with your traditional method, and I can perform the calculation in 2 seconds with a calculator, then you should be fired and replaced, since I can do the problem with a 60% efficiency increase.  But I suspect this parent doesn’t want to entertain this argument.

THE HIGH SCHOOL PERSPECTIVE

I’m a high school math teacher, so why should I care about the method students use to subtract numbers?  Once students learn how to subtract, they take a test to prove their learning, and we move on to the next idea.  It’s just that easy… or is it?  Actually, this debate matters quite a bit to us high school folks, as it speaks to the global issue here: What does it mean to teach and learn mathematics?  Which of these choices best describes what you want from your math students:

• I want students to master a series of skills, and apply them when needed.
• I want students to understand structures, and apply these understandings to increasingly complex structures.

I suspect that most non-educators, and perhaps many of my math teacher colleagues, feel that the first choice is just fine.  And this is the problem.  Math in our schools is often presented as a series of disconnected skills.  Once you master addition, you get to do subtraction, then you can do multiplication, then eventually fractions and decimals.  5th grade math is a step up from 4th grade math, then 6th grade.  Eventually, after we run out of those skills, you get to take algebra, which is a distinct experience from all previous maths.

It turns out that the skills students learn in elementary school, and their embedded understandings, have deep consequences when it’s time to consider algebra.  Students who have been exposed to methods which promote generalization, reflection on algorithms, and communication will find a transition to formal algebra a seamless experience.  Here’s an example:

Depending on your school’s or state’s algebra structure, a unit on rational expressions, and operations on them, often comes around the end of Algebra 1 or the start of Algebra 2.  A problem from this unit might look like this:



An activity I use at the start of this unit often allows me to identify students who understand structures, compared to those who have memorized a disconnected process.  Index cards are handed out, and students are asked to respond to the following prompt:

What does it mean to find a Least Common Denominator (LCD)?

I have given this prompt for many years, with students in all academic levels.  Without fail, the responses will fit into 3 categories:

• Students who use an example to demonstrate their mastery of finding LCDs (such as: if the denominators are 6 and 8, the LCD is 24) without justifying why their approach works.
• Students who attempt to describe a method for finding an LCD, often using appropriate terms like factors and products, but falling short of a complete definition.
• Students who don’t recall much about LCD’s at all.

Students are first exposed to least-common denominators at the start of middle school, perhaps earlier, when it is time to add fractions.  I suspect that many teachers support a similar approach here: make lists of multiples of each denominator, and search for the first common multiple.
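That list-of-multiples search is easy to mimic in code.  A sketch of the middle-school method (the function name and search cap are hypothetical):

```python
def lcd_by_lists(a, b, limit=1000):
    # Make a list of multiples of a, then walk the multiples of b
    # until one appears in the list -- the first common multiple.
    multiples_of_a = {a * i for i in range(1, limit + 1)}
    for i in range(1, limit + 1):
        if b * i in multiples_of_a:
            return b * i
    return None  # no common multiple found within the search limit

print(lcd_by_lists(6, 8))  # 24
```

Note the brute-force flavor: the method works, but it says nothing about *why* 24 is the answer.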

It’s an effective method.  And many of my students, even my honors kids, have been “good at math” by mimicking these methods.  But how many students can remove these training wheels and describe a method for finding an LCD given ANY denominators?  Lists are nice, but they aren’t always practical, and they certainly don’t provide an iron-clad definition.  A procedure which ties together an understanding of prime factors and their role becomes useful not only in middle school, but carries over to our study of rational expressions:

A least-common denominator is the product of the prime factors in each denominator, raised to the highest power with which the factor appears in ANY denominator.

I don’t suggest that teachers provide this definition on day 1, and have students struggle with its scary-looking language.  Rather, generalizations require development, discussion and reflection.  Certainly start with making lists, but eventually teachers and students need to analyze their work, and consider the underlying patterns.
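Once developed, the generalization can even be captured in code, which makes the role of the prime factors explicit.  A sketch, with hypothetical helper names:

```python
def prime_factors(n):
    # Return {prime: exponent} for n, by trial division
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def lcd(*denominators):
    # Each prime, raised to the highest power with which it
    # appears in ANY denominator
    highest = {}
    for den in denominators:
        for prime, exp in prime_factors(den).items():
            highest[prime] = max(highest.get(prime, 0), exp)
    result = 1
    for prime, exp in highest.items():
        result *= prime ** exp
    return result

print(lcd(6, 8))        # 2^3 * 3 = 24
print(lcd(12, 18, 30))  # 2^2 * 3^2 * 5 = 180
```

Unlike the list-making method, this procedure handles any number of denominators of any size, and the same structure transfers directly to polynomial denominators in a rational-expressions unit.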

Here’s your homework: provide your elementary-age student a subtraction problem.  Then ask them to defend why their method works.  It’s the understanding of structure which separates “skills in isolation” math from “big picture” math.  We need more big-picture thinking.


## This Week’s Required Reading for Algebra Teachers!

Mid-April, that time of year when teachers and students start to see the finish line of the school year.  Everyone feels the burdens…state testing, class distractions, covering all the “material”…teachers have a lot on their plates.  But it’s also a great time to reflect upon the past year, work in teams to consider best practice, and plan changes for next year.  Two intriguing blog posts by Grant Wiggins this week should be required reading for all secondary math teachers.

First, Grant Wiggins rants against courses we call Algebra 1.  What could be wrong with Algebra 1?  We all took it, we all agree kids “need” it, and isn’t it a proven gate-keeper to college success?

> Algebra, as we teach it, is a death march through endless disconnected technical tools and tips, out of context. It would be like signing up for carpentry and spending an entire year being taught all the tools that have ever existed in a toolbox, and being quizzed on their names – but without ever experiencing what you can craft with such tools or how to decide which tools to use when in the face of a design problem.

Amen, brother.  In algebra, we move from the unit on linear functions, to the unit on systems of equations, to the unit on exponents, then the unit on polynomials. At the end of each unit, we dutifully give the unit test, get some number score back, then move on to the next unit.  We have trained students to think this way: that algebra means mastering one skill, then the next.  How often do we provide rich tasks which allow students to reflect upon their cumulative skill set?  I appreciate the work of many math folks out there to change the nature of Algebra 1 from a rigid sequence of skills to a course which encourages application and reflection, driven by interesting, authentic problems.  Some examples of outstanding math educators working to promote inquiry in math class are listed at the end of this post.

For many special education students, chunking is a device used to “help” students in algebra.  By continued pounding of square pegs into round holes, using worksheets of similar problems (i.e. solving a one-variable equation, with variables on both sides), students can achieve temporary, recordable “success”.  The students most in need of seeing authentic problems are often those least likely to move past the chunking, and into authenticity.  Fortunately, to help sort out the madness, Grant Wiggins provided a second great article of required reading for math teachers this week:

Grant Wiggins on turning math classes into bits of disconnected microstandards.

What’s so harmful about taking a broad subject like Algebra and breaking it into pieces?  What is the consequence?

> Take a complex whole, divide into the simplest and most reductionist bits, string them together and call it a curriculum. Though well-intentioned, it leads to fractured, boring, and useless learning of superficial bits.

Hallelujah!  Make sure you check out Grant’s driver-ed analogy for the full effect.  More ammunition for us to develop math courses rich with interesting, relevant tasks, where algebra is the tool, not the star of the show.

Fortunately, there are many educators out there working to develop tasks which develop algebraic thinking, and encourage the use of algebra as the tool, rather than the exercise.  Keep them in your toolbox for future planning.

Dan Meyer: the king of perplexity.  If you aren’t visiting Dan’s blog at least semi-regularly, then start now.  And check out his spreadsheet of tasks for the math classroom.  In the same theme, visit Timon Piccini, and his many on-point 3-act tasks.

Sam Shah:  Sam leans more towards the pre-calc, calc end of the math spectrum, but I appreciate Sam’s constant self-reflection and great ideas for engaging kids in math discussions.

Kate Nowack:  sometimes task-oriented, sometimes ranting on policy, but always interesting.

NCTM’s reasoning and sense-making task library has a number of problems around which algebraic ideas can be wrapped.