Reflections from TMC14, Part 2 – Hinge Questions and Plickers

There’s so much goodness from Twitter Math Camp to share, and only so many blog posts! Today I want to share two great ideas for assessing student understanding during a unit.

HINGE QUESTIONS

In an afternoon session, my friend Nik Doran led a discussion of hinge questions: diagnostic questions intended not only to assess students’ understanding, but also carefully written so that each wrong answer points to a specific misconception. Many of Nik’s examples were multiple-choice, and I appreciated the subtle differences between the response choices, each designed to predict a particular student error. I also liked Nik’s suggestions for what to do with students after these questions: group them by error, differentiate instruction based on correctness, and/or have students defend their response choice.

On Nik’s blog, you will find a wealth of resources regarding these questions, including blog posts on hinge questions and formative assessment research from Dylan Wiliam.

Breaking into content-based teams, the group then attempted to build their own hinge questions. Here are my observations regarding the crafting of these formative experiences:

  • Keep it simple. One team attempted to build a question around solving a quadratic equation. The question required terms to be moved, then irrational solutions found. There were so many places where students could trip up that it was impossible to pigeonhole their responses into neat boxes.
  • The issue of non-response came up in discussion: how do you tell the difference between a student misconception and a guess? To me, this factor is minimized by limiting hinge questions to “big ideas”. We shouldn’t ask hinge questions on topics which have limited entry points; every student should have some basic understanding of the task at hand.
  • Use hinge questions in lessons where you have observed patterns of errors, and can easily describe them.

In my example below, I chose to tackle binomial probability, where the set-up of an expression has a number of possible “land mines”.

In a large batch of metal parts, it is anticipated that 15% of the parts will contain a defect. If a random sample of 20 parts is taken, what is the probability that exactly 4 of the parts will show a defect?

My multiple-choice selections are given below:

Choices

In A, students misinterpret the number of successes and failures in the problem, though the combination is correct.

In D, the required combination is incorrect, while the rest of the expression is correct.

In E, the number of successes and failures as exponents have been reversed.

Note that both B and C are correct, and I would be interested in having these two groups try to sell each other on the correctness of their response.

Do these responses provide a complete picture of ALL the mistakes students might potentially make? Probably not, but there should be enough information given for all students to make a reasonable attempt at the problem.
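
Out of curiosity, the correct setup is easy to check with technology. Here is a minimal sketch in Python (assuming the standard binomial model with n = 20 and p = 0.15); it gives the same value as binompdf(20, 0.15, 4) on the calculator:

    from math import comb

    n, p, k = 20, 0.15, 4

    # P(X = 4) = C(20, 4) * (0.15)^4 * (0.85)^16
    prob = comb(n, k) * p**k * (1 - p)**(n - k)

    print(round(prob, 4))   # about 0.1821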


PLICKERS

Nik’s session leads me to one of the cooler tech-related happenings of the week. Before a “My Favorites” session, I was handed a piece of paper with a strange design on it and told that I would be asked to participate later in the session. Wow! I was chosen! So cool! But what is this thing? It’s not quite a QR code, and it seems a number of other folks were given similar pieces of paper. I’m not special after all….

For this sharing session, the group was asked a multiple-choice question. Those of us with the cards were asked to hold them up high so that the letter of our response was on top (note the letters on each side of the figure). With 40 cards in the air, the presenter scanned the room with her cell phone and…..what!!!…our choices were recorded on screen! How cool! I admit I get pretty geeked out when I see a cool tool, but this was seriously impressive. These are Plickers, an app you can load, with free response cards you can download. Definitely looking forward to trying out this tool for formative assessments.


Reflections from TMC14, Part 1 – Steve Leinwand and NRICH

This past weekend, I had the pleasure of participating in Twitter Math Camp 2014, held in Jenks (Tulsa), Oklahoma. 150 math teachers from around the USA, Canada, and England, many of whom had previously shared ideas and personalities only via Twitter and blogs, met to share their ideas, successes, best practices, and favorite activities. Morning sessions focused on course- and task-specific study groups (I participated in the Statistics group). Afternoons started off with teachers sharing “My Favorites”, followed by a keynote (Steve Leinwand, Dan Meyer, and Eli Luberoff) and a menu of teacher-led sessions. Today is the first of a few recaps I’ll share of this jam-packed learning event.


Sly Stallone

There’s a crappy 80’s movie, “Over the Top”, which starred Sylvester Stallone as a professional arm-wrestler who eventually battles for custody of his son (yes…this was a pretty craptastic movie). In the movie, Sly motivates himself by turning his baseball cap to the side. This action triggers some arm-wrestling adrenaline receptors, a competitive “on” switch, and Sly is then prepared to kick butt (or…arm).

This is my best description of Steve Leinwand. A self-described “math education change agent”, Steve is a mild-mannered math expert…until you place him in front of an audience, at which point the Mathmazian Devil emerges! I have seen Steve talk in person twice now (do yourself a favor and check out his Ignite talk on YouTube), and his inspirational message leaves me in a constant reflective state over my classroom practices.

In this time of debates over Common Core, “fuzzy math”, dots and standard algorithms, it’s refreshing to hear a speaker attempt to tackle the question “what is math?”.  In his presentation, Steve offers up two options for defining mathematics:

A set of rules to be learned and memorized to find answers to exercises that have limited real world value.

OR

A set of competencies and understandings driven by sense-making and used to get solutions to problems that have real world value.

Clearly, the first definition is not correct, though I fear there are many who would find aspects of the definition acceptable.  I, and the room, gravitated towards the second definition, but is this a complete picture of mathematics?  I have 2 quarrels…

First, the phrase “problems that have real world value” bugged me quickly, conjuring images of contrived real world problems where kids factor expressions which never really occur naturally so they can find where a fake baseball which ignores some pretty important laws of physics might land.

Does “real world value” necessarily imply context? If a math problem provides insight into an abstract pattern, and that process later provides structure for tackling real-world scenarios, then by transference the problem had real-world value, so I have become OK with this aspect of Steve’s definition. But I’d like to move beyond the perception that mathematics only adds value if it can be attached to the real world.

Working backwards in Steve’s definition, we reach the phrase “used to get solutions”. Do all math problems have solutions? Is the primary goal of math to find a solution? Have we failed if we don’t find a solution? Some of the strongest formative mathematical experiences I have had centered around problems for which I never found a solution, or which did not have a unique solution. I prefer “used to analyze scenarios, either abstract or real-world.”

I appreciate that Steve challenges teachers to think about the many ways their students may approach similar problems, to seize opportunities to discuss methods, and to let students determine their optimal strategy. Many of the Common Core math debates focus on method: there is a strange “my way or the highway” attitude towards standard algorithms. It’s refreshing to have Steve champion alternate methods so passionately, and he offers his admiration for the 3rd Standard for Mathematical Practice:

Construct viable arguments and critique the reasoning of others.

The ability to analyze, critique, and assess method is equally as important as the math being done. All of us who talk to parents, colleagues and stakeholders need to remember this and do a better job at effectively communicating the message of what math is really all about.


The Enriching Mathematics site, NRICH, presented by Megan Schmidt in an afternoon session, provides problems with multiple entry points which lead to argument sharing. In the session, participants were presented with a Stage 3 and 4 problem from the site, where the goal is to find the value of the number marked with the question mark.

I chose to look at pairs of repeating symbols to craft my solution, while my tablemate dove into developing equations and forming systems. The most frustrating (but coolest AHA) moment for me came when Megan offered adding sums of rows and columns as an alternate, quite obvious, possibility. I am definitely looking forward to exploring these problems and sharing them with my classes.
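
To give a flavor of the equations-and-systems route my tablemate took, here is a rough sketch in Python built on a made-up miniature grid (three symbols and three row totals, not the actual NRICH puzzle): each symbol hides a number, each row total is the sum of its symbols, and the totals pin the values down.

    import numpy as np

    # Hypothetical 3x3 grid (NOT the NRICH problem), rows and totals:
    #   square   triangle  square   -> 13
    #   triangle circle    triangle ->  8
    #   square   circle    circle   ->  9
    # Unknowns: (square, triangle, circle)

    A = np.array([[2, 1, 0],   # 2 squares  + 1 triangle = 13
                  [0, 2, 1],   # 2 triangles + 1 circle  =  8
                  [1, 0, 2]])  # 1 square   + 2 circles  =  9
    totals = np.array([13, 8, 9])

    square, triangle, circle = np.linalg.solve(A, totals)
    print(square, triangle, circle)   # square = 5, triangle = 3, circle = 2

Megan’s row-and-column-sums idea is the same algebra in disguise: adding or subtracting well-chosen rows and columns knocks out symbols without ever writing the matrix down.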

Thanks to Steve for giving us all the inspiration to think differently about classroom practices, and to Megan for the perplexing hour of sharing!



AP Statistics “Best Practices” 2014

Last week, I arrived home after 8 days in Kansas City, where I participated in the AP Statistics Exam reading. It’s hard work, filled with long days of grading papers. But all the readers seem to take some sadistic delight in this work, and the professional connections made through the week are outstanding.

One of the highlights of the week is Best Practices Night, organized by my friend Adam Shrager. This year, 20 or so different folks presented 5-minute looks into their classrooms. Below are summaries of some of my personal favorites. You can check out all of the presentations on Jason Molesky’s StatsMonkey site.

GUMMI BEARS – KEVIN DiVIZIA

You’ll find that AP Stats teachers enjoy candy….too much so at times, my doctor tells me. Last year, Kevin shared his data collection activity with stomp rockets. This year, Kevin upped the ante with an activity where students launch Gummy Bears, Gummy Worms, and other candies using catapults. Which type of candy flies farthest? What can we say about the consistency of the launches? I’m looking to incorporate this into my 9th grade class as an introduction to variability and estimation.

Gummis

Kevin’s presentation on the StatsMonkey site is in Keynote format. I have converted it here to PowerPoint for us non-Keynote users.

MORBID MATH – BRIANNA KURTZ

Stats teachers have many data collection activities in their arsenal, but this idea from Brianna wins the prize for most off-beat concept. In this activity, students are asked to estimate life expectancy in a population. To collect data, the class uses something readily available every day: the obituaries. This presentation was one of the clear highlights of the evening, with many in attendance wondering what a class taught by the hysterically entertaining Brianna would be like! Visit StatsMonkey for her activity worksheet, and use the dead as data!

Z-PUZZLES – CHRISTINE WOZNIAK

Jigsaw puzzles make for great reviews in just about any math class.  Here, Christine shares puzzles she uses to review the Normal Distribution. Cut out the pieces, find the probabilities and solve the puzzle!  Template included.

SAMPLING USING BEADS – PAUL RODRIGUEZ

Paul is part of the AP Stats Test Development Committee, and always has great ideas for the Stats Classroom. At the reading, Paul shared his sampling activity, using Air Gun ammo of different colors (and slightly different sizes) to draw small samples from a large population. Using a paddle made from pegboard, random samples can be drawn, leading to a first discussion on inference. Paul promises to share the plans for building your own sampling paddle, so check back on StatsMonkey often!

UPDATE: Paul’s presentation has been uploaded to the StatsMonkey Site, along with plans for making your own sampling paddles.

STARBURSTS AND R-SQUARED – DOUG TYSON

I appreciate presentations where speakers attempt to untangle a tricky concept in math class. Having students move beyond a “canned” understanding of the coefficient of determination and towards a real understanding of predictive improvement based on an explanatory variable is a worthwhile lesson. In his activity, Doug Tyson challenges students to grab as many Starburst candies (see…I told you Stats folks like candy) as possible in their hand, then examines the predictive value of using hand size to estimate the number of grabbed candies. How much are our predictions improved by thinking about hand size, as opposed to thinking about the mean?
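
To make that comparison concrete, here is a rough sketch in Python using made-up hand-span and candy-count data (not Doug’s): predict with the mean alone, predict with the least-squares line, and look at the fraction of squared prediction error the line removes, which is exactly r-squared.

    import numpy as np

    # Hypothetical data (not Doug's): hand span in cm, Starbursts grabbed
    hand  = np.array([17.0, 18.5, 19.0, 20.5, 21.0, 22.5, 23.0])
    candy = np.array([22,   25,   24,   29,   31,   33,   36])

    # Prediction 1: ignore hand size and always predict the mean count
    sse_mean = np.sum((candy - candy.mean()) ** 2)

    # Prediction 2: least-squares line based on hand size
    slope, intercept = np.polyfit(hand, candy, 1)
    sse_line = np.sum((candy - (slope * hand + intercept)) ** 2)

    # r-squared = proportion of prediction error removed by using hand size
    r_squared = 1 - sse_line / sse_mean
    print(round(r_squared, 3))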

There’s so much more sharing goodness on the StatsMonkey site, including:

  • A review of Geddit, for formative assessment
  • A QR code scavenger hunt
  • Hershey Kisses and Confidence Intervals, which I used in my class this year

Soon, I will post more resources shared by Chris Franklin, who gave a brief history of stats education during her Professional Night presentation.