There’s so much goodness from Twitter Math Camp to share, and only so many blog posts! Today I want to share two great ideas for assessing student understanding during a unit.
In an afternoon session, my friend Nik Doran led a session on hinge questions: questions diagnostic in nature, not only intended to assess students' understanding, but also carefully written to categorize students by their cognitive misconceptions. Many of Nik's examples were multiple-choice, and I appreciated the subtle differences between the responses, which predict specific student errors. In addition, I liked Nik's suggestions for how to work with students after these questions: grouping them by error, differentiating instruction based on correctness, and/or having students defend their response choice.
On Nik’s blog, you will find a wealth of resources regarding these questions, including blog posts on hinge questions and formative assessment research from Dylan Wiliam.
Breaking into content-based teams, the group then attempted to build their own hinge questions. Here are my observations regarding the crafting of these formative experiences:
- Keep it simple. One team attempted to build a question around solving a quadratic equation. The question required terms to be moved, then irrational solutions found. There were so many places where students could trip up that it was impossible to pigeon-hole student responses into neat boxes.
- The issue of non-response came up in discussion: how to tell the difference between a student misconception and a guess. To me, this factor is minimized by limiting hinge questions to "big ideas". We shouldn't ask hinge questions on topics which have limited entry points. Every student should have some basic understanding of the task at hand.
- Use hinge questions in lessons where you have observed patterns of errors, and can easily describe them.
In my example below, I chose to tackle binomial probability, where the set-up of an expression has a number of possible “land mines”.
In a large batch of metal parts, it is anticipated that 15% of the parts will contain a defect. If a random sample of 20 parts is taken, what is the probability that exactly 4 of the parts will show a defect?
My multiple-choice selections are given below:
In A, students will misinterpret the number of successes and failures in the problem, though the combination is correctly done.
In D, the required combination is incorrect, while the rest of the expression is correct.
In E, the number of successes and failures as exponents have been reversed.
Note that both B and C are correct, and I would be interested in having these two groups try to sell each other on the correctness of their response.
Do these responses provide a complete picture of ALL mistakes students will potentially make? Probably not, but there should be enough information given for all students to make a reasonable attempt at the problem.
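For readers who want to verify the arithmetic, here is a quick sketch of the standard binomial set-up implied by the problem: 20 trials, 4 successes, and a 15% chance of a defect on each trial. (The variable names are mine, not from the post.)

```python
# Check the binomial probability example: the chance of exactly 4
# defective parts in a sample of 20, with a 15% defect rate.
# Expression: C(20, 4) * (0.15)^4 * (0.85)^16
from math import comb

n, k, p = 20, 4, 0.15
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(prob, 4))  # roughly 0.1821
```

The same structure (combination times success probabilities times failure probabilities) is exactly where the "land mines" in the distractors live: swap the exponents, or botch the combination, and the expression quietly becomes a different answer choice.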
Nik’s session leads me to one of the cooler tech-related happenings of the week. Before a “My Favorites” session, I was handed a piece of paper with a strange design on it, and told that I would be asked to participate later in the session. Wow! I was chosen! So cool! But what is this thing? It’s not quite a QR code, and it seems a number of other folks were given similar pieces of paper. I’m not special after all….
For this sharing session, the group was asked a multiple-choice question. Those of us with the figures were asked to hold them up high so that the letter of our response was on top (note the letters on each side of the figure). With 40 cards in the air, the presenter scanned the room with her cell phone and…..what!!!…our choices were recorded on screen! How cool! I admit I get pretty geeked out when I see a cool tool, but this was seriously impressive. And these are Plickers, an app you can load, with free response cards you can download. Definitely looking forward to trying out this tool for formative assessments.