I recalled reading about a potential golf-related task on Twitter. To be honest, I don’t recall whose exact post provided the inspiration here (note – I am thinking it was Robert Kaplinsky or John Stevens, but I may be wrong. If anyone locates a source, I’ll edit this and provide ample credit), but it felt like a game-related task could provide the strategy and fun elements which tend to be missed by drawing tasks.

**HOW THE GOLF CHALLENGE WORKS:**

The goal – write equations of lines which connect the “tee” to the “hole”. Use domain and/or range restrictions to connect your “shots”. Try to reach each hole in a minimal number of shots. Leaving the course (the green area) or hitting “water” is forbidden. All vertical or horizontal lines incur a one-stroke penalty.
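For example, a single “shot” entered in Desmos might look like the line below (the numbers are hypothetical – any line and restriction that keeps the ball on the green works):

```
y = 2x + 1 {0 <= x <= 3}
```

The restriction in braces limits the graph to the segment from (0, 1) to (3, 7), so the next shot can pick up where this one ends.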

On the day before the task, the class worked through a practice hole. Besides understanding the math task, there are also a few Desmos items for students to understand:

- Syntax for domain / range restrictions
- Placing items into folders
- Turning folders on/off

For the actual task, I shared a link to a Desmos file with 5 golf holes. I tried to build holes which increased in difficulty. In practice, the task took an entire class period (75 minutes), and students worked in pairs to discuss, plan, and complete the holes. All students then uploaded their graphs to Canvas for my review, and filled out a “scorecard” which included “par” for each hole. It became quite competitive and fun!

In the end, there is not too much I would change here. Perhaps add some more complex holes. I’d also like to provide an opportunity for students to design and share their own golf holes, and study the “engine” which built mine. I hope your class has fun with it! Please share your suggestions, questions and adaptations.

Challenge your students to list some things they notice and wonder about the graph, and visit the NYT August post to discover how teachers use WGOITG in their classrooms. Here are some ideas I have used before with my 9th graders:

- Have students work in pairs to write a title and lede (brief introduction) to accompany the graph.
- Ask tables to develop a short list of bullet-point facts which are supported by the graph, and share out on note cards.
- Have students consider how color, sizing, and scaling are used in effective ways to support the story (note how the size of the arrows plays a role in the graph shown here). This is a wonderful opportunity to think of statistics beyond traditional graphs and measures.

Invite your students to join in the moderated conversation, which drops on Thursday. Have your own favorite way to use WGOITG? Share it in the comments!


Comparing Data Sets and Summary Statistics

Regression Facts (Mean/mean point and slope)

Teaching the meaning of r-squared

“Release the Hounds” – my first attempt at random sampling

Participate as a student, Steal and Share

“Backpack Weights” – thinking about scatterplots (AP Stats)

Each free-response solution begins with the “model solution” – the ideal explanation a student would provide for full question credit. It is not unusual for Statistics students to struggle with clear communication, and having students read and dissect the model solution can be helpful in strengthening statistical arguments. A few times this year, I have used the model solution as a formative assessment tool with an activity I call “Assembling the Model Solution”.

Here’s how it works – start with an AP Free-Response question with a narrative aspect. Today, I chose a problem which requires students to interpret a P-value, from 2009:

The model solution contains a number of non-negotiable elements: a conditional probability, a reference to sample results, and the “extremeness” of results.
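Those three elements point to the conditional structure of a P-value interpretation, which has the general form below (the wording is my paraphrase, not the official scoring language):

```latex
\text{P-value} \;=\; P\big(\text{sample results at least as extreme as those observed} \;\big|\; H_0 \text{ is true}\big)
```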

Next, I took the model solution and broke it into small, strategic “bites”. At the same time, I added some parallel distractors and a junk phrase or two.

Then I used a paper cutter to slice the Word document into phrase strips, and paper-clipped them together. All students then received the problem and the slips of paper, with the challenge to assemble the model solution for part (a) of the problem.
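A digital version of the slicing-and-shuffling step could be sketched in a few lines of Python (the phrases below are placeholders I invented, not the actual 2009 scoring language):

```python
import random

# Placeholder phrases -- the real activity slices the AP model solution,
# parallel distractors, and junk phrases from a Word document.
model_bites = [
    "Assuming the two treatments are equally effective,",
    "the probability of observing sample results",
    "at least as extreme as ours",
    "is small.",
]
distractors = [
    "the population proportion equals the sample proportion.",  # parallel distractor
    "therefore the treatment works.",                           # junk phrase
]

strips = model_bites + distractors
random.shuffle(strips)          # each team receives the slips in a random order
for strip in strips:
    print(strip)
```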

The conversations were rich, and the teams mostly debated the salient aspects of the problem appropriately. The biggest points of debate and incorrect solutions came from:

- The difference between “sample” and “population” proportions.
- The assumption of sameness in the treatments as the conditional aspect of the P-value.

I have used this strategy a few times now, and continue to tweak how I provide the slips of paper. I’m also looking at digital options, but I like the social aspect of moving the slips of paper. The method is not ideal for everything in AP Stats, but there are a few areas in our curriculum where this fits in nicely:

- Sampling and experimental design
- Conclusions for inference procedures
- Describing distributions.

You can download my file for this activity here. Enjoy!

- Credit to Jon Osters and the AP Stats glitterati who rightfully pointed out that my original post spelled “Yay!” incorrectly.

- Collect data using Shapesplosion – an online game (think the old Perfection Game) developed by folks from Grinnell College. The plan was to play with, and without color. Aside: it’s OK if you disappear for a while to play with this site, it’s super-fun!
- Share data using the collaboration space on OneNote.
- Use the artofstat.com web apps to make graphs and produce statistical summaries.

This is what I had in mind…. Here’s what really happened:

- Shapesplosion didn’t work – while I rehearsed the site on my laptop, it didn’t work for the kids. It was a Flash issue, and stopping to figure this out wasn’t in the cards. After a few minutes of hemming and hawing, I settled upon a far less fun data collection idea: Tell me a temperature you deem “cold” when you go outside, and one you deem “hot”. Not nearly as sexy as the time data I wanted…but hey, I needed a data set. But at least we have data until…..
- ArtOfStat was glitchy and wasn’t playing nice with copy/paste from OneNote. Kids are getting restless, we haven’t done much stats review, and I am definitely starting to lose my “big” class.

So, what do you do when a lesson goes south, your objective is slowly slipping away and the kids smell chum in the water?

**Remember:**

It’s not the kids’ fault when your plans go kaput. You may feel like some yelling is in order, but breathe, calm down, and be honest about what went wrong.

Student learning can’t be compromised because things go south. “There’s no time” is an easy out when we get rushed, but maintaining lesson fidelity is far more important than rushing to get to “stuff”.

Maintain clear expectations. Eventually all of my students were able to complete some review, though I had to alter my plan of attack. Stopping class and making sure we were all on the same page and understood the statistical expectations was necessary.

It won’t be the last time stuff goes wrong….roll with it…and laugh along with it.

OK….so most shared work problems suck. I apologize to my students aspiring to be pipe organ re-varnishers, but we can do so much better.

This week I used Cocoa Puffs, stopwatches and Desmos to bring some engagement to my rational expressions lessons. To start, each student was provided with a plate filled with 30 grams of Cocoa Puffs (including the plate). After my 3-2-1 countdown, students picked Puffs one at a time from the plate and tossed them onto an empty plate. As they completed the task, times were recorded for each student.

After students finished, I had them partner up and consider the question: “if you worked together with your partner on this task, with one plate of Cocoa Puffs, how long would it take you?”

Students asked a number of clarifying questions (yes, there is one plate; yes, you can pick them off the plate together), and partnerships developed a few ideas. We debated the validity of many of them:

- Many groups took the average of the two times, then divided the result by 2. This seemed reasonable to a number of groups, and led to a discussion of the validity of averaging rates.
- Some groups attempted to find a rate per gram. This was a good start, but given that groups did not know the mass of the plate (I use Chinet, so it’s bulky!), this introduced some guesswork.

To steer discussion, we focused on one student who took 80 seconds to complete the task. How much of the job did they complete after 40 seconds? After 20? Can we write a function which depends on time here? What does it mean? Crossing the bridge from the task time (80 seconds) to the job rate (1/80 per second) is a tricky transit. Using Desmos to show the “job” function lends some clarity.
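The rate reasoning above can be checked numerically. A minimal sketch (the 60-second partner time is a made-up value for illustration):

```python
def combined_time(t1, t2):
    """Time for two workers to finish one job together.

    Each worker completes 1/t of the job per second, so the combined
    rate is 1/t1 + 1/t2, and the shared job takes the reciprocal of
    that combined rate.
    """
    return 1 / (1 / t1 + 1 / t2)

# The 80-second student paired with a (hypothetical) 60-second partner:
print(round(combined_time(80, 60), 1))  # roughly 34.3 seconds
```

Note that this is well below the simple “average the times, divide by 2” guess of 35 seconds, which is one way to expose the flaw in averaging rates.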

From here, many partnerships felt more comfortable with establishing their own estimates. The next day, teams shared their work and estimates on OneNote, then peer-assessed the communication. Some of the work was wonderful, well-communicated, and served as a model for the class to emulate.

The next day, we listed our calculated shared work predictions on the board, and tested our estimates. Teams timed each other with cell phone stopwatches, and did not let participants see the clock until the task was complete.

Many groups were quite close to their calculated predictions! We discussed why our predictions didn’t quite meet the actual – bumping, variability in mass, general panic – and when error is acceptable. And now we have a firm background in rates and rational functions – time to conquer those pipe organs!


For my 9th grade class last year, I wanted an activity which would cause students to think about variability in data distributions, and introduce standard deviation as a useful measure of variability. You can preview the activity here. Take a few minutes, test drive it, and see if you can suss out the problems.

So, what went wrong? Well, a number of things – but here are the two primary suspects.

- It’s too damn long! It takes way too long to get to a working definition of standard deviation, and by screen 14 students are all over the place. Using student pacing could help remedy some of this, but I found much of the class losing interest by the time we got there.
- I was stubborn! I was looking for a “cute” visual way for students to think of standard deviation as “typical distance from the mean”. In my zeal to hammer this working definition home, I tried to build slick graphs which lost many students.
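The “typical distance from the mean” idea itself is sound; a small invented data set shows how it sits next to the formal standard deviation (which averages squared distances before taking a root, so the two measures are close but not identical):

```python
from math import sqrt

data = [2, 4, 4, 4, 5, 5, 7, 9]           # invented example scores
mean = sum(data) / len(data)               # 5.0

# "Typical distance": the mean absolute deviation from the mean
mad = sum(abs(x - mean) for x in data) / len(data)

# Standard deviation: root of the mean squared distance (population form)
sd = sqrt(sum((x - mean) ** 2 for x in data) / len(data))

print(mean, mad, sd)  # 5.0 1.5 2.0
```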

How to fix it – last year, the Desmos teaching faculty developed the “Desmos Guide to Building Great Digital Activities”. It’s worth a read (and a re-read) now and then to guide activity construction. In my variability activity, this bullet point from the guide resonated with me: **Keep expository screens short, focused, and connected to existing student thinking**. In many of the screens, I over-explained things. Students don’t want to read when they are completing a digital activity, they want to investigate, create, and explore. I robbed them of that chance.

Today I tried my new, rebuilt variability activity with 2 classes (slimmed down to 12 screens from 19), and there was a vast improvement in class engagement. There were more opportunities for students to express their ideas regarding comparisons of distributions, and we had plenty of time to pause, recap, discuss and think about next steps. A number of points from the Desmos Guide drove my thinking:

**Ask for informal analysis before formal analysis.** While I kept in the “typical distance” definition of standard deviation, it played only a small role. Students were able to conceptualize standard deviation as a useful measure, and now can move on to a formal definition. My old activity felt too “sledge-hammer-ish” and I knew it.

**Incorporate a variety of verbs and nouns**. I provided a number of ways for students to think about variability and distribution comparisons in the early screens, and strove to build different-looking screens. This kept the ideas fresh, and students talked with their partners to assess the differences in a variety of ways.

**Create activities that are easy to start and difficult to finish**. In the last 3 screens, I ask students to extend their thinking, be brave, and apply new ideas. For those students who got this far (most did), these screens elicited the loudest debates. We ran out of time at the end, but we have some good stuff to build off of tomorrow morning.

I’ve learned that it’s important to be honest about an activity. It’s easy to blame the students when something goes wrong, especially when a class heads away from learning and towards frustration. But performing an activity autopsy, focusing on clear goals, and keeping the design principles in mind helps move an activity forward.


**This week my school admin team got it right. And I feel fortunate to work with them.**

For about a year, my school’s admin team had kicked around the concept of a school-wide EdCamp. To be honest, I never thought it would see the light of day…there are just too many other things loaded into the calendar. So an invitation to work with a team of teachers to organize a high school-wide EdCamp was a true surprise…then the work began.

We planned 3 morning sessions, followed by lunch and prizes. But beyond the structure of the day, we had a lot of talking-up to do. Would our teachers, many of whom had never been to an EdCamp, understand the concept? Would people propose sessions? Could we engage the curmudgeons in our teaching ranks? At our opening faculty meeting, we showed a brief video to help teachers understand the EdCamp concept, then talked it up over the next 2 days.

The morning of the conference, many teachers suggested ideas, asked questions and thought about what they’d like to learn. In the end, we had a nice variety of topics and it felt like there was something for everyone!

As I walked around during the sessions, I was thrilled to see rooms filled with discussions, and teachers from different departments with an opportunity to engage. I know there is no possible way to reach everyone, but I hope it was a day of professional learning for most.

On my end, I offered a session “Activity Builder for the Non-Math Crowd” which seemed to be of use to those assembled. Then later, a session where we just did math – problems from Open Middle, Nrich, Visual Patterns and others let math teachers talk, learn, and think about engaging problems for their classrooms. You can download the problems I shared with this link.

Thanks to the fantastic people I work with for letting me be part of this: Baker, Dennis, Kristina and Melissa.

And a big thanks to the HH admin team: Dennis, Ralph, Tracey and JZ. I appreciate the opportunity, and promise not to complain again….until the next time…..


This week the catcher of that team, Darren Daulton, died after a battle with brain cancer. Newspapers have shared memories of “Dutch” and among the articles is one which reminds us of the surprising number of former Phillies who have passed away due to brain cancer (Tug McGraw, John Vukovich, and Johnny Oates). A revised 2013 article from the *Philadelphia Inquirer* analyzes the unusual number of Phillies who have developed brain cancer, and contains many appropriate entry points for Statistics courses. Some highlights from the article:

- A comparison of the observed effect to random chance – here a professor of epidemiology summarizes: “You can’t rule out the possibility that it’s random bad luck.”
- A summary of plausible variables which could lead to elevated levels of exposure, such as artificial turf (which may have contained lead) or anabolic steroids.
- An analysis of the increased rate of brain cancer among Phillies – here we are told that the Phillies’ rate is “about 3.1 times as high” as the national rate. A confidence interval, along with an interpretation and associated cautions, is also included.

Let’s explore that “3.1 times” statistic…time to break out the technology.

A few weeks back, I attended the BAPS (Beyond AP Statistics) workshop in Baltimore, as part of the Joint Statistical Meetings. Allan Rossman and Beth Chance shared ideas on using their applet collection to explore simulation (see my earlier post using the applets to Sample Stars) along with a “new” statistic we don’t often talk about in AP Stats – relative risk.

To start, I opened the Analyzing 2-Way Tables applet and used the “sample data” feature. Here I attempted to use the same numbers quoted in the article:

The national rate was 9.8 cases per 100,000 adult males per year, while the rate in the former Phillies was 30.1 cases per 100,000 – about 3.1 times as high.

There are two issues here: first, to perform a simulation we need counts, so numbers like 9.8 and 30.1 just don’t play nice. I’ll use 10 and 30. Also, I wasn’t surprised that the applet was not happy with my using a population of 100,000 for simulation. Here, I am going with 1,000 for convenience and to make the computer processor gods happy – we can debate the appropriateness of this down the road.

The applet will then simulate the random assignment of the 2,000 subjects here to the two treatment groups (group A: being a Phillie, group B: not being a Phillie). How likely is it that we will observe 30 or more “successes” (which here represent those who develop brain cancer) in one of the two groups? In the applet, we can see how the “successes” have been randomly assigned from their original spots in the 2-way table to new groupings.

At BAPS, Allan Rossman then explained how we can summarize these two groups using relative risk, which is listed under the “Statistic” menu on the applet. In general, relative risk is the proportion of success in one group divided by the proportion of success in a second group. If we have proportions in two groups which are equal, then the relative risk would be 1. We can then link to the newspaper article which claims a 3.1 “relative risk”, simulate many times with the applet (below we see the results of 10,001 simulations), and compare to the reported statistic.
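A rough re-creation of the applet’s simulation in Python: scatter the 40 total cases (30 + 10, per the rounded counts above) at random over 2,000 subjects, split into two groups of 1,000, and see how often one group carries at least 3 times the risk of the other. This is my own sketch, not the Rossman/Chance code:

```python
import random

def simulate_relative_risk(n_per_group=1000, total_cases=40,
                           reps=10000, seed=1):
    """Randomly re-assign all cases to two equal groups; estimate how
    often group A's risk is at least 3 times group B's."""
    rng = random.Random(seed)
    n_total = 2 * n_per_group
    extreme = 0
    for _ in range(reps):
        # Place the cases among the 2,000 subjects; those landing in the
        # first n_per_group slots belong to group A ("former Phillies").
        cases_a = sum(1 for s in rng.sample(range(n_total), total_cases)
                      if s < n_per_group)
        cases_b = total_cases - cases_a
        # With equal group sizes, relative risk reduces to cases_a / cases_b
        if cases_b > 0 and cases_a / cases_b >= 3:
            extreme += 1
    return extreme / reps

print(simulate_relative_risk())  # a small probability, well under 1%
```

The estimate will wobble a bit with the seed and number of repetitions, but it lands in the same neighborhood as the applet’s result.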

According to the simulation, we should only expect to see a relative risk of 3 or above about 0.08% of the time – clearly an “unusual” result.

But the article does not claim a significant difference, and cautioned against doing so as a number of assumptions were made which could alter conditions. This would be an opportunity to discuss some of these design assumptions and how they could change the outcome.

Rest in Peace Dutch!


Late in May, David Wees shared materials which challenge students to investigate the relationships between “standard form” and “completing the square” form (aside – does anyone agree on proper terms for these?) using area models to build representations. Given that I use area models often to introduce polynomial multiplication, I was eager to maintain consistency in the student understanding.

But before we dove into David’s lesson, I wanted students to revisit their understanding of area models. In this Desmos Activity Builder lesson I created, students shared their interpretations of area models and worked in pairs to investigate non-square models. In one of the final screens, students argued for the “correct” interpretation of a model.

Using the Desmos teacher dashboard, we could see clear visual arguments for both representations. This was valuable as we ended the lesson for the day, and tucked that nugget away for Monday, when we would begin to formalize these equivalencies.

After the weekend, students worked independently through David’s Completing the Square lesson. Not only did students quickly move through the area models and the dual representations, the debates between students to explain how to move from one representation to the other were loud and pervasive. I’m also loving how many of my students have started to use color as an effective tool in our OneNote note-taking (below).

At the end of the sheet, all students completed problems which translate standard form to vertex form with no support from me (“no fuss…no muss”). It dawned on me that something amazing had happened….my students had figured out completing the square without my ever talking about completing the square.
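For readers who want the algebra the area models encode, here is one worked instance (my own example, not taken from David’s sheet):

```latex
x^2 + 6x + 5 \;=\; \left(x^2 + 6x + 9\right) - 9 + 5 \;=\; (x + 3)^2 - 4
```

The added 9 is the small corner square that completes an $(x+3) \times (x+3)$ area model, and the result is vertex form with vertex $(-3, -4)$.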

Tomorrow we’ll tackle those pesky odd-number “b” terms, but my students own this already!
