Thursday, November 12, 2015

Activity: The Parallel Tests (Day 54)

Today marks the transition from neutral to Euclidean geometry. We will be adding a parallel postulate tomorrow, which we will use to prove the Parallel Consequences.

Then what exactly am I posting today? With so many worksheets involving proofs this week, today I want to take a break from proofs and post some sort of activity. Usually I post an activity on Friday -- or, if not Friday, then Monday. In other words, I post the activity on a day either preceding or following a day off. Well, all four days during this strange week immediately precede or follow a day off, so I could post an activity any day this week.

This is an activity from last year. This is what I wrote last year regarding the activity, including the reasons why I came up with it:

As we already know, many students confuse the concepts of "corresponding angles," "alternate interior angles," and "same-side interior angles." Indeed, this is one reason I began with the Two Perpendiculars Theorem as our first Parallel Test -- determining whether two lines are perpendicular doesn't require a student to know the difference between corresponding and alternate interior angles.

But now we really want students to know the difference. And so there are several ways to make an activity out of learning the difference.

We begin with a diagram of two lines cut by a transversal, forming eight angles, just as in the diagrams of the previous lesson. But now we have students choose numbers from 1-8 at random and then name the types of angles given by the numbers.

Let's assume the diagram from the previous lesson. If one of the chosen angles is 3, and the other is...

-- 1, then we have a linear pair.
-- 2, then we have vertical angles.
-- 4, then we have a linear pair.
-- 5, then we have same-side interior angles.
-- 6, then we have alternate interior angles.
-- 7, then we have corresponding angles.

But what about 8 -- what types of angles are 3 and 8? As it turns out, these angles have no standard given name in geometry.

Notice that angles 1 and 8 are occasionally called "alternate exterior angles." The U of Chicago text mentions this in Section 5-6, Question 10. Similarly, we note that pairs such as angles 1 and 7 could be called "same-side exterior angles."
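Under one concrete labeling consistent with all the examples above -- angles 1-4 at the upper intersection (1 upper-left, 2 upper-right, 3 lower-left, 4 lower-right) and 5-8 in the matching positions at the lower intersection -- the naming rules reduce to a short lookup. This is only a sketch; the actual U of Chicago diagram may number the angles differently.

```python
# Each angle is described by (intersection, side of transversal, interior/exterior).
# NOTE: this labeling is an assumption chosen to match the examples in the post.
ANGLES = {
    1: ("top", "left", "ext"),    2: ("top", "right", "ext"),
    3: ("top", "left", "int"),    4: ("top", "right", "int"),
    5: ("bottom", "left", "int"), 6: ("bottom", "right", "int"),
    7: ("bottom", "left", "ext"), 8: ("bottom", "right", "ext"),
}

def pair_type(a, b):
    """Name the pair formed by two of the eight numbered angles."""
    (i1, s1, p1), (i2, s2, p2) = ANGLES[a], ANGLES[b]
    if i1 == i2:  # same intersection: vertical only if BOTH side and position flip
        return "vertical angles" if (s1 != s2 and p1 != p2) else "linear pair"
    if p1 == p2:  # different intersections, both interior or both exterior
        kind = "interior" if p1 == "int" else "exterior"
        side = "same-side" if s1 == s2 else "alternate"
        return f"{side} {kind} angles"
    # different intersections, one interior and one exterior
    return "corresponding angles" if s1 == s2 else "no standard name"
```

For example, `pair_type(3, 6)` returns `"alternate interior angles"`, while `pair_type(3, 8)` returns `"no standard name"`.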

But no one would even care about alternate interior angles at all, except for the fact that we have Parallel Tests (and Consequences) that mention them. So we can extend the activity so that it refers to the Parallel Tests. In particular, we have the students choose two angles at random, and we ask the question, if these two angles had the same measure, would it necessarily make the lines parallel? If one of the chosen angles is 3, then choosing angles 6 or 7 as the other angle would produce a win. But we don't win if we choose angles 3 and 5 -- same-side interior angles need to be supplementary, not equal, to produce parallel lines. Neither do 3 and 2 produce a winning combination -- these vertical angles are of equal measure regardless of whether the lines are parallel. But 1 and 8 -- alternate exterior angles -- are a winning combination.

A more advanced game would be to make pairs that, if supplementary, would make the lines parallel be the winning combinations. This is tricky because, as it turns out, the pair with no name, 3 and 8, is a winning combination. Students might try proving this if this is disputed.
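The winning combinations in both versions of the game can be tabulated outright. This is a sketch under an assumed labeling (angles 1-4 at the upper intersection, 5-8 in matching positions at the lower) chosen to fit the examples above; notice that the unnamed pairs such as 3 and 8 do win the supplementary game, since angle 8 always has the same measure as its vertical angle 5, which reduces 3-and-8 to the same-side interior test.

```python
# Pairs whose EQUAL measures force the lines to be parallel.
# (Labeling assumed: 1-4 at the upper intersection, 5-8 below.)
CONGRUENT_WINNERS = {
    frozenset(p) for p in [(1, 5), (2, 6), (3, 7), (4, 8),   # corresponding
                           (3, 6), (4, 5),                   # alternate interior
                           (1, 8), (2, 7)]                   # alternate exterior
}

# Pairs whose SUPPLEMENTARY measures force the lines to be parallel.
SUPPLEMENTARY_WINNERS = {
    frozenset(p) for p in [(3, 5), (4, 6),                   # same-side interior
                           (1, 7), (2, 8),                   # same-side exterior
                           (3, 8), (4, 7), (1, 6), (2, 5)]   # the "no name" pairs
}

def wins(a, b, rule):
    """Does drawing angles a and b win under the given rule
    ("congruent" or "supplementary")?"""
    table = CONGRUENT_WINNERS if rule == "congruent" else SUPPLEMENTARY_WINNERS
    return frozenset((a, b)) in table
```

So `wins(3, 7, "congruent")` is a win, but `wins(3, 5, "congruent")` is not -- exactly the distinction the activity is meant to drill.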

Truly advanced students might try a game where a student chooses three angles rather than two, assumes that all three have equal measure, then earns a point for each pair of lines that can be proved parallel. Some combinations, such as 1, 5, and 9, are two-pointers -- but these are rare.

Now, just as today -- a Thursday -- shouldn't really be an activity day, today's post shouldn't really have anything to do with traditionalists either. But again, I post about the traditionalists when articles appear on the web -- those either written by traditionalists or discussing some aspect of Common Core math that traditionalists are likely to criticize. Well, here's the article I want to discuss:

Notice who the authors of this article are -- Drs. Katharine Beals and Barry Garelick, two of the major traditionalists I regularly talk about on the blog. We see that the article is clearly criticizing Common Core math -- particularly the demand that students explain their answers:

At a middle school in California, the state testing in math was underway via the Smarter Balanced Assessment Consortium (SBAC) exam. A girl pointed to the problem on the computer screen and asked “What do I do?” The proctor read the instructions for the problem and told the student: “You need to explain how you got your answer.”

The girl threw her arms up in frustration and said, “Why can’t I just do the problem, enter the answer and be done with it?”

The answer to her question comes down to what the education establishment believes “understanding” to be, and how to measure it. K-12 mathematics instruction involves equal parts procedural skills and understanding. What “understanding” in mathematics means, however, has long been a topic of debate. One distinction popular with today’s math-reform advocates is between “knowing” and “doing.” A student, reformers argue, might be able to “do” a problem (i.e., solve it mathematically) without understanding the concepts behind the problem-solving procedure. Perhaps he or she has simply memorized the method without understanding it and is performing the steps by “rote.”

I see that this article refers to a middle school classroom. You already know that I agree with the traditionalists with regard to the early elementary grades, but I don't agree with everything they have to say about the middle grades.

Drs. Beals and Garelick mention "knowing" vs. "doing." Here's how I think about it -- suppose that the girl in the article encounters the problem she is described as solving, but in real life. That is, say she's actually in a store and sees a coat she wants to buy. There is a one-day sale where she can buy the coat today at 20% off, or $160. She doesn't have enough money to pay for it today, but tomorrow will be her payday. How much will the coat cost tomorrow at regular price? Or, perhaps more importantly, suppose this girl is someday the clerk in a store, and a customer asks for the regular price of the coat. My question is, will she be able to find the answer as easily in the store as she did when sitting in front of the SBAC computer?
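As a quick sanity check of the story's arithmetic: the sale price is 20% off, so it is 80% of the regular price, and the calculation the girl would need in the store is one line (the prices come from the story above):

```python
# 20% off means the sale price is 80% of the regular price.
sale_price = 160.00                    # today's one-day-sale price
regular_price = sale_price / 0.80      # undo the discount
print(regular_price)                   # 200.0
```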

The Common Core creators want students who can "do" the problem on the computer, but don't "know" what to do inside a real store, to receive a low score on the test. And so they decided to have students explain their answers. I would agree with this idea -- provided that the test is authentic enough to distinguish between "doers" and "knowers" in this sense.

An argument can be made that the PARCC and SBAC are not authentic enough, in that someone can "know" what to do in a real-life store, but still can't describe the answer well enough to get a passing score on the SBAC. In that case, the Common Core would be wrong and the traditionalists correct.

Back to the article:

For problems at this level, the amount of work required for explanation turns a straightforward problem into a long managerial task that is concerned more with pedagogy than with content. While drawing diagrams or pictures may help some students learn how to solve problems, for others it is unnecessary and tedious. As the above example shows, the explanations may not offer the “why” of a particular procedure.

I remember one day when I was student teaching an Algebra I class. The question I asked myself was: should I make the students use and draw algebra tiles when solving one-step equations? For one girl, these algebra tiles were especially helpful, but another found drawing the tiles to be a waste of time.

Math learning is a progression from concrete to abstract. The advantage to the abstract is that the various mathematical operations can be performed without the cumbersome attachments of concrete entities—entities like dollars, percentages, groupings of pencils. Once a particular word problem has been translated into a mathematical representation, the entirety of its mathematically relevant content is condensed onto abstract symbols, freeing working memory and unleashing the power of pure mathematics. That is, information and procedures that have become automatic free up working memory. With working memory less burdened, the student can focus on solving the problem at hand. Thus, requiring explanations beyond the mathematics itself distracts and diverts students away from the convenience and power of abstraction. Mandatory demonstrations of “mathematical understanding,” in other words, can impede the “doing” of actual mathematics.

The authors point out that all math learning progresses from the concrete to the abstract. This includes simple arithmetic such as addition -- we can concretely show that two fingers (or marks on a page) plus two fingers equals four fingers, but it's more abstract to show that 2 + 2 = 4. Traditionalists want students to be able to say that 2 + 2 equals 4 in one second or less, without using their fingers, so that they have more time to spend on more difficult problems.

So likewise algebra is an extra level of abstraction up from arithmetic. But whereas people see the abstraction from counting to arithmetic as wonderful, they see the abstraction from arithmetic to algebra as terrible. Arithmetic makes things easier to understand, while algebra makes things more difficult (for most people outside of the traditionalists) to understand. There's a reason for the old saying, "each equation cuts book sales in half." Readers don't see equations -- especially algebraic equations with variables -- and think, "Wow, this equation is an abstraction that makes the content easier to understand." They think that it makes things harder and so don't buy the book.

We see that there is such a thing as too abstract -- and for many people, algebra is too abstract. So this is why I don't agree with the traditionalists in the higher grades.

Beals, on her blog last week, implied that special ed students may constitute a class of students who "know" but can't "explain" their answer, and she and Garelick discuss such students in their article:

Most exemplary are children on the autism spectrum. As the autism researcher Tony Attwood has observed, mathematics has special appeal to individuals with autism: It is, often, the school subject that best matches their cognitive strengths. Indeed, writing about Asperger’s Syndrome (a high-functioning subtype of autism), Attwood in his 2007 book The Complete Guide to Asperger’s Syndrome notes that “the personalities of some of the great mathematicians include many of the characteristics of Asperger’s syndrome.”

And yet, Attwood added, many children on the autism spectrum, even those who are mathematically gifted, struggle when asked to explain their answers. “The child can provide the correct answer to a mathematical problem,” he observes, “but not easily translate into speech the mental processes used to solve the problem.” Back in 1944, Hans Asperger, the Austrian pediatrician who first studied the condition that now bears his name, famously cited one of his patients as saying that, “I can’t do this orally, only headily.”

The argument being made is that this patient who can do it "headily" really will be able to solve the problem in the real world, as in our store example.

Measuring understanding, or learning in general, isn’t easy. What testing does is measure “markers” or byproducts of learning and understanding. Explaining answers is but one possible marker.

That is, for such special ed students mentioned above, the PARCC and SBAC do not authentically tell us whether the student can solve real-world problems.

As Alfred North Whitehead famously put it about a century before the Common Core standards took hold:
It is a profoundly erroneous truism … that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.

I mentioned the mathematician-logician A.N. Whitehead -- along with Bertrand Russell -- around this time last year on the blog. These two mathematicians were the ones who came up with some of the rules of logic mentioned in Chapter 13 of the U of Chicago text.

It's ironic that Beals and Garelick would invoke the name of Whitehead to argue that students should learn to do math without thinking. After all, Whitehead and Russell once famously took 360 pages just to calculate that 1 + 1 = 2! Actually, what really happened was that the two mathematicians were working with extremely low-level axioms and postulates just to show how logic works. Nothing could be taken for granted, not even simple arithmetic. Obviously, no one -- not even the Common Core -- expects kindergartners to write 360 pages of explanation just to find 1 + 1 = 2.

Just as last year when I mentioned Whitehead and Russell, I link to the Metamath website to show some examples of their work. First, here's a proof that shortens the 360-page proof down to two lines:

Of course, here's what this two-line proof basically looks like:

Statements          Reasons
1. 2 = 1 + 1         1. Definition of 2 (meaning)
2. 1 + 1 = 2         2. Symmetric Property of Equality

The whole proof was actually an April Fool's joke -- of course 1 + 1 = 2 because this site can simply define 2 to be 1 + 1! But here's a link to Metamath's long proof of 2 + 2 = 4:

This proof is ten lines long. But two of the lines are "1 is a complex number" and "2 is a complex number," and it takes many pages to define the complex numbers.
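For comparison, a modern proof assistant can check both of these facts in a single line each, because arithmetic on natural-number literals computes directly. Here is a sketch in Lean 4 syntax (my own example, not Metamath's):

```lean
-- Both equalities hold by definitional computation on the natural numbers,
-- so the reflexivity proof `rfl` suffices.
example : 1 + 1 = 2 := rfl
example : 2 + 2 = 4 := rfl
```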

The takeaway from this article is that when it comes to 1 + 1 = 2 and 2 + 2 = 4, abstraction is great, but by the time we get to middle school math, we must be careful of too much abstraction.

At the time of this blog post, the article had drawn over 250 comments. Fortunately, Garelick responded to some of the comments. Let's look at some of these:

We had to show our work 50 years ago and we hated to do it just as much as kids today do. If this is somehow new when did they stop making kids show their work?

And here is Garelick's response:

The article includes an example of showing one's work which I consider to be an adequate explanation. If that's what you're saying, then we agree. But if you are saying they need the more formal structure and burden as also described in the article, then I disagree.
x = original cost of coat in dollars
100% – 20% = 80%
0.8x = $160
x = $200

Here's another Garelick response, comparing the U.S. to the Asian countries we want to emulate:

In the countries that out-compete us (Singapore, Korea, China) students are not required to explain their answers in the manner described in our article. In fact the article does make a point that showing one's work in a logical organized fashion suffices as an explanation.
As far as math procedures being taught without "understanding", that is a mischaracterization. Some students pick up on the understanding, some pick it up prior to the procedure, some after mastery of it, and some never. The article contains links to a study that shows that procedural fluency and understanding work in tandem.
Are you saying that the students in Mass have been required to explain their answers for years? I agree with you that it is a good idea to examine how students who do well in math and science in high school and pursue STEM majors and careers have been trained. I doubt it is because of being required to "explain their answers" in the manners we have described.

And of course we've mentioned that idea, proposed by some other traditionalists, before -- we start by surveying students who've done well in math -- say, by earning a 3 or better on an AP Calculus exam -- and find out what they did to get to that point. Then define the Common Core Standards to be the consensus response of the survey. Again, I'd agree with this approach provided that those who have no intention of pursuing a STEM path and can't earn a 3 in AP Calculus aren't labeled failures.

The next comment to which I'd like to respond wasn't written by Beals or Garelick, but the author states that he agrees with their ideas:

The comments on the Beals-Garelick piece are instructive for how they reflect the U.S.'s decades-long Math Wars: each side has its preferred methods, and they back their preference with lots of discussion about the methods themselves--their driving theories, what kids and teachers should be focusing on but aren't, etc., etc. Having watched and studied this argument closely over the last 20 or so years, and having seen it break out everywhere (from schools' faculty lounges to districts' central offices to media outlets to halls of congress), the rhetorical stances and weaponry employed here are familiar.
And, just as in the Math Wars Macrocosm this comment-exchange is reflecting, they're quite frustrating.
They're frustrating mainly because I don't ever see the NCTM-dipped, inquiry-based/constructivist side ever produce proof of where the methods they so prefer are working. (Or, rather, I never see them produce *accurate* proof. When such arguers invoke Common Core's construction, for instance, I wonder if they know how un-informed CC was by studies of the U.S.'s international math-instructional superiors; if it were, you can bet today's math instruction would look a lot like math as we knew it 50 years ago. Look it up.) Rather, they remain in the abstract, ideal, and methodological, hammering the same messages like they're self-evident and dismissing educators like Beals and Garelick as backward and/or behind-times.
With so much constructivist math being taught out there, however, and our national mathematical performance (according to several scales, that is--NAEP, TIMSS, PISA, state test data, ACT-/SAT-provided college-readiness indices, etc.) making so little progress forward, the question for me remains: where is it actually working as promised? True, I do come down on the Beals-Garelick side of the argument. Substantial research has secured that. When I first leaned this way, though, it was due primarily to the dearth of positive enterprise-wide evidence. For all the promise everyone kept spouting so certainly about these methods and ideals, I just kept seeing lots of struggling kids, frustrated teachers, and plummeting scores. School after school, district after district, data report after data report.
And, wouldn't you know, the commentary sparring here falls right in line. None of it ever says, 'Look at the results in DISTRICT using SHMINVESTIGATIONS [actual curriculum name withheld]: their ELL and Free-Reduced Lunch kids outperform the state's averages by 10%, and 85% of their ACT takers record Math scores two points above the college-ready benchmark!' Instead, it's all, 'My method is better than your method because THIS is how kids learn!' and on.
In short (I'm aware this hasn't been short, sorry), I just wish constructivist math proponents would produce concrete precedents or models to look at. As long as they've transformed--via Common Core, adopted curricula, teacher evaluation criteria, and so on--practical requirements in their preferred image, the least they could do now is show us places that have figured out how to make it work for kids. If they can't produce such a pattern (and I never have seen them produce such, by the way--hence my siding with Beals-Garelick), the conversation must change altogether.
[emphasis mine because I especially want to respond to that paragraph]

In the bolded paragraph above, the commenter Eric Kalenze writes that no one has ever shown that any progressive curriculum has produced concrete results, such as ACT scores going up. By contrast, he implies that the only curricula that have shown such positive results are traditionalist. And this is why he sides with the traditionalists Beals and Garelick.

But I don't fully side with Beals and Garelick, at least not for the higher grades. No, I don't have any evidence that any non-traditionalist curriculum will make teens' ACT scores go up. In fact, I suspect that Kalenze is probably correct. The things that traditionalists champion -- such as giving students long, challenging individual problem sets for homework -- most likely do result in higher mathematical achievement.

But here's the thing -- such problem sets raise students' test scores only if the students actually do all of the homework. Students may learn only a little math by doing progressive group assignments and projects, but learning a little math is more than the zero math learned by a student who rips up the individual homework the moment he or she steps out the door. Such a student typically has no desire to see his or her ACT math scores go up.

In fact, this is why I'm posting today's activity. "Corresponding angles" and "alternate interior angles" are concepts that may be too abstract for students. Today's activity shows concrete examples of these angles so that students can make this leap of abstraction.

I already mentioned how this activity has three levels. The first level uses only eight of the angles: students identify which pairs of congruent angles make the lines parallel. The second level also uses only eight of the angles: students identify which pairs of supplementary angles make the lines parallel. The third level uses all 16 angles: students identify when three congruent angles make lines parallel -- and which lines.

There can be a fourth level to this game. It uses all 16 angles: students are given that the lines are parallel, and they must decide whether a randomly chosen pair of angles is congruent or supplementary. This, of course, leads to the Parallel Consequences that are coming up soon.
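A sketch of the fourth level's answer key, restricted to the basic eight-angle diagram (the full level uses all 16 angles). The labeling is an assumption: angles 1-4 at the upper intersection, 5-8 in matching positions at the lower. Once the lines are parallel, every angle measures either t or 180 - t, and each of three choices -- which intersection, which side of the transversal, interior or exterior -- flips between the two.

```python
# (intersection, side of transversal, interior/exterior); labeling assumed.
ANGLES = {
    1: ("top", "left", "ext"),    2: ("top", "right", "ext"),
    3: ("top", "left", "int"),    4: ("top", "right", "int"),
    5: ("bottom", "left", "int"), 6: ("bottom", "right", "int"),
    7: ("bottom", "left", "ext"), 8: ("bottom", "right", "ext"),
}

def parity(n):
    """With the lines parallel: False if the angle measures t, True if 180 - t."""
    inter, side, status = ANGLES[n]
    return (inter == "bottom") ^ (side == "right") ^ (status == "ext")

def relation_if_parallel(a, b):
    """Given parallel lines, is the chosen pair congruent or supplementary?"""
    return "congruent" if parity(a) == parity(b) else "supplementary"
```

For instance, angles 3 and 7 (corresponding) come out congruent, while angles 3 and 5 (same-side interior) come out supplementary.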

Traditionalists will hate this activity because it isn't an individual problem set. I could have simply written a long problem set with lots and lots of lines cut by transversals. But students will learn more by doing this activity than they would by throwing away the homework.
