This is my review of:
Author: Philip Keller
Title: The New Math SAT Game Plan: The Strategic Way to Score Higher
ISBN: 098158960X
Format: paperback, 180 pages, 8.5x11
Vendor: http://www.amazon.com/The-New-Math-Game-Plan/dp/098158960X
I recommend this book to more-or-less everybody who teaches math or science at the high school level, and also to everybody at the college level who ever looks at SAT scores.
Keller knows his stuff. He is a seriously smart guy who has been teaching high-school physics for 26+ years. He has won various teaching awards. Note that you can’t teach physics (at any level) without also teaching a lot of math, even if that doesn’t show up on the job description anywhere. Perhaps more importantly, there is a positive, even joyful style that comes through and makes the book fun to read.
The first 25% of the book is available online, free for all:
http://www.satgameplan.com/gameplan123.pdf
You might get some value from reading that. Then ... you can judge for yourself whether you want to procure the rest of the book.
Nominally the book is about two big tricks and various smaller tricks that students can use to raise their SAT math-section scores by 100 points or more – but it’s also quite a bit more than that.
A better name for the book’s main trick might be systematic hypothesis testing. Fancier versions of the same idea are called the method of undetermined coefficients, or curve fitting.
This family of ideas has applications in real life, not limited to algebra, and certainly not limited to multiple-guess tests. I’ve used it within the past week. Sometimes it can be used to rigorously prove two mathematical expressions are identical. More commonly, it can be used to prove that two expressions are non-identical. It can do this quite cheaply.
Contrary to what you may have learned in connection with a grade-school science fair, a hypothesis is not a guess or a prediction. It is merely a scenario to be considered. A bedrock principle of critical reasoning is to consider all of the plausible scenarios. On a multiple-guess test, it is easy to be systematic; if testing shows that one of the possible answers is consistent with a representative sample of data, and the other N−1 are not, you know which answer to choose.
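Here is a minimal sketch of what that looks like in practice. The question, the answer choices, and the code are mine, made up for illustration; none of it comes from the book.

    # Systematic hypothesis testing on a made-up multiple-choice question:
    # "If 3n + 7 = 2n + 19, what is n?"  Each printed choice is a hypothesis;
    # we test every one of them against the stated condition.

    candidates = [5, 8, 10, 12, 26]        # the five printed answer choices

    def consistent(n):
        return 3 * n + 7 == 2 * n + 19     # the condition stated in the question

    survivors = [n for n in candidates if consistent(n)]
    print(survivors)                       # [12] -- the other four are ruled out

No equation-solving is involved; the work consists of evaluating the condition five times and keeping whatever survives.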
More generally, suppose you have a complicated mathematical expression with some number of unknowns. If the expression is supposed to be true for all values of the unknowns, testing may sometimes reveal that the expression is incorrect. This is a very inexpensive way of checking the expression. An even fancier situation arises when the expression is supposed to be true for some values of the unknowns, and the task is to find a solution. Sometime solving a special case provides a partial – or maybe even complete – solution to the general case. If you’re clever, this can be done very systematically. Every computer-algebra system uses this trick internally for doing indefinite integrals and summations.
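For the non-identity case, a sketch along the following lines shows how cheap the check is. The two expressions are chosen for illustration (a correct expansion versus a common mis-expansion); they are not taken from the book.

    # Cheap (non-)identity checking by evaluating both sides at sample points.
    def lhs(a, b):
        return (a + b) ** 2

    def rhs(a, b):
        return a ** 2 + b ** 2             # a common mis-expansion of the left side

    samples = [(1, 2), (3, -4), (7, 5)]    # a few representative test points
    for a, b in samples:
        if lhs(a, b) != rhs(a, b):
            print("counterexample: a=%d, b=%d" % (a, b))   # one mismatch settles it
            break
    else:
        print("agreed on every sample; identity is plausible, but not proved")

One counterexample rigorously proves non-identity. Agreement on every sample proves nothing by itself, but it is inexpensive evidence; and in the undetermined-coefficients setting, enough well-chosen sample points pin down the unknown coefficients exactly.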
It is not so easy to write multiple-choice questions that actually force students to use algebra. ETS rarely succeeds.
The subtitle of the book could equally well be a warning about the limits of such scores: if trickery can move a student’s score by a supposedly-very-significant 100 points or more, how can you infer anything from such a score?
The same goes for any multiple-guess tests that you design yourself. It’s really hard to design trick-resistant questions; it’s a Morton’s fork.
Part of the difficulty is that “algebra” means different things to different people.
As always, there is no point in arguing over terminology. The idea is what matters, no matter what you call it. The idea here is that students absolutely need to read the language of algebra; otherwise they won’t be able to apply the tricks in the book, and will not get anywhere with the test. On the other hand, it is quite true that the test questions that “look” like they require solving for x can almost always be solved in other ways. If we wanted to remove the ambiguity, the title of the second lesson should be something like “No Equation-Solving Please”.
This is not much of a problem, because the language of algebra is easier to learn (and easier to apply) than the equation-solving procedures. Indeed, much of the language is already part of human language and human thought, even among non-mathematical people, as we see for example in the famous XYZ Affair, where the letters stood in for the names of persons who were not otherwise identified.
To say the same thing another way: We must distinguish blind guessing versus guess-and-check. Blind guessing is bad. Systematic guess-and-check sometimes (but not always) pays off, especially in connection with multiple-guess questions.
Many of the behaviors that multiple-guess tests encourage on test day are the opposite of what we want students to learn.
Parts of the Game Plan book (e.g. page 107) can be considered an invitation and a recipe for equation-hunting ... but don’t blame the book. Blame the SAT folks for foisting on us a test that rewards equation-hunting.
Keller wisely advises students to SLOW DOWN and take a more playful approach, looking for insightful and/or devious methods of solution. That message would be a lot easier to get across if the test didn’t have such brutal time pressure, and such a high reward for almost-mindless trickery.
I imagine the SAT folks operate on the theory that a test that rewards insightful solutions is a good measure of intelligence. Alas, they chop the legs out from under that theory by using only a rather small repertoire of tricks. We see this on page 107 of the book: Using basic principles plus a tiny amount of reasoning, you could have figured out that the triangles are similar, but you can get the answer even quicker without reasoning, just by recognizing the patterns.
This sort of equation-hunting based on pattern-matching virtually never works in real life.
This is super-important, because there are some school districts that fixate on testing-testing-testing-testing, from kindergarten on up. This is not good. This is reeeeeally not good.