
Kirby Urner writes:
A link between programming and algebra is in this concept of variables. A polynomial in the 2nd degree is typically written as Ax^2 + Bx + C, which in Python is more like A*x**2 + B*x + C. The capital letters are called constant coefficients and they "fix" a polynomial, give it its characteristic "call letters" (like in radio: this is KQPD...). Then x is what varies -- you might stipulate over some domain, e.g. x in [-10,10] (square brackets means inclusive).
This point is very subtle. I'm afraid that a lot of adults still couldn't tell -- for example -- the difference between an expression like "x+3" and an equation like "y=x+3". Mathematicians tend to get very comfortable with higher-order functions, "function factories", operations on functions, and the like. Unfortunately, it seems that other people often don't.

If we see in a math book "Consider the equation y = Ax + B", there's already a good deal of sophistication: first, "y = Ax + B" is _not_ a general fact (or theorem) which is true without context or qualification. In this case it's a hypothesis, or, some would say, a statement of a subworld in which this _is_ an axiom or a theorem (one of many possible such subworlds contained as logical possibilities within our own world). (This might be confusing because _some_ formulas, like those in physics or like the quadratic formula, _are_ general -- and are something to memorize and to apply in a whole class of situations. Here, by contrast, the equation is not a great truth about nature, but just an assumption which we make temporarily to see what would follow from it.)

But also, A and B were not specified explicitly. So we're told that "A and B are numbers" and "x and y are numbers" -- but they have different roles with respect to the situation. A, B, x, and y are _all_ unspecified, but A and B are "constants" and x and y are "variables"; yet later on we may make a further hypothesis: "Suppose x is 3 -- then what happens?" (One kind of answer is "Then y will be 3A + B".)

Isn't it funny, from the point of view of a beginning algebra student, that a "constant", which is supposed to represent a _particular number_, is still often given in symbolic form, and we may never learn what the constant actually stood for? That leads to yet another conceptual problem: _do_ variables always actually "stand for" some particular quantity?
At the very beginning of algebra, as I experienced it, the answer was an unqualified "yes": each letter is just a sort of "alias" or "code name" for a _particular quantity_, and it is our job to find that particular quantity and so "solve the problem". This completely glosses over the possibility of underdetermined equations, which simply describe relationships between quantities (some mathematicians like to think of functions, or mappings between sets). If we say y=f(x) -- without giving other simultaneous equations -- we have an underdetermined system, and there is no longer a meaningful question of "What is x?". x is not anything in particular; x is really a signifier for the entire domain of a function.

But, interestingly, if we add more equations to a system, it may become determined. In that case, we are asking "What are the particular values of the variables x, y, z for which all of these things could be true at the same time?" or "If all of these constraints were applied at once, what possibilities would be left?" or "What is the intersection of the geometric objects corresponding to the loci of points satisfying each of these relations?".

In the geometric interpretation, "y = Ax^2 + Bx + C" is actually an equation representing a shape in five-dimensional space, one equation in five unknowns. When we study a particular quadratic, we are intersecting that shape with the shapes given by three other equations: A = [some given constant], B = [some given constant], and C = [some given constant]. Then, by substitution, we could turn this, if we choose, into one equation in two unknowns (the familiar quadratic). But some people would prefer to say that all five dimensions are still there -- we are just looking at a particular region, where some other conditions specific to our problem happen to obtain.
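A toy sketch of that idea (my own illustration, not from Kirby's post): the single relation y = x + 3 admits a whole family of (x, y) pairs, and adding a second constraint intersects the two loci down to one point.

```python
# Hypothetical sketch: y = x + 3 alone is underdetermined -- every x in
# the domain yields a valid (x, y) pair.  Adding the constraint y = 2*x
# intersects the two loci and leaves exactly one solution.
pairs = [(x, x + 3) for x in range(-10, 11)]       # all satisfy y = x + 3
both = [(x, y) for (x, y) in pairs if y == 2 * x]  # also satisfy y = 2*x
print(both)  # [(3, 6)]
```

Here "solving the system" is literally computing an intersection of sets, which is the geometric picture in miniature.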
The geometric interpretation of any equation (or inequality or other kind of relation) as expressing something about a subset of a space with a huge number of dimensions (one per variable) is something that shows up in explicit detail in linear algebra, but before that is only hinted at. Some Algebra II textbooks mention a bit about "so many equations in so many unknowns", and substitution, and maybe even determinants, but there isn't the strong geometric sense of "there are an infinite number of dimensions of space, and by writing mathematical relations we choose to focus our attention on intersections or disjunctions or whatever of subsets of that space". And I don't know whether the multidimensional spatial metaphor is helpful or harmful in 7th grade; if people have read E. A. Abbott, it will at least be _exciting_ to them. Once I studied Scheme and lambda and environments in _SICP_, I felt much more comfortable about all of this. Here programming can help a great deal, I think. But I wonder how many algebra students can't really see what's going on and what the actual roles of those letters are.
At the command line, 8th graders would have one kind of function called a "polynomial factory" that turned out polynomials with a specific set of coefficients. These would then be floated as functions in their own right, ready to take in x domain values and spit out f(x) range values.
There may be a better way to write the factory function than I've shown below. I'd like to see other solutions:
def makepoly(A,B,C):
    """ Build a polynomial function from coefficients """
    return eval("lambda x: %s*x**2 + %s*x + %s" % (A,B,C))
If Python's variable scope rules didn't prevent it, return lambda x: A*x**2 + B*x + C would be much easier to read, because it would avoid the format string stuff, which is _not_ so intuitive unless you're already a C programmer. Earlier today I wrote "%02x" without flinching, so "%s" is second nature. But if you're trying to get students to understand how the lambda is working, "%s" and the tuple may add a lot of confusion which it would be nice to be able to avoid.
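For reference, a sketch of that direct closure version -- it does work in Python releases with nested scopes (2.2 and later), where the lambda simply captures A, B, and C from the enclosing function:

```python
def makepoly(A, B, C):
    """ Build a polynomial function from coefficients """
    # A, B, C are captured by the closure -- no eval or format string needed
    return lambda x: A*x**2 + B*x + C

f = makepoly(2, 3, 4)
print(f(10))  # 234
```

This is the version I'd show students first, since nothing distracts from the "function factory" idea itself.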
>>> f = makepoly(2,3,4)    # pass coefficients as arguments
>>> f(10)                  # f is now a function of x
234
>>> 2*10**2 + 3*10 + 4     # check
234
>>> f(-10)
174
>>> [f(x) for x in range(-10,11)]  # remember, 2nd arg is non-inclusive
[174, 139, 108, 81, 58, 39, 24, 13, 6, 3, 4, 9, 18, 31, 48, 69, 94, 123, 156, 193, 234]
>>> g = makepoly(1,-2,-7)  # make a new polynomial
>>> g(5)
8
>>> g(f(5))                # composition of functions
4616
>>> f(g(5))                # f(g(x)) is not equal to g(f(x))
156
I still think that some of my old math teachers would be floored if they could see some of these applications.
Moving beyond 8th grade, we want students to understand what's meant by D(f(x)) at point x, i.e. dy/dx at x -- the derivative. Again, Python makes this easy in that we can write a generic derivative taker:
def deriv(f,x):
    """ Return approximate value of dy/dx at f(x) """
    h = .0001
    return (f(x+h)-f(x))/h
>>> deriv(f,2)   # f(x) = 2*x**2 + 3*x + 4
11.000200000026439
>>> deriv(g,2)   # g(x) = x**2 - 2*x - 7
2.0001000000036129
If you remember how to take the derivative of a polynomial, you'll know that f'(x) = 4*x + 3 and g'(x) = 2*x - 2 -- so these are pretty good approximations.
I would add an h parameter, with a default value:

def deriv(f,x,h=0.0001):
    return (f(x+h)-f(x))/h

Another possibility is to take the right-limit numerical derivative and the left-limit numerical derivative:

def left_deriv(f,x,h=0.0001):
    return (f(x)-f(x-h))/h

Contrasting the two shows (among other things) that the numerical methods are imprecise and that the kind of errors you get may depend on where you take your samples. (The magnitude of the error will depend on the magnitude of h, but the sign of the error will depend on which side you take the derivative on.)

I think I would have been happy to have been given some numerical methods before I knew calculus, and then to learn a precise way to do it. (If I had programs that say that the slope of something is 2.0000001, I would like to know how to prove that my programs aren't quite right, and that the real slope is 2.) Python is a good language for this; I have written concise Riemann sum and Monte Carlo programs which find the area under a circle, to illustrate for students how those techniques work. My Riemann sum program is 17 lines and my Monte Carlo program is 51 lines; I think they're readily readable for a non-programmer.

The current situation is definitely the reverse -- you get symbolic differentiation and integration first, and numerical methods on a computer later on. In fact, because I wasn't going into engineering, I never had the numerical methods course, and I still don't know about a lot of the efficient ways of doing numerical integration of differential equations and so on. I wonder whether "symbolic first, numerical later" comes from the historical lack of availability of computers, or whether it's intended to promote detailed understanding. I know a lot of math teachers were very unhappy with graphing calculators because of the extent to which their approximations can substitute for proof.
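One more variation worth showing students after the left and right versions is the two-sided (central) difference -- a sketch of my own, not from the original exchange. Because the left and right errors have opposite signs, averaging the two sides largely cancels them; for a quadratic the cancellation is essentially exact, up to floating point.

```python
def central_deriv(f, x, h=0.0001):
    """ Two-sided difference: the left- and right-side errors largely cancel """
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: 2*x**2 + 3*x + 4   # same f as above; exact slope at 2 is 11
print(central_deriv(f, 2))       # very close to 11, unlike the one-sided ~11.0002
```

Comparing all three estimators at the same point is itself a nice numerical-methods lesson in where error comes from.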
You would see algebra tests where a teacher would say "solve this system" or "solve this polynomial" and students would pull out graphing calculators and get an answer like "2.003" and write "x=2". So one obvious problem is that they weren't solving anything -- they were just typing the equations into the graphing calculator, and it would find intersections or roots by iterative approximation methods. Another problem is that students were seeing an answer that looked plausible and believing it without any proof. So if x "looks like about 2" (in the calculator), students might have little compunction about writing "x=2". But this could be wrong! It's easy to devise problems where a numerical solution is _close to_ an integer or a rational, so if you do an approximation you can easily be misled. Sometimes there is a symbolic method which would give a precise answer in symbolic form (and show that the guess from the approximation is actually incorrect).

On the other side, people who do work with applied mathematics nowadays often spend a lot of time writing programs to do numerical approximation -- numerical integration and differentiation, numerical approximate solution of linear systems, numerical approximation of roots of polynomials, and dynamic or statistical simulation. I met lots of experimental physicists who did simulation work and who weren't trying to find any kind of symbolic form for anything -- they wanted _numbers_! Crunch, crunch, crunch. So some people might say that, if a good deal of mathematics is done (as it is) with computer approximation and simulation, then getting people started early with experimenting with computer approximation and simulation is a good thing. The question is whether there is necessarily a loss in understanding of theory when people are taught these numerical techniques early on. (Who wants to do symbolic integration to figure out a definite integral when your TI calculator has FnInt?)
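A contrived example of the trap (my own, purely for illustration): an equation whose root sits close to, but not exactly at, an integer, so a short calculator display would suggest "x=2".

```python
import math

# x**2 = 4.000001 has the positive root sqrt(4.000001) -- close to 2,
# but not equal to it.  A display rounded to a few places hides this.
x = math.sqrt(4.000001)
print(x)           # 2.0000002...
print(abs(x - 2))  # small but nonzero
```

Working the same problem symbolically (x = sqrt(4.000001), exactly) is what shows the guessed integer answer to be wrong.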
Of course, you can program symbolic manipulation in a computer too. A very good exercise which some students at my old high school once got a lot of attention for attacking is symbolic differentiation -- given a string or a tree representation of a function of one variable, compute a string or a tree representation of its symbolic derivative. The representation just has to be _valid_; you don't have to simplify. This also leads to the interesting question of how you get a string converted into a tree or a tree converted into a string, which is something people are _definitely_ going to encounter in great detail in computer science. Finding the abstract structure of a formal expression, which is to say parsing it, just keeps coming up everywhere. I could have understood a lot about that in high school, if somebody had taught it.

-- 
Seth David Schoen <schoen@loyalty.org> | And do not say, I will study when I
Temp.  http://www.loyalty.org/~schoen/ | have leisure; for perhaps you will
down:  http://www.loyalty.org/  (CAF)  | not have leisure. -- Pirke Avot 2:5