Copyright © 2007 jsd
Here are some hints on how to do basic math calculations.
These suggestions are intended to make your life easier. Some of them may seem like extra work, but really they cause you less work in the long run.
This is a preliminary document, i.e. a work in progress.
If that’s too much bother, start with a plain piece of paper and sketch in faint guidelines when necessary.
Suppose the name is constructed according to a pattern, such as F_{rT} (an F with subscripts r and T), where F stands for force, the subscript r stands for rolling resistance, and the subscript T stands for truck. In the glossary, explain what each element means, so the reader doesn’t have to guess. Does T stand for truck? Or does it stand for trailer? Or something else entirely?
Sometimes the penalty for getting this wrong is 328 million dollars, as in the case of the Mars Climate Orbiter (reference 3).
Most computer languages do not automatically keep track of the units, so you will have to do it by hand, in the comments. If the calculation is nicely structured, it may suffice to have a legend somewhere, spelling out the units for each of the variables. If variables are re-used and/or converted from one set of units to another, you need more than just a legend; you will need comments (possibly quite a lot of comments) to indicate what units are being used at each point in the code.
One policy that is sometimes helpful (but sometimes risky) is to convert everything to SI units as soon as it is read in, even in fields where SI units are not customary. Then you can do the calculation in SI units and convert back to conventional units (if necessary) immediately before writing out the results. (This is problematic when writing an “intermediate” file. Should it be SI or customary? How do you know the difference between an “intermediate” result and a final result?)
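Here is a minimal sketch of that policy in code. The field names and the hypothetical truck record are my own illustration, not part of any particular program; the point is that only the two boundary functions ever mention customary units, and everything in between can assume SI:

# Minimal sketch of the "convert to SI at the boundaries" policy.
# The quantities and field names here are hypothetical, chosen only
# to illustrate the idea.

MPH_TO_M_PER_S = 0.44704      # exact: 1609.344 m / 3600 s
LBM_TO_KG     = 0.45359237    # exact by definition

def read_truck_record(raw):
    """Convert customary units to SI immediately on input."""
    return {
        "mass":  raw["weight_lbm"] * LBM_TO_KG,        # kg
        "speed": raw["speed_mph"]  * MPH_TO_M_PER_S,   # m/s
    }

def kinetic_energy(rec):
    """All internal arithmetic is done in SI units."""
    return 0.5 * rec["mass"] * rec["speed"]**2          # joules

def report(rec):
    """Convert back to customary units only at the output stage."""
    print(f"speed: {rec['speed'] / MPH_TO_M_PER_S:.1f} mph")
    print(f"kinetic energy: {kinetic_energy(rec):.0f} J")

report(read_truck_record({"weight_lbm": 40000, "speed_mph": 60}))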
It is certainly possible for computer programs to keep track of the units automatically. A nice example is reference 4. It is a shame that such features are not more widely available.
Here’s a classic example: The task is to add 198 plus 215. The easiest way to solve this problem in your head is to rearrange it as 215 + (200 − 2), which is 415 − 2, which is 413. The small point is that by rearranging it, a lot of carrying can be avoided.
One of the larger points is that it is important to have multiple methods of solution. This and about ten other important points are discussed in reference 5.
The classic “textbook” diagram of an inequality uses shading to distinguish one half-plane from the other. This is nice and attractive, and is particularly powerful when diagramming the relationship between two or more inequalities, as shown in figure 2.
However, when you are working with pencil and paper, shading a region is somewhere between horribly laborious and impossible.
It is much more practical to use hatching instead, as shown in figure 3.
Obviously the hatched depiction is not as beautiful as the shaded depiction, but it is good enough. It is vastly preferable on cost/benefit grounds, for most purposes.
Some refinements:
See reference 6.
See reference 7.
The modern numeral system is based on place value. As we understand it today, each numeral can be considered a polynomial in powers of b, where b is the base of the numeral system. For decimal numerals, b=10. As an example:
\[ 1234 \;=\; 1\cdot b^3 + 2\cdot b^2 + 3\cdot b^1 + 4\cdot b^0 \qquad (b = 10) \tag{1} \]
As discussed in reference 6, this allows us to understand long multiplication in terms of the more general rule for multiplying polynomials.
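Here is a small sketch of the same idea in code: the digit string is just a list of polynomial coefficients, and evaluating the polynomial at the base recovers the number. (The function name and the Horner-style loop are my choices, made for brevity.)

# A small sketch: evaluate a digit string as a polynomial in the base b.
# This is the same bookkeeping that underlies long multiplication.

def digits_to_number(digits, b=10):
    """Interpret a list of digits (most significant first) as
    sum(d * b**k), i.e. as a polynomial in b evaluated at the base."""
    total = 0
    for d in digits:
        total = total * b + d     # Horner's rule: one multiply-add per digit
    return total

print(digits_to_number([1, 2, 3, 4]))        # 1234 in base 10
print(digits_to_number([1, 0, 1, 1], b=2))   # 11 in base 2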
Given two expressions such as (a+b+c) and (x+y), each of which has one or more terms, the systematic way to multiply the expressions is to make a table, where the rows correspond to terms in the first expression, and the columns correspond to terms in the second expression:
\[
\begin{array}{c|cc}
  & x & y \\
\hline
a & ax & ay \\
b & bx & by \\
c & cx & cy
\end{array}
\tag{2}
\]
In the special case of multiplying a two-term expression by another two-term expression, the mnemonic FOIL applies. That stands for First, Outer, Inner, Last. As shown in figure 5, we start with the First contribution, i.e. we multiply the first terms from each of the factors. Then we add in the Outer contribution, i.e. the first term from the first factor times the last term from the last factor. Then we add in the Inner contribution, i.e. the last term from the first factor times the first term from the last factor. Finally we add in the Last contribution, i.e. we multiply the last terms from each of the factors.
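Here is a short sketch of the table method in code, representing each term as a (coefficient, name) pair; with two terms on each side, the four products are exactly the First, Outer, Inner, and Last contributions. (The representation is mine, chosen only to make the bookkeeping visible.)

# A sketch of the "table" method: multiply every term of the first
# expression by every term of the second and add up the contributions.

from itertools import product

def multiply_terms(first, second):
    """Each expression is a list of (coefficient, name) terms.
    Returns the list of pairwise products, one per table cell."""
    return [(ca * cb, na + nb) for (ca, na), (cb, nb) in product(first, second)]

# (a + b + c) * (x + y), with unit coefficients:
first  = [(1, "a"), (1, "b"), (1, "c")]
second = [(1, "x"), (1, "y")]
print(multiply_terms(first, second))
# [(1, 'ax'), (1, 'ay'), (1, 'bx'), (1, 'by'), (1, 'cx'), (1, 'cy')]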
If you can do long division, you can do square roots.
Most square roots are irrational, so they cannot be represented exactly in the decimal system. (Decimal numerals are, after all, rational numbers.) So the name of the game is to find a decimal representation that is a sufficiently-accurate approximation.
We start with the following idea: For any positive x, we know that x÷√x is equal to √x. Furthermore, if s₁ is greater than √x, then x/s₁ is less than √x. If we take the average of these two things, s₁ and x/s₁, the average is much closer to √x than either one. So we set
\[ s_{n+1} \;=\; \frac{s_n + x/s_n}{2} \tag{3} \]
and then iterate. The method is very powerful; the number of digits of precision doubles each time. It suffices to use a rough estimate for the starting point, s₁. In particular, if you are seeking the square root of an 8-digit number, choose some 4-digit number as the s₁-value.
This is a special case of a more general technique called Newton’s method, but if that doesn’t mean anything to you, don’t worry about it.
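Here is a sketch of the iteration in code, with the rough starting guess passed in explicitly, as suggested above. (The function name and the fixed number of passes are my choices.)

# A sketch of the divide-and-average iteration from equation 3.

def square_root(x, s, passes=6):
    """Approximate sqrt(x) for x > 0, starting from a rough guess s.
    The number of correct digits roughly doubles on each pass."""
    for _ in range(passes):
        s = (s + x / s) / 2    # equation 3: average s and x/s
    return s

print(square_root(50, 7))           # 7.0710678..., good to full precision
print(square_root(12345678, 3000))  # about 3513.6417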
Note that the square of 1.01 is very nearly 1.02. Similarly, the square of 1.02 is very nearly 1.04. Turning this around, we find the general rule that if x gets bigger by two percent, then √x gets bigger by one percent ... to a good approximation.
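To see why, square (1 + ε/2) and note that the ε² term is negligible when ε is small:

\[ \left(1 + \tfrac{\epsilon}{2}\right)^2 \;=\; 1 + \epsilon + \tfrac{\epsilon^2}{4} \;\approx\; 1 + \epsilon \]

So multiplying x by a factor of (1+ε) multiplies √x by a factor of roughly (1+ε/2).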
We can illustrate this idea by finding the square root of 50. Since 50 is 2% bigger than 49, the square root of 50 is 1% bigger than 7 ... namely 7.07. This is a reasonably good result, accurate to better than 0.02%.
If we double this result, we get 14.14, which is the square root of 200. That is hardly surprising, since we remember that the square root of 2 is 1.414, accurate to within roundoff error.
Sine and cosine are transcendental functions. Evaluating them will never be super-easy, but it can be done, with reasonably decent accuracy, with relatively little effort, without a calculator.
In particular:
The following facts serve to “anchor” our knowledge of the sine and cosine:
\[
\begin{array}{c|ccccc}
\theta & 0^\circ & 30^\circ & 45^\circ & 60^\circ & 90^\circ \\
\hline
\sin\theta & 0 & \tfrac{1}{2} & \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{3}}{2} & 1 \\[2pt]
\cos\theta & 1 & \tfrac{\sqrt{3}}{2} & \tfrac{\sqrt{2}}{2} & \tfrac{1}{2} & 0
\end{array}
\tag{4}
\]
Actually, that hardly counts as “remembering” because if you ever forget any part of equation 4 you should be able to regenerate it from scratch. The 0° and 90° values are trivial. The 30° value comes from a simple geometric construction. Then the 60° and 45° values are obtained via the Pythagorean theorem. The value for 45° should be particularly easy to remember, since √2 = 1.414 and √½ = ½√2 ≈ 0.707.
The rest of this section is devoted to the Taylor series. A low-order expansion works well if the point of interest is not too far from the nearest anchor.
For most purposes, the best option is to use the Taylor[1,3] approximation anchored at zero. This requires a couple more multiplications, but the result is accurate to better than 0.07%.
If you really want to minimize the number of multiplications, we can start by noting that the Taylor[1] extrapolation coming up from zero is better than the Taylor[0,1] extrapolation coming down from 30°, so rather than always using the closest anchor we use the 0° anchor all the way up to 20° and use the 30° anchor above that. Disadvantages include having to remember an obscure fact, namely the need to put the crossover at 20° rather than halfway between the two anchors. The error stays below about 2.1%, which is not great, but good enough for some applications. The error is shown in figure 7.
There are many other options, but all the options I know of involve either more work or less accuracy.
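Here is a sketch of the two low-budget options just described, assuming Taylor[1,3] means keeping the first- and third-order terms of the series, so that anchored at zero it reads sin x ≈ x − x³/6 (with x in radians). The function names and the small test loop are mine; this is not the author’s spreadsheet.

# Two low-budget sine approximations for small first-quadrant angles.

import math

def sin_taylor13(theta_deg):
    """Taylor[1,3] anchored at zero: sin x ≈ x − x³/6 (x in radians)."""
    x = math.radians(theta_deg)
    return x - x**3 / 6

def sin_minimal_multiplications(theta_deg):
    """Taylor[1] from 0° up to 20°, then Taylor[0,1] from the 30° anchor."""
    if theta_deg <= 20:
        return math.radians(theta_deg)            # sin x ≈ x
    b = math.radians(theta_deg - 30)
    return 0.5 + b * (math.sqrt(3) / 2)           # sin 30° + b·cos 30°

for theta in (10, 20, 25, 30):
    exact = math.sin(math.radians(theta))
    print(theta, round(sin_taylor13(theta), 4),
          round(sin_minimal_multiplications(theta), 4), round(exact, 4))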
The spreadsheet that produces these figures is given by reference 8.
Here are some additional facts that are needed in order to carry out the calculations discussed here.
\[
1^\circ = \frac{\pi}{180}\,\text{rad} \approx 0.01745\,\text{rad};
\qquad
\sin b \approx b, \quad \cos b \approx 1 - \frac{b^2}{2}
\quad\text{for small } b \text{ (in radians)}
\tag{5}
\]
Last but not least, we have the Pythagorean trig identity:
\[ \sin^2 a + \cos^2 a \;=\; 1 \tag{6} \]
and the sum-of-angles formula:
\[ \sin(a+b) \;=\; \sin a \,\cos b + \cos a \,\sin b \tag{7} \]
If you can maintain even a vague memory of the form of equation 7, you can easily reconstruct the exact details. Use the fact that it has to be symmetric under exchange of a and b (since addition is commutative on the LHS). Also it has to behave correctly when b=0 and when b=π/2.
If we assume b is small and use the small-angle approximations from equation 5, then equation 7 reduces to the second-order Taylor series approximation to sin(a+b).
\[ \sin(a+b) \;\approx\; \sin a + b\,\cos a - \frac{b^2}{2}\,\sin a \tag{8} \]
If we drop the second-order term, we are left with the first-order series, suitable for even smaller values of b:
\[ \sin(a+b) \;\approx\; \sin a + b\,\cos a \tag{9} \]
You can use the Taylor series to interpolate between the values given in equation 4. Since every angle in the first quadrant is at least somewhat near one of these values, you can find the sine of any angle, to a good approximation, as shown in figure 6.
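As a concrete check (my example, not one from the text), take 35°, which is 5° away from the 30° anchor. With b = 5° ≈ 0.08727 radians, equation 8 gives

\[ \sin 35^\circ \;\approx\; 0.5 + (0.08727)(0.8660) - \tfrac{(0.08727)^2}{2}(0.5) \;=\; 0.5 + 0.07558 - 0.00190 \;=\; 0.57368 \]

compared to the true value 0.57358, an error of about 0.02%. The first-order form, equation 9, gives 0.57558, which is still good to about 0.35%.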