These books I recently bought together are curiously interconnected, to the degree that their introductions and various asides seem to be having a conversation (I’ve been popping in and out of each one, finishing none of them). The introduction to Concrete Mathematics sets the stage rather well:
It was a dark and stormy decade when Concrete Mathematics was born. Long-held values were constantly being questioned during those turbulent years; college campuses were hotbeds of controversy. The college curriculum itself was challenged, and mathematics did not escape scrutiny. John Hammersley had just written a thought-provoking article “On the enfeeblement of mathematical skills by ‘Modern Mathematics’ and by similar soft intellectual trash in schools and universities”; other worried mathematicians even asked, “Can mathematics be saved?” One of the present authors had embarked on a series of books called The Art of Computer Programming, and in writing the first volume he had found that there were mathematical tools missing from his repertoire; the mathematics he needed for a thorough, well-grounded understanding of computer programs was quite different from what he’d learned as a mathematics major in college. So he introduced a new course, teaching what he wished somebody had taught him.
The course title “Concrete Mathematics” was originally intended as an antidote to “Abstract Mathematics”, since concrete classical results were rapidly being swept out of the modern mathematical curriculum by a new wave of abstract ideas popularly called the “New Math.” Abstract mathematics is a wonderful subject, and there’s nothing wrong with it: It’s beautiful, general, and useful. But its adherents had become deluded that the rest of mathematics was inferior and no longer worthy of attention. […]
But what exactly is Concrete Mathematics? It is a blend of CONtinuous and disCRETE mathematics. More concretely, it is the controlled manipulation of mathematical formulas, using a collection of techniques for solving problems. Once you, the reader, have learned the material in this book, all you will need is a cool head, a large sheet of paper, and fairly decent handwriting in order to evaluate horrendous-looking sums, to solve complex recurrence relations, and to discover subtle patterns in data. You will be so fluent in algebraic techniques that you will often find it easier to obtain exact results than to settle for approximate answers that are valid only in a limiting sense.
Some deal! By the way, I believe the book more or less delivers on this promise, at the price of a ferocious amount of work and application on the part of the reader. There is an extremely large selection of problems, with solutions to all of them given in an appendix (except for the most difficult ones, which were open questions in mathematics at the time of publication). I think anyone who worked through all of them, blood pouring from the forehead, would emerge with sheer manipulative muscle that sets them apart.
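To give a taste of the kind of thing the book trains you to do: its opening chapter analyzes the Tower of Hanoi recurrence T(0) = 0, T(n) = 2T(n−1) + 1, and guides you to the closed form T(n) = 2ⁿ − 1. A tiny sketch checking the two against each other (the code is mine, not the book’s):

```python
# Tower of Hanoi recurrence from Chapter 1 of Concrete Mathematics:
# T(0) = 0, T(n) = 2*T(n-1) + 1, with closed form T(n) = 2^n - 1.

def t_recursive(n):
    """Evaluate the recurrence directly."""
    return 0 if n == 0 else 2 * t_recursive(n - 1) + 1

def t_closed(n):
    """The closed form the book teaches you to derive by hand."""
    return 2**n - 1

# The recurrence and the closed form agree on small cases:
for n in range(10):
    assert t_recursive(n) == t_closed(n)

print(t_closed(64))  # 18446744073709551615
```

Of course, the whole point of the book is that you do this kind of derivation on paper, with a cool head, not by brute-force checking.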
The two works on probability are very much opposites in just such a hotbed of controversy: the correct mathematical formulation of the practical concepts of probability and confidence. Probability Theory is especially pugnacious, weighing in on this and numerous other matters. Its author, E. T. Jaynes, intended it as a grand synthesis of Bayesian, or inferential, probability, but sadly died before the book was finished. It was edited into a publishable form by Larry Bretthorst, according to whom many sections of the manuscript concluded with “MUCH MORE COMING.” Jaynes’ death left the work not only incomplete but also gave a certain harshness to the various off-topic asides, which a living author might have been persuaded to tone down. On the topic of mathematical courtesy:
Nowadays, if you introduce a variable x without repeating the incantation that it is in some set or ‘space’ X, you are accused of dealing with an undefined problem. If you differentiate a function f(x) without first having stated that it is differentiable, you are accused of lack of rigor. If you note that your function f(x) has some special property natural to the application, you are accused of lack of generality. In other words, every statement you make will receive the discourteous interpretation.
[Here Jaynes proposes a blanket “Proclamation” guaranteeing that his statements are to receive the courteous interpretation instead]
We could convert many 19th century mathematical works to 20th century standards by making a rubber stamp containing this Proclamation, with perhaps another sentence using the terms ‘sigma-algebra, Borel field, Radon-Nikodym derivative’, and stamping it on the first page.
Modern writers could shorten their works substantially, with improved readability and no decrease in content, by including such a Proclamation in the copyright message, and writing thereafter in 19th century style.
Other contrarian topics include “The Hausdorff sphere paradox and mathematical diseases”, “Counting infinite sets?”, “Bogus nondifferentiable functions” and “What is a legitimate mathematical function?” A less reverent editor would definitely have omitted these, but although they don’t really add anything to the subject matter of the book, they are a lot of fun and I don’t mind hearing Jaynes’ opinion on them. I want to quote just one more of these tangents, on the subject of probability in quantum physics:
Those who cling to a belief in the existence of ‘physical probabilities’ may react to the above arguments by pointing to quantum theory, in which physical probabilities appear to express the most fundamental laws of physics. Therefore let us explain why this is another case of circular reasoning. We need to understand that present quantum theory uses entirely different standards of logic than does the rest of science.
In biology or medicine, if we note that an effect E (for example, muscle contraction, phototropism, digestion of protein) does not occur unless a condition C (nerve impulse, light, pepsin) is present, it seems natural to infer that C is a necessary causative agent for E. Most of what is known in all fields of science has resulted from following up this kind of reasoning. But suppose that condition C does not always lead to effect E; what further inferences should a scientist draw? At this point, the reasoning formats of biology and quantum theory diverge sharply.
In the biological sciences, one takes it for granted that in addition to C there must be some other causative factor F, not yet identified. One searches for it, tracking down the assumed cause by a process of elimination of possibilities that is sometimes extremely tedious. But persistence pays off; over and over again, medically important and intellectually impressive success has been achieved, the conjectured unknown causative factor being finally identified as a definite chemical compound. […]
In quantum theory, one does not reason in this way. Consider, for example, the photo-electric effect (we shine light on a metal surface and find that electrons are ejected from it). The experimental fact is that the electrons do not appear unless light is present. So light must be a causative factor. But light does not always produce ejected electrons; even though the light from a unimode laser is present with absolutely steady amplitude, the electrons appear only at particular times that are not determined by any known parameters of the light. Why then do we not draw the obvious inference, that in addition to the light there must be a second causative factor, still unidentified, and the physicist’s job is to search for it?
In short, Probability Theory, in addition to being a strong and demanding exposition of Bayesian probability, is a font of unconventional, direct thinking. Principles of Statistics stands in contrast, being an absolutely straightforward textbook on the time-tested methods of frequentist probability. Despite this, its introduction happens to present an opinion on the philosophy of probability, dismissing efforts such as Jaynes’ work:
[The introduction first presents frequentist probability, then various approaches to inductive probability, all stemming from the “principle of indifference”, and finds problems in each one]
It has been reluctantly concluded by most statisticians that inductive probability cannot in general be measured and, therefore, cannot be used in the mathematical theory of statistics. This conclusion is not, perhaps, very surprising since there seems to be no reason why rational degrees of belief should be measurable any more than, say, degrees of beauty. Some paintings are very beautiful, some are quite beautiful and some are ugly; but it would be absurd to try to construct a numerical scale of beauty on which the Mona Lisa had a beauty-value of 0.96! Similarly some propositions are highly probable, some are quite probable and some are improbable; but it does not seem possible to construct a numerical scale of such (inductive) probabilities.
Here’s Probability Theory on the same subject:
For many years, there has been controversy over ‘frequentist’ versus ‘Bayesian’ methods of inference, in which the writer has been an outspoken partisan on the Bayesian side. […] In these old works there was a strong tendency, on both sides, to argue on the level of philosophy or ideology. We can now hold ourselves somewhat aloof from this, because, thanks to recent work, there is no longer any need to appeal to such arguments. We are now in possession of proven theorems and masses of worked-out numerical examples. As a result, the superiority of Bayesian methods is now a thoroughly demonstrated fact in a hundred different areas.
If books could fight… Of course, as far as I’m aware, statistical and probabilistic methods as taught at universities are, at least at the introductory level, still purely frequentist.
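The dispute is easiest to see in a concrete case. Take the textbook coin-flip example (mine, not drawn from either book): after observing three heads in three tosses, the frequentist maximum-likelihood estimate of the heads probability is the raw frequency, while a Bayesian starting from a uniform Beta(1, 1) prior reports the mean of the resulting Beta posterior instead:

```python
from fractions import Fraction

def frequentist_estimate(heads, n):
    # Maximum-likelihood point estimate: the observed frequency.
    return Fraction(heads, n)

def bayesian_posterior_mean(heads, n, a=1, b=1):
    # A conjugate Beta(a, b) prior updated on n Bernoulli trials gives a
    # Beta(a + heads, b + n - heads) posterior; return its mean.
    return Fraction(a + heads, a + b + n)

# Three heads in three tosses:
print(frequentist_estimate(3, 3))     # 1   -- certainty of heads!
print(bayesian_posterior_mean(3, 3))  # 4/5 -- hedged by the prior
```

The frequentist estimate of 1 (certain heads, forever) after three tosses is exactly the kind of result Jaynes liked to hold up; the frequentist reply is that the point estimate was never meant to carry that interpretation.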
The combination of these very practical, mathematically demanding, hard-nosed but nevertheless somehow philosophical, personal and opinionated books is very intriguing – I hope I’m able to put enough work into them to extract, for my own benefit, at least some of the immense effort that has gone into them.