Abe Dillon wrote:
> I don't disagree that infix notation is more readable because humans have trouble balancing brackets visually.
I don't think it's just about brackets; it's more about keeping related things together. An expression such as b**2 - 4*a*c can be written unambiguously without brackets in a variety of less-readable ways, e.g. the prefix form

    - ** b 2 * 4 * a c

That uses the same number of tokens, but I think most people would agree that it's a lot harder to read.
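To make the token-for-token comparison concrete, here's a quick Python sketch (my own illustration, not anything established in this thread; the tokeniser is just str.split) that reads that prefix form with a recursive evaluator and checks it against the infix original:

    # Minimal prefix-notation evaluator, assuming the expression arrives
    # as a flat sequence of tokens (binary operators and operands only).
    OPS = {'-': lambda x, y: x - y,
           '*': lambda x, y: x * y,
           '**': lambda x, y: x ** y}

    def eval_prefix(tokens, env):
        """Consume and evaluate one prefix expression from a token iterator."""
        tok = next(tokens)
        if tok in OPS:
            # Python evaluates arguments left to right, so the two
            # recursive calls consume the operands in order.
            return OPS[tok](eval_prefix(tokens, env), eval_prefix(tokens, env))
        try:
            return float(tok)
        except ValueError:
            return env[tok]

    a, b, c = 1.0, 5.0, 6.0
    env = {'a': a, 'b': b, 'c': c}
    tokens = iter('- ** b 2 * 4 * a c'.split())
    assert eval_prefix(tokens, env) == b**2 - 4*a*c

Same nine tokens either way, but the reader (human or machine) has to rebuild the tree that the infix form's visual grouping gives you for free.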
> The main concern of math notation seems to be limiting ink or chalk use at the expense of nearly all else (especially readability).
Citation needed; that's not obvious to me at all. Used judiciously, compactness aids readability, because it allows you to see more at once. I say "judiciously" because it's possible to take it too far -- regexes are a glaring example of that. Somewhere there's a sweet spot where you present enough information, in a well-organised way, in a small enough space that you can see it all at once, but not so much that it becomes overwhelming.
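As a concrete illustration of that sweet spot (my example, not Abe's): the same pattern written densely, then spaced out with Python's re.VERBOSE flag:

    import re

    # A dense pattern for a signed decimal with an optional exponent --
    # compact, but the structure is hard to see at a glance:
    dense = re.compile(r'[+-]?\d+(\.\d*)?([eE][+-]?\d+)?')

    # The same pattern with whitespace and comments allowed; more
    # characters, but the grouping is now visible:
    spaced = re.compile(r"""
        [+-]?              # optional sign
        \d+                # integer part
        (\.\d*)?           # optional fractional part
        ([eE][+-]?\d+)?    # optional exponent
    """, re.VERBOSE)

    assert dense.fullmatch('-1.5e10') and spaced.fullmatch('-1.5e10')

Neither extreme is ideal: the dense form hides the structure, and the verbose form gives up the see-it-all-at-once quality.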
> Why is exponentiation or log not infixed? Why so many different ways to represent division or differentiation?
Probably the answer is mostly "historical reasons", but I think there are good reasons for some of those things persisting. It's useful to have addition, multiplication and exponentiation all look different from each other. During my Lisp phase, what bothered me more than the parentheses (I didn't really mind those) was that everything looked so bland and uniform -- there were no visual landmarks for the eye to latch onto.

Log not infix -- it's pretty rare to use more than one base for logs in a given expression, so it makes sense for the base to be implied or relegated to a less intrusive position.

I can only think of a couple of ways mathematicians represent division (÷ is only used in primary school, in my experience), and one of them (negative powers) is kind of unavoidable once you generalise exponentiation beyond positive integers.

I'll grant that there are probably more ways to represent differentiation than we really need, but I think we would lose something if we were restricted to just one. Newton's notation is great when you're thinking of functions as first-class objects. But Leibniz notation makes the chain rule blindingly obvious, and when you're solving differential equations by separating variables it's really useful to be able to move differentials around independently. Other notations have their own advantages in their own niches.
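To spell out the Leibniz point (my own illustration): with z = f(y) and y = g(x), the chain rule in the two notations is

    % Prime (function) notation:
    (f \circ g)'(x) = f'(g(x)) \, g'(x)

    % Leibniz notation -- the differentials look as if they cancel,
    % which is exactly what separation of variables exploits:
    \frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx}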
> Something persisting because it works does not imply any sort of optimality.
True, but equally, something having been around for a long time doesn't automatically mean it's out of date and needs to be replaced. Things need to be judged on their merits.
> A good way to test this is to find a paper with heavy use of esoteric math notation and translate that notation to code. I think you'll find the code more accessible. I think you'll find that even though it takes up significantly more characters, it reads much quicker than a dense array of symbols.
In my experience, mathematics texts are easiest to read when the equations are interspersed with a good amount of explanatory prose, using words rather than abbreviations. But I wouldn't want the equations themselves to be written more verbosely. Nor would I want the prose to be written in a programming language. Perhaps ironically, the informality of the prose makes it easier to take in.
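For a toy version of that exercise, reusing the discriminant from earlier in the thread: the quadratic formula is one dense line on paper, and a direct Python translation (my sketch, assuming real coefficients) is longer without, to my eye, being any quicker to read once you know the notation:

    import math

    def quadratic_roots(a, b, c):
        """Real roots of a*x**2 + b*x + c = 0, assuming a != 0."""
        discriminant = b**2 - 4*a*c
        if discriminant < 0:
            raise ValueError("no real roots")
        root = math.sqrt(discriminant)
        return ((-b + root) / (2*a), (-b - root) / (2*a))

    assert quadratic_roots(1, 5, 6) == (-2.0, -3.0)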
> I spent a good few weeks trying to make sense of the rather short book "Universal Artificial Intelligence" by Marcus Hutter because he relies so heavily on symbolic notation. Now that I grasp it, I could explain it much more clearly, in much less time, to someone with much less background than I had going into the book.
But your explanation would be in English, not a formal language, right?

--
Greg