
On 10/11/2019 20:50, Martin Euredjian via Python-ideas wrote:
> It does, it's an extension of the reality that, after so many decades, we are still typing words on a text editor. In other words, my comment isn't so much about the mechanics and editors that are available as much as the fact that the way we communicate and define the computational solution of problems (be it to other humans or the machine that will execute the instructions) is through typing text into some kind of a text editor.
You seem to be stuck on the idea that symbols (non-ASCII characters) are inherently more expressive than text, specifically that a single symbol is easier to comprehend and use than a composition of several symbols. This is a lovely theory. Unfortunately it's wrong.
We don't read character by character, it turns out. We read whole lexical units in one go. So '→', ':=' and 'assign' all take the same amount of effort to recognise. What we learn to recognise them as is another matter, and familiarity counts there.
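Python's own ':=' (the walrus operator from PEP 572) gives a concrete test case: the symbolic form and the spelled-out two-line form below carry the same meaning, and once each is familiar, neither is faster to read than the other. A minimal sketch (the `data` list is just illustrative):

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Spelled-out form: assign on one line, test on the next.
n = len(data)
if n > 5:
    print(f"long list: {n} items")

# Symbolic form: ':=' assigns and tests in a single expression.
if (m := len(data)) > 5:
    print(f"long list: {m} items")
```

Both print the same thing; the difference is purely which lexical unit your eye has been trained on.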
I'm not a cognitive psychologist, so I can't point you at the relevant papers, but I can assure you it's true. I've been through the experiments where words were flashed up on a screen for a fiftieth of a second (eye persistence time, basically), and we could all recognise them perfectly well no matter how long they were. (There probably are limits, but we didn't hit them.) A quirk of my brain is that, unlike my classmates, I couldn't do the same with numbers -- with a very few exceptions like powers of two, numbers are just collections of digits to me in a way that words *aren't* collections of letters.
Yes, I was a mathematician. Why do you ask? :-)