On Nov 10, 2019, at 08:23, Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
> Martin Euredjian via Python-ideas writes:
>> Another interesting example comes from some of my work with real-time embedded systems. There are plenty of cases where you are doing things that are very tightly related to, for example, signals coming into the processor through a pin; by this I mean real-time operations where every clock cycle counts. One of the most critical aspects of this for me is the documentation of what one is doing and thinking. And yet, because we insist on programming in ASCII, we are limited to text-based comments that don't always work well at all. In my work I would insert a comment directing the reader to a PDF file I placed in the same directory, often containing an annotated image of a waveform with notes.
> This has nothing to do with representation or input via text, though. Emacs has displayed formatted output along with or instead of code since the mid-90s or so, invariably based on a plain-text protocol and file format. (The point is not that Emacs rulez, although it does. ;-) It's that a program available to anyone on commodity hardware could do it. I imagine the capability goes back to the Xerox Alto in the mid-70s.)
>> It would be amazing if we could move away from text-only programming and integrate a rich environment where such documentation could exist and move with the code.
> We've been *able* to do so for decades (when was WYSIWYG introduced?), and even if you hate Emacs, there are enough people and enough variation among them that if this were really an important thing, it would be important in Emacs. It's not.
Well, we know that some of this really is important, because thousands of people are already doing it with Jupyter/IPython notebooks, and with Mathematica/Wolfram, and so on. Not only can I paste images in as comments between the cells, I can have a cell that displays a graph or image inline in the notebook, or (with SymPy) an equation in nice math notation. And all of that gets saved together with the code in the cells. I wouldn't use it for all my code (it's not the best way to write a web service or an AI script embedded in a game), but I believe many people who mostly do scientific computing do. But that's how we know that we don't need to change our languages to enable such tools. Anyone who would find it amazing is already so used to doing it every day that they've ceased to be amazed.
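For concreteness, here's the kind of thing I mean. A single notebook cell (a minimal sketch; it assumes sympy, numpy, and matplotlib are installed) can show a typeset equation and an inline plot, and both get saved in the .ipynb right alongside the code:

    # One Jupyter cell: the typeset integral and the plot both render
    # inline and are stored in the notebook file with the code.
    from IPython.display import display
    import sympy as sp
    import numpy as np
    import matplotlib.pyplot as plt

    x = sp.symbols('x')
    sp.init_printing()    # enable LaTeX/MathJax-rendered output
    display(sp.Integral(sp.exp(-x**2), (x, -sp.oo, sp.oo)))

    xs = np.linspace(-3, 3, 200)
    plt.plot(xs, np.exp(-xs**2))   # the integrand, drawn inline
    plt.show()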
> I would offer the Lisp family as a higher-level counterexample. In Lisp programming, I at least tend to avoid creating idioms in favor of named functions and macros, or flet and labels (their local versions).
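In Python terms, that instinct amounts to a nested def with a descriptive name instead of a clever inline idiom; a rough sketch (all names invented for illustration):

    def summarize(records):
        # Named local helper, roughly what flet/labels buy you in Lisp:
        # the name documents intent that an inline idiom would hide.
        def is_active(rec):
            return rec.get('status') == 'ok'
        return [rec['id'] for rec in records if is_active(rec)]

    print(summarize([{'id': 1, 'status': 'ok'},
                     {'id': 2, 'status': 'err'}]))   # prints [1]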
Haskell might be an even more relevant example. In Haskell, you can define custom operators as arbitrary strings of the symbol characters. You can also use any function as an infix operator by putting it in backticks, and pass any operator around as a function by putting it in parens, so deciding what to call it is purely a question of whether you want your readers to see map `insert` value or map |< value. (Well, almost; you can't control the precedence of backticked functions.)

Choosing often involves a conscious tradeoff between how many times you need to read the code in the near future vs. how likely you are to come back to it briefly after 6 months away from it, or between how many people are going to work on the code daily vs. how many will have to read it occasionally. The |< is more readable once you internalize it, at least if you're doing a whole lot of inserting and chaining it up with other operations. But once it's gone out of your head, it's one more thing to relearn before you can understand the code.

Python effectively makes that choice for me, erring on the side of future readability. I think that's very close to the right choice 80% of the time, and not having to make the choice makes coding easier and more fluid, even if it costs me a bit the other 20% of the time. This is similar to other tradeoffs that Haskell lets me make but Python makes for me (e.g., point-free or lambda; Python only has the lambda style plus explicit partial).

Of course Haskell uses strings of ASCII symbols for operators. If you used strings of Unicode symbols, that would probably change the balance. I could imagine cases where no string of ASCII symbols would make sense to me in 6 months but a Unicode symbol would. (But only if we could ignore the input problem, which we can't.) I don't think the set of APL symbols is particularly intuitive in that sense. (If I were using it every day, a special rho variation character meaning reshape would make sense, but if I hadn't seen it in 6 months it would be no more meaningful than any Haskell operator line noise.) But maybe there are symbols that would be more intuitively meaningful, or at least close enough to make the tradeoff worth considering.
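To make the Python side of that tradeoff concrete: you can't invent an operator like |< in Python at all, so the choice usually collapses to a named function, a lambda, or an explicit functools.partial. A minimal sketch (insert and add_name are made-up names for illustration):

    from functools import partial

    # The named-function style Python pushes you toward. There is no way
    # to spell a brand-new operator; at most you overload an existing
    # one (e.g. __or__) on your own class.
    def insert(mapping, key, value):
        new = dict(mapping)     # copy so the original is untouched
        new[key] = value
        return new

    add_name = partial(insert, key='name', value='guido')
    print(add_name({'lang': 'python'}))
    # -> {'lang': 'python', 'name': 'guido'}

The closest you can get to |< is overloading an existing operator such as | via __or__ on your own type, which is exactly why both the spelling and the precedence are fixed for you.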