Is PEP-8 a Code or More of a Guideline?
Eric S. Johansson
esj at harvee.org
Tue May 29 15:22:11 EDT 2007
Warren Stringer wrote:
> Hi Eric,
>
> You make a compelling argument for underscores. I sometimes help a visually
> impaired friend with setting up his computers.
>
> I'm wondering about the aural output of your second example:
>
> link.set_parse_action(emit_link_HTML)
>
> Does it sound like this:
Unfortunately, I do not have text-to-speech set up on this machine, as I'm saving
the storage for more MP3s. :-)
> link dot set under parse under action space between parens emit under link
> under HTML jump out
It would probably say "underscore" instead of "under", and "left paren" and
"right paren" (it's too much work to make it spell out the symbol).
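If you want to experiment, it's easy enough to fake the reading in a few lines
of Python. This is only a sketch of one plausible symbol mapping, not what any
particular screen reader actually says:

    SPOKEN = {'.': 'dot', '_': 'underscore',
              '(': 'left paren', ')': 'right paren'}

    def speak(line):
        # replace each symbol with its spoken name, keep words intact
        words, token = [], ''
        for ch in line:
            if ch in SPOKEN:
                if token:
                    words.append(token)
                    token = ''
                words.append(SPOKEN[ch])
            else:
                token += ch
        if token:
            words.append(token)
        return ' '.join(words)

    print(speak('link.set_parse_action(emit_link_HTML)'))
    # link dot set underscore parse underscore action left paren
    # emit underscore link underscore HTML right paren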
>
> Also, how does HTML read? Is it "H T M L" or "cap H cap T cap M cap L"?
Probably "HTML". Remember, case is not apparent in an aural interface.
> How many python programmers can reconfigure their speech-to-text and
> text-to-speech converter?
It's really difficult. I'm getting by right now with some minimal macros to
start classes and methods, as well as things like putting a ':' on the end of
the line. But it is really primitive. When I had VoiceCoder working, it was
pretty good, and the only problem was command surface area.
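To give you an idea of how primitive: the macros amount to little more than
spoken-phrase-to-template substitution. A made-up sketch (the command names and
templates here are mine, not from any shipping grammar):

    TEMPLATES = {
        'new class':  'class %s(object):\n    def __init__(self):\n        pass\n',
        'new method': '    def %s(self):\n        pass\n',
    }

    def expand(command, name):
        # a real grammar would be registered with the recognizer;
        # this just shows the expansion step after recognition
        return TEMPLATES[command] % name

    print(expand('new class', 'LinkParser'))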
> Isn't there a Python-based accessibility project?
You might be thinking of VoiceCoder and VR-mode for Emacs. Both projects need
help; VR-mode is probably the most useful to a beginning user, but combining the
two would make a very nice voice-driven Python IDE. Of course, if somebody
wanted to adapt it to another IDE that was nice and not too expensive, I don't
think people would say no to that either.
> Perhaps a few lines of script to add CamelBack support, using an amplitude
> increase for initial caps and maybe lingering on the initial phoneme for an
> extra 100 milliseconds. So then, the above example would read:
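Here is roughly what I think you're describing, as a sketch. The SSML-ish
emphasis and break tags are just for illustration; I don't know of an engine
that takes exactly this:

    import re

    # split CamelBack words: runs of caps, Capitalized words, lowercase runs
    CAMEL = re.compile(r'[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+')

    def mark_camelback(identifier):
        out = []
        for word in CAMEL.findall(identifier):
            if word[0].isupper():
                # louder, and linger roughly 100 ms before the initial cap
                out.append('<break time="100ms"/>'
                           '<prosody volume="+6dB">%s</prosody>' % word)
            else:
                out.append(word)
        return ' '.join(out)

    print(mark_camelback('emitLinkHTML'))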
I can tell you've never used speech recognition. :-) If you do that for a few
weeks, you'll find yourself reluctant to talk and considering Morse code sent by
your big toe as your primary computer interface. QLF OM?
Seriously, amplitude and timing information is gone by the time we get
recognition events. This is a good thing because it eliminates problems caused
by individual habits and physiological traits.
As I've said before, I think the ultimate programming-by-voice environment is
one in which the environment is aware of what symbols can be spoken and uses
that information to improve accuracy. This would also allow significant
shortcuts in what you say in order to get the right code. The problem is that it
needs to work when the code is broken. And I don't mean a little broken; I mean
the kind of broken it gets when you are ripping out the implementation of one
concept and putting in a new one.
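The harvesting half is not the hard part; even a dumb lexical scan keeps working
on code that won't parse. Something like this sketch (the recognizer-loading
side is the part that doesn't exist yet):

    import re

    IDENT = re.compile(r'[A-Za-z_][A-Za-z0-9_]*')

    def speakable_symbols(source):
        # regex scan instead of a parse, so broken code still yields words
        words = set()
        for name in IDENT.findall(source):
            words.update(part for part in name.split('_') if part)
        return words

    # half-edited, syntactically broken code still gives us a vocabulary
    broken = 'def emit_link_HTML(link:\n    retur link.set_parse_action('
    print(sorted(speakable_symbols(broken)))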
Yes, it's a hard problem.
---eric