special editor support for indentation needed.
Eric S. Johansson
esj at harvee.org
Sun Nov 16 02:39:26 CET 2008
Aaron Brady wrote:
> You see examples here from time to time that don't follow the rigid
> C++ formatting. Some examples:
> def classmaker( ):
>     class X:
>     return X
> class X:
>     class Y:
> if something:
>     class X:
>         def X( ):
> Some of these are tricky not dirty; some are both, but you don't want
> to tie your hands prematurely, which is one of my favorite features of
> Python. Long story short, it can get hairy.
Somebody should be called a sick bastard. :-)
I have never seen these forms, and I can see how they are syntactically correct,
possibly useful, and mind-bogglingly confusing. <gack> Having said that,
though, my gut reaction would be to satisfy the usual case first and suggest
that these forms are a "sucks to be you, use your hands because we saved them
from damage elsewhere" case. Given enough time, I think I could come up with a
fairly regular grammar to express both the ordinary and the exceptional cases.
I'd probably use something somewhat more verbose for the sick and twisted
examples you've given me.
> Why aren't you favoring, 'dedent once/twice', 'dedent' repeated,
> 'close class, close function, close if, close for'? I don't imagine
> that saying 'close' three times in a row would be a strain.
> If the program is clever, it can learn that 'function' and 'method'
> are synonyms, and you can start to think in terms of both.
> You don't need to write 'start method, start class', except as a
> shortcut for spelling the statements out by hand; what type of block
> statement (suite) you are starting is determined by syntax.
I was hoping to avoid this, but here is a brain dump on speech user
interfaces. It's evolved over some 15 years of living with speech recognition
and having to take time off because my voice hurt too much to speak or my hands
hurt too much to type.
Hands are robust. It takes decades of use to make them go bad. Vocal systems
are fragile. It takes only one bad day to make them break for weeks. Once
damaged, neither one fully recovers.
The major problem with using speech recognition for programming is that
keyboards are great at fine-grained, small-detail work and lousy at
coarse-grained work. Speech recognition is lousy at fine-grained detail work
and really good at coarse-grained work such as creating large blocks of text.
This difference in usability is only visible once you have lived in both worlds.
Rule number one: never try to speak the keyboard.
Don't ever force a user to control capitalization, concatenation, or spelling
by speaking one letter at a time. Even simple things like Alpha (cap alpha) can
get rather tiring if used a lot. One of the cruelest things in software
development is mixed-case, fractured-spelling words such as mixInCntr, which
I'm not even going to try to pronounce or spell out.
A more positive way of expressing rule one: let the user get what they want by
speaking words in their native language.
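As a sketch of what rule one implies, the tool, not the voice, should supply the capitalization and concatenation. The function below is a hypothetical illustration (not from any particular speech tool): the user speaks ordinary words such as "mix in counter" and the editor builds the identifier in whatever style the project uses.

```python
def identifier_from_words(words, style="snake"):
    """Build a code identifier from naturally spoken words.

    The user dictates plain words; the tool supplies the
    capitalization and concatenation so nothing has to be
    spelled out one letter at a time.
    """
    parts = [w.lower() for w in words.split()]
    if style == "snake":
        return "_".join(parts)
    if style == "camel":
        # first word stays lowercase, the rest are capitalized
        return parts[0] + "".join(p.capitalize() for p in parts[1:])
    raise ValueError(f"unknown style: {style}")

print(identifier_from_words("mix in counter"))           # mix_in_counter
print(identifier_from_words("mix in counter", "camel"))  # mixInCounter
```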
Rule number two: never speak small stuff, or, why use two or more utterances
when a single one will do
One step up from speaking the keyboard is forcing the user to say the same
command multiple times to achieve a single effect. For example, if you want to
move to the beginning or the end of the line, you can say "move word left" as
many times as it takes to get where you want to be, or you can just say "move
to start of line". In the context of indent/outdent control, I don't really
want commands for moving to the right level of indentation for a class, method,
function, etc. I want a command such as "new class" which would put a class
definition at the right place, with the right indentation and all the right
components, so that I don't have to speak the object hierarchy or triple quotes
(in pairs). It's all done for me. But a macro like that won't work right
unless I can put the cursor at the right level of indentation.
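A "new class" macro of the kind described could be sketched like this. This is a hypothetical illustration, not code from any real speech-macro package: the macro expands a single utterance into a complete class skeleton, triple quotes included, at whatever indentation level the cursor has been placed.

```python
def new_class_snippet(name, indent_level, indent="    "):
    """Expand a spoken "new class <name>" command into a class
    skeleton at the given indentation level, with the docstring
    quotes already paired, so none of it is dictated piecemeal."""
    pad = indent * indent_level
    return (
        f"{pad}class {name}:\n"
        f'{pad}{indent}"""TODO: describe {name}."""\n'
        f"{pad}{indent}def __init__(self):\n"
        f"{pad}{indent * 2}pass\n"
    )

# one utterance, one fully indented block
print(new_class_snippet("Widget", 1))
```

The hard part, as the text says, is not the template but knowing which `indent_level` the cursor should be at; that is exactly what the editor would need to supply.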
Another reason for this rule is that you want to minimize load on the voice.
Remember, the voice is fragile. If I can say a small sentence and get a big
effect, or something that saves me a lot of work, then that's a win. Otherwise
I might as well burn some hand time: hit the backspace key twice, speak my
macro to do whatever I want, and then clean up the macro's wrong indentation.
Yes, that saves my throat and my hands, but not as much of either as a good
speech macro would.
Rule number three: be careful with your grammar
Speech user interfaces are broad shallow hierarchies. GUI interfaces are narrow
and deep. Speech user interfaces do not readily lend themselves to discovery in
the same way that GUI interfaces do.
Be aware of grammar traps such as similar starting points and homonym
arguments. Usually it's not too bad but, some words just won't be recognized
correctly, so there should be an alternative. For example, "Lennox" is how
NaturallySpeaking recognizes "Linux". I've been training it for months and it
just won't get it right.
Rule number four: think about disambiguation
This is related to being careful with your grammar. Depending on your context,
disambiguation can be difficult or easy. If you're operating in a large
environment, such as a whole file, disambiguation means having fairly wordy
commands. But if you can narrow the scope, disambiguation happens
automatically, because you have reduced choice by reducing the amount of text
you're operating on. Frequently, disambiguation through scope reduction also
aids focus and reduces cognitive load, because you've eliminated distractions
for the user.
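The scope-reduction idea can be made concrete with a small sketch. Everything here is hypothetical (the `resolve` function and its inputs are invented for illustration): the same short spoken word that is ambiguous over a whole file can resolve to exactly one feature once the user has narrowed the scope, so commands stay short without becoming wordy.

```python
def resolve(spoken, features, scope=None):
    """Find which named feature a spoken word refers to.

    features: list of (name, line_number) pairs found in the buffer.
    scope:    optional (first_line, last_line) the user has narrowed to.

    Over the whole file a word like "counter" may match several
    features; inside a narrowed scope it often matches exactly one,
    so the command can stay short (rule two) and still be
    unambiguous (rule four).
    """
    pool = features
    if scope is not None:
        lo, hi = scope
        pool = [(n, ln) for n, ln in features if lo <= ln <= hi]
    hits = [(n, ln) for n, ln in pool if spoken in n]
    if len(hits) == 1:
        return hits[0]
    return hits  # still ambiguous: the caller must ask the user

features = [("loop_counter", 4), ("retry_counter", 40), ("total", 12)]
print(resolve("counter", features))           # two hits: still ambiguous
print(resolve("counter", features, (1, 20)))  # one hit inside the scope
```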
> I retract my suggestion that the system prompt you for information.
> Academically speaking, context can partially eliminate redundancy.
> That is, 'close' can stand alone, regardless of what block you are
> closing. If you aren't maintaining a tyrant's grip on the tool, that
> is, allowing it to think for itself a little or anticipating, a
> miscommunication is inevitable, and it will need to corroborate the
> element of the context that has fallen out of sync. Some people
> bother to maintain a model of the speaker's knowledge state, and even
> maintain multiple (non-unique) branches between the time an ambiguity
> is recognized, and the time it's resolved.
Good insight. We are using something like that in the Voice Coder project to
deal with symbol creation. I'm looking for something a little different. My
end goal is to have an editing environment that lets me navigate by, and
operate on, implicit and explicit features. I'm afraid I've gone on a bit long
about this but, maybe we can start up a separate thread about editors using
features. The thirty-second description: a feature is any visible component of
the language that a user can specify by name. Predicates, array indices,
arguments, last block, next block, and beginning block are all examples of
implicit features. I should be able to select any one of those and
replace/edit etc. etc. This is where you can use disambiguation through scope
reduction to enhance the editing experience.
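To show that an implicit feature like "the predicate" is mechanically locatable, here is a sketch using Python's own `ast` module (assuming Python 3.9+ for `ast.unparse`; the function name is invented for illustration). Given a cursor line, it finds the innermost enclosing `if` and returns its test expression — the span a command like "edit predicate" would narrow the scope to.

```python
import ast

def predicate_of_if(source, target_line):
    """Locate the "predicate" implicit feature: the test expression
    of the innermost if statement whose span contains target_line.

    Returns the predicate as source text, or None if the cursor is
    not inside any if statement.
    """
    tree = ast.parse(source)
    best = None
    for node in ast.walk(tree):
        # ast.walk yields outer nodes before inner ones,
        # so the last match is the innermost enclosing if
        if isinstance(node, ast.If) and node.lineno <= target_line <= node.end_lineno:
            best = node
    return ast.unparse(best.test) if best else None

src = "if x > 0 and ready:\n    go()\n"
print(predicate_of_if(src, 2))  # x > 0 and ready
```

The same walk-and-match pattern would serve for other implicit features — arguments, array indices, next block — each just keys on a different node type.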
> Single-line blocks can follow their overture on the same line, such as
> if x: x()
Good example of where "edit block", followed by subsequent scope reduction to
just the block, would be a powerful tool. The same is true of "edit predicate".
>> Think about being asked that question every time you close a block.
> Some people do it. 'What are you doing?', 'Where are you going?',
> 'Are you leaving?', 'Where were you?', 'I thought you left,' etc.
to which I would reply, "no need to be jealous. She wants to spend some private
time with you too." But that really has nothing to do with editing by voice. :-)
>> I can
>> almost guarantee it would drive you mad
> Perhaps it does. Honest curiosity and vicious curiosity are only
> distinguishable after time.
True. You know, if you live anywhere near Boston, MA, I would like to spend a
few hours with you: have you train up a voice model and try editing by voice.
It would be interesting to watch you learn the environment. (If others are
interested in this offer, e-mail me privately.)