[Edu-sig] Visual Programming in Python?

Andre Roberge andre.roberge at gmail.com
Mon Apr 17 19:56:17 CEST 2006


On 4/17/06, Ian Bicking <ianb at colorstudy.com> wrote:
> kirby urner wrote:
[snip]
> >
> > Alan Kay strongly believes a young child shouldn't have to type to
> > experience programming, i.e. some kind of drag and drop or other
> > non-typing interface should facilitate programming.
>
> I think "typing" is the wrong thing to look at.  A young child shouldn't
> have to hunt and peck -- too many things to keep track of at once.  And
> they shouldn't have to form grammatically correct code, which doesn't
> mean anything to them when they are starting.
>
> But that doesn't make typing as a whole bad.  So a typical entry into a
> Logo environment involves a kid using the arrow keys to define a shape.
>   They get some output, they don't have to hunt and peck, they have a
> constrained environment.
>
> None of which is necessarily *better* than a graphically composable
> program.  But it's certainly a lot easier to create ;)  Where perhaps it
> becomes more interesting is using these things as a path to full
> grammar.  For instance, you hit some arrows, and you have a very very
> crude paint program.  Not that fun.  But if you turn that into language,
> then you have something more interesting.  E.g., they hit some keys,
> they have a "shape", but the shape is actually represented as
> programming text, like:
>
>    [F F F R F F R F R R L L F ...]
>
> That's the most simple representation, just the keys they hit.  You
> could even use glyphs instead of letters.  But it introduces the idea,
> and starts to open up areas for further abstraction.  In the geometric
> world of turtles, it gives you a basic and nontrivial building block
> (well, nontrivial depending on the age -- but you could also adjust this
> accordingly).  It also lets the student learn from imitation -- in this
> case, imitating the code created by the constrained interface.  Unlike
> much-reviled code generators seen elsewhere, the constrained interfaces
> would create human-editable code, and would not be a "complete"
> environment (like no editing built into them).
>
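
Just to check that I follow the token idea: with the standard
turtle module, replaying such a list could be as simple as the
sketch below (the token names, step size and angle are arbitrary
choices of mine, purely illustrative):

    import turtle

    def run_tokens(tokens, step=20, angle=90):
        """Replay a list of key-tokens as turtle moves."""
        t = turtle.Turtle()
        for tok in tokens:
            if tok == "F":        # forward one step
                t.forward(step)
            elif tok == "R":      # turn right
                t.right(angle)
            elif tok == "L":      # turn left
                t.left(angle)

    run_tokens("F F F R F F R F R R L L F".split())
    turtle.done()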

Nice ideas.  I'm going to adapt them in rur-ple (sorry about my
single-mindedness :-)   As it stands, within rur-ple, one can
make the robot move forward (after clicking in its world window
to give it focus) by pressing the up arrow key, turn left by
pressing the left arrow key, pick up a beeper by pressing "p",
and put one down by pressing "P".  This was mostly done for
early testing of the program.  Now I can link this to the
editing window: when one of those keys is pressed, the program
would both update the graphical world and "type" the
corresponding command [move(), turn_left(), pick_beeper(),
put_beeper()] into the editor.
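
In code, that linkage could look something like this sketch
(the names are hypothetical: world, editor and their methods
stand in for whatever the real objects turn out to be):

    KEY_TO_COMMAND = {
        "UP":   ("move()",        lambda world: world.move()),
        "LEFT": ("turn_left()",   lambda world: world.turn_left()),
        "p":    ("pick_beeper()", lambda world: world.pick_beeper()),
        "P":    ("put_beeper()",  lambda world: world.put_beeper()),
    }

    def on_key(key, world, editor):
        """Update the robot world and echo the command into the editor."""
        entry = KEY_TO_COMMAND.get(key)
        if entry is None:
            return                # ignore keys without a command
        text, action = entry
        action(world)             # update the graphical world
        editor.append_line(text)  # "type" the command for the student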

In terms of user interaction, the only caveat I see is that
I should probably find a way to indicate which sub-window
(editor or robot world) has the focus...  I suppose I could
change the background color...
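
For what it's worth, a small wxPython sketch of that
background-colour idea (untested, and the colour names are
arbitrary; "panel" is any wx.Window):

    import wx

    def mark_focus(panel, active="white", inactive="light grey"):
        """Change a widget's background colour as it gains/loses focus."""
        def on_set(event):
            panel.SetBackgroundColour(active)
            panel.Refresh()
            event.Skip()
        def on_kill(event):
            panel.SetBackgroundColour(inactive)
            panel.Refresh()
            event.Skip()
        panel.Bind(wx.EVT_SET_FOCUS, on_set)
        panel.Bind(wx.EVT_KILL_FOCUS, on_kill)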

Do any of you think the above ideas are silly?...

As for the next step (a flowchart for the decision tree), I think
it would have to be handled very differently.

André

