
Hi everyone, I got the following message on my blog, aroberge.blogspot.com (more below).

-------- Forwarded message ----------
From: George Wright noreply-comment@blogger.com

I installed Crunchy Frog 0.3 and ran the included tutorial. Really liked the DocTest approach. *It forced me to write my own programs that would produce the desired results.* Up to now I had contented myself with simply reading and understanding other people's programs. I need more python tutorials like this. Where can I find them?

=====

Given that I had included docstrings as an example, and given comments by others on this list to the effect that some wanted to write such docstring-based tutorials, does anyone have any suggestions?

Andre

I've fallen behind on Crunchy Frog. Having been through the first version's tutorial, I was left wondering what the back-end API looks like, i.e. suppose I'm a teacher wanting to whip out some Crunchy Frog oeuvre d'evers, what would that look like? Do I need to know any HTML?

Kirby

On 5/14/06, Andre Roberge <andre.roberge@gmail.com> wrote:
Hi everyone,
I got the following message on my blog, aroberge.blogspot.com (more below).
-------- Forwarded message ---------- From: George Wright noreply-comment@blogger.com
I installed Crunchy Frog 0.3 and ran the included tutorial. Really liked the DocTest approach. It forced me to write my own programs that would produce the desired results. Up to now I had contented myself with simply reading and understanding other people's programs. I need more python tutorials like this. Where can I find them? ===== Given that I had included docstrings as an example, and given comments by others on this list to the effect that some wanted to write such docstring-based tutorials, does anyone have any suggestions?
Andre

On 5/14/06, kirby urner <kirby.urner@gmail.com> wrote:
I've fallen behind on Crunchy Frog.
Having been through the first version's tutorial, I was left wondering what the back-end API looks like, i.e. suppose I'm a teacher wanting to whip out some Crunchy Frog oeuvre d'evers, what would that look like? Do I need to know any HTML?
Yes, one needs to know html, or at the very least be able to make some small changes to an existing html file. My assumption is that most tutorials are available as html files.

For those that haven't read the latest, hopefully more explicit tutorial, here's (in a nutshell) what Crunchy Frog requires and allows.

1. Requirement: tutorial written in (x)html.

2. Possibilities:

a) Given some sample Python code (at the interpreter) in the html file: [Note: I know that I should have written &gt;&gt;&gt; rather than >>> in the code below]

<pre>
>>> print "Hello world!"
Hello world!
>>> a = 6*7
>>> print a
42
</pre>

one can embed a Python interpreter prompt by changing the first line above to:

<pre vlam="interpreter">

"vlam" stands for "Very Little Added Markup." When the html file is served to your browser by Crunchy Frog, an interpreter (somewhat similar to Ian Bicking's HTConsole) will appear just below the </pre> tag, allowing the user to try to reproduce the above example, or simply try other Python code.

b) Given some sample code (not an interpreter example) in an html file:

<pre>
def times2(n):
    return n*2
</pre>

one can embed an "editor" (html textarea) with an "Evaluate" button next to it by changing the first line to:

<pre vlam="editor">

This "editor", which will appear just below the sample code again, will allow the user to enter some code in it (perhaps the example written above) and try running it as a Python script by pressing the "Evaluate" button. The output will appear below. The code in the editor can then be changed, and run again, as often as desired.

c) Given some sample "doctest" result in an html file:

<pre>
"""
>>> print times2(4)
8
"""
</pre>

one can embed an "editor" below with an "Evaluate" button which, when pressed, will run the above code and test to see if the doctest string can be reproduced. This is done by changing the first line to:

<pre vlam="doctest">

There's a bit more to it than what I wrote above, but it should give the flavor.

André
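The "doctest" case amounts, behind the scenes, to something like the following sketch using the standard doctest module (an illustration only, not Crunchy Frog's actual code; the variable names are made up):

import doctest

# the doctest string taken from the <pre vlam="doctest"> block of the page
doctest_string = '''
>>> print times2(4)
8
'''

# whatever the learner typed into the editor
user_code = '''
def times2(n):
    return n*2
'''

namespace = {}
exec user_code in namespace   # run the learner's code
test = doctest.DocTestParser().get_doctest(doctest_string, namespace, "lesson", None, 0)
failures, tried = doctest.DocTestRunner().run(test)
if failures == 0:
    print "Expected output reproduced -- show the success message"
else:
    print "Not reproduced yet -- let the learner edit the code and press Evaluate again"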

There's a bit more to it than what I wrote above, but it should give the flavor.
André
Thanks. So I could maybe have a property associated with an interpreter box, such that returned output would be recognized as "correct" (e.g. 42) or not. Depending on the output, the next page might be different.

Kirby

On 5/14/06, kirby urner <kirby.urner@gmail.com> wrote:
There's a bit more to it than what I wrote above, but it should give the flavor.
André
Thanks. So I could maybe have a property associated with an interpreter box, such that returned output would be recognized as "correct" (e.g. 42) or not. Depending on the output, the next page might be different.
At present, only standard "doctests" are run, with an "expected" output. When run successfully, a "custom" message appears. It would be fairly easy to change this so that a link to a new page appears upon successful completion. Or, it might be modified so that it links directly to the next page... Note that the limitation to doctests should not be a big limitation in practice. Usually, students will be asked to write a function that produces a certain result when called. So, in the case you mention, the doctest might be:

"""
>>> print student_function()
42
"""

Until the student writes a function (named student_function!) that returns 42, no link to the next page would be present on the page.
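For instance, a student solution that satisfies that doctest could be as simple as (reusing the 6*7 example from earlier):

def student_function():
    # returning 42 (here computed as 6*7) is all the doctest checks for
    return 6*7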
I see the interpreter prompt as an invitation to explore, rather than a testing environment. If you have some "mock tutorial" (i.e. a series of exercises) that you can give me (perhaps off-list), I could see if Crunchy Frog could (as is) produce the type of tutorial that you'd want. If not, it could be the next feature I implement ;-)

André

Navigating Between Lessons

I have been reading the discussion and trying the prototypes for Python content delivery with great excitement. On seeing how well you were doing on the content delivery, my imagination took off and I wrote down a whole bunch of brainstorming ideas below on an important independent aspect of an interactive learning environment. Just tonight the "Only Python" thread touched on the area I have been thinking about with the idea of choosing what comes next after evaluating code in the Crunchy Frog.

I am interested in the learning model and engine for choosing the next lesson to present. I imagine a nonlinear web of potential next steps and a flexible engine to determine the best next step. I would like to consider carrying this on with like-minded folks in parallel with people working on the delivery environment choices and lesson/example/test content. I see the lesson-choosing engine as being essentially independent of the type of lessons being presented, so this work should fit into the larger Shuttleworth vision for computer-based learning in general.

I have written a rather long document here, with an overview, some more nitty-gritty aspects, and ideas for moving forward. I refer to a web page at the end with more specific implementation ideas.

One model is just having a lot of tutorials, and letting learners choose which one to do and which one to do after that. When we (hopefully) get an enormous number of independent contributions from our user/learner community dynamically added, completely manual choosing of a next tutorial will be onerous. At a minimum we would like an engine to limit the obvious choices of where to go next, while still letting the learner choose. While some learners may be self-aware enough to know when they know one thing well enough and are ready to look for something else, others would like to avoid worrying about that and concentrate on the material, and let an intelligent engine determine when and where they should go next to advance toward their goal most efficiently.

The goals tie into a separate issue of motivation and reward addressed in recent threads: figuring out what *is* the goal and how to make it exciting. Consider a learner starting from a place that immediately describes a bunch of different application areas that might be interesting. Let learners choose goals by saying "I want to be able to do something like that myself!" -- whether it be as simple as Kirby's neat example of a Mad Lib generator, drawing shapes with a Turtle, or as complicated as a dynamic web site connected to a database, or elaborate gaming. Then let the skill set needed for the selected type of application determine the inventory of skills that the engine guides the learner through.

Alternately, goals could be specified by some formal body like the Virginia Public Schools. The idea of letting learners choose an interesting application to head for could still apply. Applications would be chosen that include the required competencies, with some extra additions depending on the target application area chosen by the learner. The complete target skill set chosen would then guide the path the engine provides through the lessons. Letting the learner *choose* to take on the elaborated goal should help keep the learner on task and motivated. To keep the rewards from being too remote, a sequence of simpler sample application area targets could be chosen, incorporating more and more of the final desired skill sets.
Again, these chosen goals might add a bit to what the learner needs to absorb and affect the path through the lessons, but the extra motivation should be worth it.

Anyway, to allow lessons to fit into a larger, at least partly automated environment, we would need some agreed-upon interface for messages passed back and forth between lessons and the lesson-choosing engine, or other components. As I started thinking about this, I realized that there could be other useful modules integrating multiple lessons, for instance summarizing and reviewing exposition. I thought of Zelle's text, where the main chapters have a lot of exposition with the actual syntax rules interspersed. The book also has an appendix that just has the new syntax from each chapter, in the order of the chapters. If the learner forgets some syntax and wants to check back, it would be nice to be able to have a window pop up with all the syntax introduced so far, in the order that the learner encountered it, dynamically generated from the particular learner's lesson history. Or, for a more general review, see summary statements from exposition in their lesson history. Or maybe see a summary of things introduced so far, grouped by some topic tags, ... There would need to be a further consistent interface to the lessons to do this. (I would assume a lesson could omit implementation of one of these extra features, and return None to a message with a request, and have the system behave gracefully.) I'm sure I have not thought about everything in the integration of lessons that could be useful and would need an agreed interface to be able to implement it.

The most basic data structures and interfaces would revolve around finely divided skill sets: those that are prerequisites and those that should be learned by the end of a lesson. I would hope we could get all lessons to interact on that level. Low-level skills are easiest to categorize, set dependencies for, and automatically test. It will be more of a challenge for higher-level learning outcomes, but the complication is likely to be in the lessons, while the basic lesson-choosing engine would be similar. The idea of skill sets is of course very general. I would plan to set it up so the model interfaces with the certification system Jeff is working on for the State of Virginia.

You could have a very basic engine choosing lessons with just this information. You would have to depend on the learners to determine when they understood things well enough to go on to the next lesson. The next elaboration would be to send testing results in some form from the end of a lesson session to the central engine, indicating if the learner did get the desired competencies, or which subset of them, or that more drill is needed, or that this was way too hard for the learner. Possibilities for the engine would be to:

- go on to the next lesson in the author's prepared sequence
- if one learning outcome was incompletely attained, go to a drill lesson on that learning outcome
- jump back to an alternate lesson that goes slower, with more examples and smaller steps
- if the current module only provides exposition/exposure, go to a lesson that includes testing
- ....

Anyhow, agreeing on at least optional interfaces from lessons to a larger system would be useful. I am interested in joining with others to think about that and report back.

- - - - - - - - - - - - - -

Nitty Gritty

It will of course be important to label skills that are prerequisites and skills demonstrated as an outcome of an exercise.
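As a very rough sketch of the kind of data and decision rule such an engine might start from (the lesson names, skill labels, and function below are invented purely for illustration):

# Each lesson declares the skills it assumes and the skills it teaches.
lessons = {
    "ifElseIntro": {"prereqs": ["comparisonNumeric", "assignment"],
                    "outcomes": ["ifElseRead"]},
    "ifElseDrill": {"prereqs": ["ifElseRead"],
                    "outcomes": ["ifElseUse"]},
    "whileIntro":  {"prereqs": ["ifElseUse"],
                    "outcomes": ["whileLoop"]},
}

def next_lessons(known_skills, completed):
    # Offer lessons whose prerequisites are all met and which still teach
    # the learner something new.
    choices = []
    for name, info in lessons.items():
        if name in completed:
            continue
        missing = [s for s in info["prereqs"] if s not in known_skills]
        new = [s for s in info["outcomes"] if s not in known_skills]
        if not missing and new:
            choices.append(name)
    return choices

# A learner who knows comparisons and assignment gets pointed at the if/else intro.
print next_lessons(["comparisonNumeric", "assignment"], [])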
It will be harder to deal with the fuzzier ideas of how many creative steps are needed without explicit experience for a specific challenge vs. measuring how many steps together a learner can confidently handle while feeling challenged. Hopefully as we make the engine more sophisticated, we can measure more individual pedagogical parameters for learners and incorporate them in the engine decisions.

Ideally we would not only dynamically measure the needs of a learner as s/he progresses, but also measure and modify the overall response of the system. We first keep track of session histories locally and, with the user's permission, upload them to a central site to hone the engine choosing the best place to go next. For instance: we marked module N as closely following module M, but almost nobody got N on the first try after M -- now mark it as being further off, so when the engine looks for the best module to follow M for a learner challenged appropriately by a certain sized jump, it finds module M1 instead ... or FLAG: we really need to write an intermediate module between M and N! We could measure not only the distance between lessons but the effectiveness of a lesson. We could either flag a lesson as not working, or dynamically offer up an alternate lesson that leads students to the same goals faster than another.

Learning styles are very important and diverse, as I have found with my children and my learners. These should be the basis of user options guiding the choice of modules. For example: introduce examples first vs. rules first; supply little bites vs. a major overview before looking for feedback; written vs. verbal or video presentation; module requires cognitive (grade) level N ... A more sophisticated later approach would be to explicitly test the learner's learning style and then set some of these parameters automatically.

There are also many other user-chosen options. For example: I like modules written by X, modules approved by Y, modules approved by a learned professor, modules using analogies to Java or C or music or football.... Have an easy interface to add your own lesson and label it in the general taxonomy so it works with the choice engine. It would be neat to say to learners: if you struggled through this, and finally got it, how would you redo it to make it easier for you? OK, do it for everyone who comes after you! Upload it and label the differences in your presentation from the previous one (i.e. smaller step -- comes before the last one, or more examples provided, or just 'written by X'). A learner may find s/he can relate particularly well to what X writes.

On categories: the learned professors can specify a bunch of categories and tags, but the del.icio.us model shows the power of organic growth: what is a useful tag in practice will stick. If we do not allow this initially, at least we should design a system where this would be a modular generalization to add. If we are looking for user contributions and large organic growth, I think flexible preference/descriptor tags could be very useful.

Allow a record of learning via secure external certification, and also locally without any special security that would enforce avoidance of funny business. Along the way, the learner should be able to see a display of what skills s/he has mastered and which remain for the current goal (either based on informal feedback and testing or by official certification).
Way out in front: for video lessons, which fit some people's learning style, there are very effective commercial systems to automatically store in real time, and later compact, a movie of what appears on the computer screen + synchronous voice. Modern commercial systems are not large resource hogs. If we are to allow anyone to add video lessons, a free alternative would be good to develop!

--------------------

Moving Forward

I would like to explore a more general engine since people start in different places, head to different personal goals, and have different speeds and styles of learning and different computer capacities. Also we have discussed an environment that makes user contributions easy. We want to dynamically integrate these contributions. The contributions may not fit into preplanned niches. All this suggests a nonlinear web of potential next steps and a flexible engine, based on inputs including those listed above, to determine the best next step.

If people agree that a "choose the next module" engine like this is useful, I would like to develop and reach agreement on the external interface, tied into data structures for lesson dependencies, user preferences, and outcomes from modules. Hopefully I will find others to develop it with me, moving to a more specialized list for implementation details, and report progress back to the main list.

Some concepts for this engine might come under well-developed interactive learning theory that I have not seen. At least in the early brainstorming stages I am more of an "imaginer from a clean slate" than a "searcher for what is out there to avoid reinventing the wheel". References to the existing "wheels" that you have seen are appreciated. I think there should still be a lot of room for new ideas around open source and a dynamic response to a network community of learners and contributors, and having flexible relationships that do not depend on a static, predetermined, limited vision.

I have thought about some more detailed design ideas for classes that are likely to be involved. See http://www.edupython.org/tutorialEngine/brainstormingEngineObjects.html

The first concrete project I would be interested in interfacing with is the Crunchy Frog.

--
Andrew N. Harrington
Computer Science Department
Undergraduate Program Director
Loyola University Chicago
http://www.cs.luc.edu/~anh
512B Lewis Towers (office)    Office Phone: 312-915-7982
Snail mail to Lewis Towers 416    Dept. Fax: 312-915-7998
820 North Michigan Avenue    aharrin@luc.edu
Chicago, Illinois 60611

On 5/14/06, Andrew Harrington <aharrin@luc.edu> wrote:
Navigating Between Lessons
I have been reading the discussion and trying the prototypes for Python content delivery with great excitement. On seeing how well you were doing on the content delivery, my imagination took off and I wrote down a whole bunch of brainstorming ideas below on an important independent aspect of an interactive learning environment. Just tonight the "Only Python" thread touched on the area I have been thinking about with the idea of choosing what comes next after evaluating code in the Crunchy Frog.
[snip - snip - snip lots of interesting material cut out.]

There were a lot of great ideas expressed in this brainstorming tsunami. I will latch onto a single theme. For this to work, we need tutorial "snippets" ... lots of them.

I don't think we need to worry too much at this point about complex links between them, nor about the "syntax" required for those links. I believe this is going to be the easy part ... after enough tutorial snippets are written. If it is web-based, the whole machinery pretty much already exists. One way might be to embed keywords, including "difficulty ranking", inside tutorials and use a search engine to decide where to go next. Or, rather than using a search engine, a "mind map" type of visual index can be created, with some automatic updating whenever a new tutorial snippet is added, or a keyword is added to that tutorial. A variation on the theme is that keywords inside a given tutorial could be given some relative weighting factor.

Regardless of the chosen method ... lots of tutorials will be needed. Until then, I think that trying to come up with the ideal method to link them is probably pointless; a case of premature optimization....

André
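Just to make the keyword-plus-difficulty idea concrete, a bare-bones sketch (the titles, keywords, and rankings below are invented for the sake of the example):

# Hypothetical index of tutorial snippets, each tagged with keywords and a difficulty ranking.
snippets = [
    {"title": "if/else basics", "keywords": ["if", "else"], "difficulty": 1},
    {"title": "if/elif chains", "keywords": ["if", "elif", "else"], "difficulty": 2},
    {"title": "while loops", "keywords": ["while"], "difficulty": 2},
]

def candidates(topic, current_level):
    # Snippets about the topic that are at most one step harder than what
    # the learner has already handled, easiest first.
    found = [s for s in snippets
             if topic in s["keywords"] and s["difficulty"] <= current_level + 1]
    found.sort(key=lambda s: s["difficulty"])
    return found

for s in candidates("if", 1):
    print s["title"]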

Andre Roberge wrote:
On 5/14/06, Andrew Harrington <aharrin@luc.edu> wrote:
Navigating Between Lessons
I have been reading the discussion and trying the prototypes for Python content delivery with great excitement. On seeing how well you were doing on the content delivery, my imagination took off and I wrote down a whole bunch of brainstorming ideas below on an important independent aspect of an interactive learning environment. Just tonight the "Only Python" thread touched on the area I have been thinking about with the idea of choosing what comes next after evaluating code in the Crunchy Frog.
[snip - snip - snip lots of interesting material cut out.]
There were a lot of great ideas expressed in this brainstorming tsunami.
Thanks!
I will latch onto a single theme.
For this to work, we need tutorial "snippets" ... lots of them.
For sure. Snippets is a good term, too.
I don't think we need to worry too much at this point about complex links between them, nor about the "syntax" required for those links. I believe this is going to be the easy part ... after enough tutorial snippets are written. If it is web-based, the whole machinery pretty much already exists. One way might be to embed keywords, including "difficulty ranking", inside tutorials and use a search engine to decide where to go next. Or, rather than using a search engine, a "mind map" type of visual index can be created, with some automatic updating whenever a new tutorial snippet is added, or a keyword is added to that tutorial. A variation on the theme is that keywords inside a given tutorial could be given some relative weighting factor.
All of those methods could be used inside a particular lesson generator like Crunchy Frog. I was certainly thinking of embedding stuff in the html for Crunchy Frog. Having everything there is part of the elegant simplicity of generating data for Crunchy Frog.
Regardless of the chosen method ... lots of tutorials will be needed. Until then, I think that trying to come up with the ideal method to link them is probably pointless; a case of premature optimization....
André
Certainly much of the elaboration of an engine is mostly useful after there are lots of lessons, and certainly the stuff about using learning histories to tweak the engine is WAY in the future. A major point I did not consider, which fits into your caution about premature optimization, is that Crunchy Frog is the only delivery mechanism at the moment, so all data comes through it, and we can concentrate on Crunchy Frog understanding data embedded in a Crunchy Frog web page lesson. I see data embedded in meta data, your custom vlam attributes, and classes identifying types of content.

The meta construction in the heading is a fine basic descriptive format for use by Crunchy Frog. It is important that the format is clear: agreeing on what name attributes are used and the meaning of the content attribute that goes with a name. We could make skills binary, as in

<META name="prereqs" content="ifElseRead, comparisonNumeric, assignment">
<META name="outcomes" content="ifElseUse">

or, as I prefer, with a numeric rating, maybe with the number left out meaning 100 (mastery) and 0 meaning exposure. The following might be in a testing module on understanding the flow of control in an if-else construction, assuming an earlier expository introduction:

<META name="prereqs" content="ifElseRead:0, comparisonNumeric, assignment">
<META name="outcomes" content="ifElseRead">

There is much meta data about a lesson that I think is useful, even when our total number of lessons is small. A lot of the data is most obviously considered while creating a lesson, when I find it easiest to add classifications, and to edit them from a copy of a similar template lesson.

If we are piecing together snippets of tutorial, I think it is important to be conscious of what the prerequisites are and what is being taught. I would be happy to give a first pass on a consistent pattern for naming basic and composite skills for introductory programming in Python. I still like the idea of short names for composite skills, so a persistent structure is needed to store components of compound skills. One simple approach would be to use a text file with Python dictionary syntax and list values:

"loops": ["forLoop", "whileLoop"]
"booleanExpression": ["booleanExpressionAtomic", "andOp", "orOp", "notOp"]

Other conventions could use keywords, author, and version. For example:

<META name="keywords" content="drill, javaAnalogy">
<META name="author" content="Andrew Harrington">
<META name="version" content="0.3">

I need more research on what is relevant and practical, but I imagine various measures of the speed and creative ability needed of the learner, maybe difficulty (pretty general) or more specifically speed, repetitiveness, number of new ideas together, number of solution steps the learner must put together independently, .... Even if names are agreed upon here, the initial measures are subjective, and probably only make sense being added when comparing a number of lessons. This is an area where I look to have an eventual system make tweaks dynamically based on statistics from learner histories.

I imagine one lesson having multiple screens, and maybe multiple paths if help is given for wrong answers. I treat the final page of a lesson covering a defined topic as being different from the others. I imagine a "Next page" or other buttons in intermediate pages, as I mentioned in my recent Crunchy Frog wish list, and a "Next lesson" button at the end.
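To make the META skill lists above concrete, a small sketch (the function names are invented) of how a tool like Crunchy Frog might read a content attribute with optional ratings and expand composite skill names:

def parse_skills(content):
    # "ifElseRead:0, comparisonNumeric, assignment" -> {skill: rating},
    # with a missing rating meaning 100 (mastery), per the convention above.
    skills = {}
    for item in content.split(","):
        item = item.strip()
        if ":" in item:
            name, rating = item.split(":")
            skills[name.strip()] = int(rating)
        else:
            skills[item] = 100
    return skills

# Composite skills kept in plain Python dictionary syntax, as suggested above.
composite = {
    "loops": ["forLoop", "whileLoop"],
    "booleanExpression": ["booleanExpressionAtomic", "andOp", "orOp", "notOp"],
}

def expand(skill_names):
    # Replace each composite name by its component skills; basic skills pass through.
    expanded = []
    for name in skill_names:
        expanded.extend(composite.get(name, [name]))
    return expanded

print parse_skills("ifElseRead:0, comparisonNumeric, assignment")
print expand(["loops", "assignment"])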
Data associated with the "Next lesson" button might include several lessons of the author's that could logically follow independently. Maybe the first one would be chosen if there is no further basis for choosing added. I would like to get to a point where the objectives of the learner are part of the state, and that affects the choice of the next lesson.

I have looked through many narrative tutorial introductions, and I still like the idea of being able to extract a reference on what has been introduced so far. I like the idea of marking new syntax and summaries in the expository text, maybe with a <div class="syntax"> and <div class="summary">, making them easy to extract with ElementTree, and consistent in their display. It would be nice for these summaries to pop up in a separate window or tab if requested. I do not know if that fits in with the Python localhost interface. If there had been any of these elements in lessons so far, I would put Syntax and Summary buttons somewhere, at the bottom of the lesson page, or on a separate reference web page, or in a separate frame.

There is an embedded style in Crunchy Frog pages. Styles for syntax and summary could be added. Again, some agreement on a starting scheme is useful.

Other people's suggestions/agreements much appreciated.

--
Andrew N. Harrington
Computer Science Department
Undergraduate Program Director
Loyola University Chicago
http://www.cs.luc.edu/~anh
512B Lewis Towers (office)    Office Phone: 312-915-7982
Snail mail to Lewis Towers 416    Dept. Fax: 312-915-7998
820 North Michigan Avenue    aharrin@luc.edu
Chicago, Illinois 60611
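Picking up the ElementTree idea above: a minimal sketch of pulling the syntax and summary divs out of a lesson page (the file name is a placeholder, and this assumes the page parses as well-formed XHTML):

# Collect every <div class="syntax"> or <div class="summary"> from a lesson page
# so they can be shown on a Syntax or Summary reference page.
from xml.etree import ElementTree   # or the standalone elementtree package on older Pythons

def extract_divs(filename, wanted=("syntax", "summary")):
    tree = ElementTree.parse(filename)
    found = []
    for elem in tree.getiterator():
        # endswith() sidesteps any XHTML namespace prefix on the tag name
        if elem.tag.endswith("div") and elem.get("class") in wanted:
            found.append(elem)
    return found

# for div in extract_divs("lesson1.html"):
#     ...render the div on the reference page...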

On 5/24/06, Andrew Harrington <aharrin@luc.edu> wrote: [snip]

As usual, Andrew has expressed far too many different (and excellent!) ideas for me to be able to address them all. I'll focus on a few of them. ...
I see data embedded in meta data, your custom vlam attributes, and classes identifying types of content:
The meta construction in the heading is a fine basic descriptive format for use by Crunchy Frog. It is important that the format is clear: agreeing on what name attributes are used and the meaning of the content attribute that goes with a name.
We could make skills binary as in
<META name="prereqs" content="ifElseRead, comparisonNumeric, assignment"> <META name="outcomes" content="ifElseUse">
or as I prefer, with a numeric rating, maybe with the number left out meaning 100 (mastery) and 0 meaning exposure. The following might be in a testing module on understanding the flow of control in an if-else construction, assuming an earlier expository introduction:
<META name="prereqs" content="ifElseRead:0, comparisonNumeric, assignment"> <META name="outcomes" content="ifElseRead">
There is much meta data about a lesson that I think is useful, even when our total number of lessons is small. A lot of the data is most obviously considered while creating a lesson, when I find it easiest to add classifications, and to edit them from a copy of a similar template lesson.
If we are piecing together snippets of tutorial, I think it is important to be conscious of what the prerequisites are and what is being taught. I would be happy to give a first pass on a consistent pattern for naming basic and composite skills for introductory programming in Python. I still like the idea of short names for composite skills, so a persistent structure is needed to store components of compound skills. One simple approach would be to use a text file with Python dictionary syntax and list values: "loops":["forLoop", "whileLoop"] "booleanExpression":["booleanExpressionAtomic", "andOp", "orOp", "notOp"]
Using <meta ...> is a great way to embed information. It is not something that would create any syntax problem when viewed in a normal html browser; even better, it would be ok'ed by htmlTidy! In terms of "content": rather than creating a new syntax (e.g. andOp, orOp), why not simply use Python keywords, syntax, or built-in functions as is, whenever possible? For example:

<META name="prereqs" content="if, <, >, ==, !=, and, or, not">
<META name="outcomes" content="else, elif">

[snip] I have looked through many narrative tutorial introductions, and I still
like the idea of being able to extract a reference on what has been introduced so far. I like the idea of marking new syntax and summaries in the expository text, maybe with a <div class="syntax"> and <div class="summary">, making them easy to extract with ElementTree, and consistent in their display. It would be nice for these summaries to pop up in a separate window or tab if requested.
Classes for content/display mode are a good idea. I do not know if that fits in with the Python localhost
interface. If there had been any of these elements in lessons so far, I would put Syntax and Summary buttons somewhere, at the bottom of the lesson page or on a separate reference web page, or in a separate frame.
I don't like frames... but there are ways with dynamic html to hide and show content. And I agree that the interface should be consistent - at the very least, within a single tutorial spanning many pages. Eventually (version 1.0), it would be nice to have a standardisation convenient yet complete enough to be adopted by everyone. There is an embedded style in Crunchy Frog pages. Styles for syntax and
summary could be added.
Agreed.

Again, some agreement on a starting scheme is useful.

I think this is going to evolve quasi-organically.

Other people's suggestions/agreements much appreciated.

Indeed!!!

André
-- Andrew N. Harrington
participants (3)
- Andre Roberge
- Andrew Harrington
- kirby urner