[Edu-sig] Python Programming: Procedural Online Test

damon bryant damonbryant at msn.com
Sun Dec 4 19:52:56 CET 2005


Scott:

I will attempt to incorporate your suggestion of keeping track of 
performance; I'll need to create some attributes on the examinee objects 
to hold past test scores generated within the system.
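
Something along these lines is what I have in mind for the examinee 
objects; the attribute and method names below are placeholders of my 
own, not the system's actual schema:

    class Examinee:
        """Holds identity plus a running history of past test scores."""

        def __init__(self, examinee_id):
            self.examinee_id = examinee_id
            self.score_history = []  # list of (test_id, percent_correct)

        def record_score(self, test_id, percent_correct):
            self.score_history.append((test_id, percent_correct))

        def past_scores(self, test_id):
            return [p for t, p in self.score_history if t == test_id]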

I am, however, approaching the scoring differently. Although I do report 
percentage correct, I'm using Item Response Theory to (1) score each 
question, (2) estimate ability using a Bayesian algorithm based on 
maximum likelihood, (3) estimate the error in the ability estimate, and 
(4) select the most appropriate question to administer next. This is 
very similar to what is done at the Educational Testing Service in 
Princeton with the computer-adaptive versions of the SAT and the GRE. I 
don't know the language used to develop their platform, but the demo is 
developed in Python, using the numarray and threading modules to widen 
the bottlenecks and speed the delivery of test questions served as HTML 
to the client's page.
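
To make the adaptive loop concrete, here is a minimal sketch of the kind 
of logic I'm describing, written for a two-parameter logistic (2PL) IRT 
model. The toy item pool, the grid-based posterior, and the 
maximum-information selection rule are simplified stand-ins of my own, 
not the production code (I've used plain Python here for readability; 
the real system vectorizes this sort of thing with numarray):

    import math

    # Hypothetical 2PL item pool: (discrimination a, difficulty b) pairs.
    ITEMS = [(1.2, -1.0), (0.8, -0.5), (1.5, 0.0), (1.0, 0.7), (1.3, 1.4)]

    # Ability grid from -4 to +4 and a standard-normal prior over it.
    THETA = [-4.0 + 0.05 * i for i in range(161)]
    PRIOR = [math.exp(-0.5 * t * t) for t in THETA]

    def p_correct(a, b, theta):
        """2PL probability of a correct response at ability theta."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def posterior(responses):
        """Posterior over the theta grid given ((a, b), correct) pairs."""
        post = PRIOR[:]
        for (a, b), correct in responses:
            for i, t in enumerate(THETA):
                p = p_correct(a, b, t)
                post[i] *= p if correct else (1.0 - p)
        total = sum(post)
        return [w / total for w in post]

    def ability_and_error(responses):
        """Posterior mean as the ability estimate, posterior SD as its error."""
        post = posterior(responses)
        mean = sum(t * w for t, w in zip(THETA, post))
        var = sum((t - mean) ** 2 * w for t, w in zip(THETA, post))
        return mean, math.sqrt(var)

    def next_item(responses, used):
        """Pick the unused item with maximum Fisher information at theta-hat."""
        theta_hat, _ = ability_and_error(responses)
        def info(item):
            a, b = item
            p = p_correct(a, b, theta_hat)
            return a * a * p * (1.0 - p)
        return max((i for i in ITEMS if i not in used), key=info)

After each response you append ((a, b), correct) to the response list, 
re-estimate, and administer whatever next_item returns; the session ends 
once the error estimate drops below a preset threshold.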

Thanks for your comments!

By the way, I am looking for teachers, preferably middle and high school, 
who would be willing to trial the system. I have another site where they 
will be able to enroll students, monitor testing status, and view scores 
for all of their students. Do you know of any?




>From: Scott David Daniels <Scott.Daniels at Acm.Org>
>To: edu-sig at python.org
>Subject: Re: [Edu-sig] Python Programming: Procedural Online Test
>Date: Sat, 03 Dec 2005 12:03:06 -0800
>
>damon bryant wrote:
> > As you got more items correct
> > you got harder questions. In contrast, if you initially got questions
> > incorrect, you would have received easier questions....
>In the 70s there was research on such systems (keeping people at 80%
>correct is a great rule-of-thumb goal).  See the work done at Stanford's
>Institute for Mathematical Studies in the Social Sciences (IMSSS), where
>we did lots of this kind of work.  We generally broke the skills into
>strands (separate concepts), and kept track of the student's performance
>in each strand separately (try it; it helps).  BIP (Basic Instructional
>Program) was an ONR (Office of Naval Research) sponsored system that
>tried to teach "programming in Basic."  The BIP model (and often the
>"standard" IMSSS model) was to score every task in each strand, and find
>the "best" for the student based on his current position.
>For arithmetic, we actually generated problems based on the different
>desired strand properties; nobody was clever enough to generate the
>programming problems, so we simply consulted our DB.  We taught how to
>do proofs in Logic and Set Theory using some of these techniques.
>Names to look for on papers in the 70s-80s include Patrick Suppes (head
>of one side of IMSSS), Richard Atkinson (head of the other side),
>Barbara Searle, Avron Barr, and Marian Beard.  These are not the only
>people who worked there, but they are names I recall that should help
>you find the research publications (try Google Scholar).
>
>
>A follow-on for some of this work is:
>      http://www-epgy.stanford.edu/
>
>I worked there "back in the day" and was quite proud to be a part of
>some of that work.
>
>--Scott David Daniels
>Scott.Daniels at Acm.Org
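
Since Scott's description above is essentially an algorithm (score every 
task in each strand, track performance per strand, and pick the next 
task so the student stays near the ~80% success rate), here is a rough 
sketch of one way that could look in Python. The class and data layout 
are my own guesses, not the actual IMSSS/BIP implementation:

    from collections import defaultdict

    TARGET = 0.80   # the ~80%-correct rule of thumb mentioned above

    class StrandTracker:
        """Per-strand positions, nudged to hold success near TARGET."""

        def __init__(self, tasks):
            # tasks: strand name -> list of task ids, ordered easy to hard
            self.tasks = tasks
            self.position = {s: 0 for s in tasks}  # current index per strand
            self.recent = defaultdict(list)        # strand -> recent 0/1 scores

        def record(self, strand, correct, window=5):
            scores = self.recent[strand]
            scores.append(1 if correct else 0)
            del scores[:-window]                   # keep a sliding window
            rate = sum(scores) / len(scores)
            pos = self.position[strand]
            if rate > TARGET and pos < len(self.tasks[strand]) - 1:
                self.position[strand] = pos + 1    # doing well: harder task
            elif rate < TARGET and pos > 0:
                self.position[strand] = pos - 1    # struggling: easier task

        def next_task(self, strand):
            return self.tasks[strand][self.position[strand]]

Each strand advances or retreats independently, which seems to be the 
property Scott found valuable ("try it; it helps").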



