[Edu-sig] Python Programming: Procedural Online Test

damon bryant damonbryant at msn.com
Tue Dec 6 03:19:20 CET 2005


Hi Rodrigo!

>If I understood correctly the proposal is to give a "hard"-A for some
>and an "easy"-A
>for others, so everybody has A's (A=='good score'). Is that it?

No, students are not receiving a hard A or an easy A. I make no 
classifications such as those you propose. My point is that questions are 
placed on the same scale as the ability being measured (called the theta 
scale). Grades may be mapped to that scale, but a hard A or an easy A will 
not be assigned under the conditions described above.

Because all questions in the item bank have been linked, two students can 
take the same computer adaptive test and have no items in common between 
the two administrations, yet their scores are on the same scale. Research 
has shown that even low ability students, despite their performance, prefer 
computer adaptive tests over static fixed-length tests. Adaptive testing 
has also been shown to lower test anxiety while serving the same purpose as 
fixed-length linear tests: educators are able to extract the same level of 
information about student achievement or aptitude without banging a 
student's head up against questions that he or she has a very low 
probability of getting correct. High ability students, instead of being 
bored, receive questions on the higher end of the theta scale that are 
appropriately matched to their ability and therefore challenge them.
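
To give a feel for how responses to different items can still yield scores 
on one theta scale, here is a rough Python sketch. It assumes a simple 
Rasch (one-parameter logistic) model and a crude grid search, which is only 
a stand-in for whatever estimator a real testing platform would use:

    import math

    def likelihood(theta, difficulties, responses):
        """Likelihood of a response pattern (1 = correct, 0 = incorrect)
        under the Rasch model."""
        value = 1.0
        for difficulty, response in zip(difficulties, responses):
            p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
            value *= p if response else (1.0 - p)
        return value

    def estimate_theta(difficulties, responses):
        """Maximum-likelihood theta found by grid search over the
        typical -3 to 3 range."""
        grid = [i / 100.0 for i in range(-300, 301)]
        return max(grid, key=lambda theta: likelihood(theta, difficulties, responses))

    # Two examinees answer *different* items, yet both estimates land on
    # the same theta scale.
    print(estimate_theta([-1.0, 0.0, 1.0], [1, 1, 0]))   # easier item set
    print(estimate_theta([0.5, 1.5, 2.5], [1, 1, 0]))    # harder item set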

>That sounds like
>sweeping the dirt under the carpet. Students will know. We have to
>prepare them to
>tackle failure as well as success.

In fact, computer adaptive tests are designed to administer items to a 
person of a SPECIFIC ability that yield a 50/50 chance of a correct 
response. For example, take two examinees: Examinee A has a true theta of 
-1.5, and Examinee B has a true theta of 1.5. The theta scale has a typical 
range of -3 to 3. Suppose a question has been mapped to the theta scale 
with a difficulty value of 1.5 (how we estimate this is beyond our 
discussion, but it is relatively easy to do with Python). The item is 
appropriately matched for Examinee B because s/he has approximately a 50% 
chance of getting it right - not a very high chance or a very low chance of 
getting it correct, but an equi-probable opportunity of either a success or 
a failure.
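
As a rough illustration of that 50/50 point, here is a minimal Python 
sketch assuming the one-parameter (Rasch) logistic model; the function name 
and the numbers are mine, chosen to mirror the example above:

    import math

    def p_correct(theta, difficulty):
        """Probability of a correct response under the Rasch (1PL) model."""
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    # An item mapped to the theta scale with a difficulty of 1.5:
    print(p_correct(1.5, 1.5))    # Examinee B: 0.5, an even chance
    print(p_correct(-1.5, 1.5))   # Examinee A: ~0.047, a very low chance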

According to sampling theory, with multiple administrations of this item to 
a population of persons with a theta of 1.5, there will be approximately 
equal numbers of successes and failures on this item, because the odds of 
getting it correct vs. incorrect are equal. However, with multiple 
administrations of this same item to a population of examinees with a theta 
of -1.5, which is substantially lower than 1.5, there will be far more 
failures than successes. Adaptive test algorithms seek to maximize 
information about examinees by estimating their ability and searching the 
item bank for questions that match their ability levels, thus providing 
roughly a 50/50 chance of a correct response.
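
Here is a minimal sketch of that selection step, again assuming a Rasch 
item bank (where an item is most informative when its difficulty sits at 
the current ability estimate); the bank and the estimate below are made up 
for illustration:

    import math

    def rasch_information(theta, difficulty):
        """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
        p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
        return p * (1.0 - p)

    def next_item(theta_estimate, item_bank, administered):
        """Pick the unadministered item that is most informative at the
        current ability estimate."""
        candidates = [item for item in item_bank if item['id'] not in administered]
        return max(candidates,
                   key=lambda item: rasch_information(theta_estimate, item['difficulty']))

    bank = [{'id': i, 'difficulty': d}
            for i, d in enumerate([-2.0, -1.0, 0.0, 1.0, 1.6, 2.5])]
    print(next_item(1.5, bank, administered=set()))   # picks the item with difficulty 1.6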

This is very different from administering a test on which the professor 
seeks an average score of 50%, because low ability students will get the 
vast majority of questions wrong, which could potentially increase anxiety, 
decrease self-efficacy, and lower the chance of acquiring information in 
subsequent teaching sessions (Bandura, self-regulation). Adaptive testing 
can mitigate the psychological influences of testing on examinees by giving 
both high and low ability students an equal opportunity to experience 
success and failure to the same degree, because each receives items 
appropriately matched to his or her skill level. This is the aspect of 
adaptive testing that is attractive to me. It may not solve the problem, 
but it is a way of using technology to move in the right direction. I hope 
this is a better explanation than the one I provided earlier.



>From: Rodrigo Senra <rsenra at acm.org>
>To: edu-sig at python.org
>Subject: Re: [Edu-sig] Python Programming: Procedural Online Test
>Date: Mon, 5 Dec 2005 19:53:00 -0200
>
>
>On 5 Dec 2005, at 7:50 AM, damon bryant wrote:
>
> > One of the main reasons I decided to use an Item Response Theory (IRT)
> > framework was that the testing platform, once fully operational,
> > will not
> > give students questions that are either too easy or too difficult
> > for them,
> > thus reducing anxiety and boredom for low and high ability students,
> > respectively. In other words, high ability students will be
> > challenged with
> > more difficult questions and low ability students will receive
> > questions
> > that are challenging but matched to their ability.
>
>So far so good...
>
> > Each score is on the same scale, although some students will not
> > receive the same questions. This is the beautiful thing!
>
>I'd like to respectfully disagree. I'm afraid that would cause more
>harm than good.
>One side of student evaluation is to give feedback *for* the
>students. That is a
>relative measure, his/her performance against his/her peers.
>
>If I understood correctly the proposal is to give a "hard"-A for some
>and an "easy"-A
>for others, so everybody has A's (A=='good score'). Is that it?
>That sounds like
>sweeping the dirt under the carpet. Students will know. We have to
>prepare them to
>tackle failure as well as success.
>
>I do not mean such efforts are not worthy, quite the reverse. But I
>strongly disagree
>with an adaptive scale. There should be a single scale for the whole
>spectrum of tests.
>If some students excel their results must show this, as well as if
>some students perform
>poorly that should not be hidden from them. Give them a goal and the
>means to pursue
>their goal.
>
>If I got your proposal all wrong, I apologize ;o)
>
>best regards,
>Senra
>
>
>Rodrigo Senra
>______________
>rsenra @ acm.org
>http://rodrigo.senra.nom.br
>
>
>
>
>_______________________________________________
>Edu-sig mailing list
>Edu-sig at python.org
>http://mail.python.org/mailman/listinfo/edu-sig



