Assessment Issues
On Mon, Feb 01, 1999 at 06:40:15PM +0800, Rhandeev Singh wrote:
> I think what we need here is something beyond the written examination.
> With interactive systems, this is now possible. Very little work has gone
> into this area; the ground is open for the breaking.
Please note that I have avoided any straitjacket in EDUML by creating an
<ITEM> element which can be anything from a multiple-choice item or another
traditional test element (true/false, short answer, essay, matching
question) to a call to an interactive program, a browser, a procedure, a
lab, something to read (a HOWTO, documentation, or a textbook-like
fragment)... and basically anything we can or will think of. The price of
that flexibility is that, unlike other testing markup languages, this one
does not spoon-feed the parameters to the maximum. This is why I proposed a
<SCRIPT> element in which we show how a quizzer might handle the more
classic types.
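For instance, a classic multiple-choice <ITEM> might look roughly like the
sketch below. This is only to show the idea; the TYPE attribute and the
STEM and CHOICE child elements are invented for this example and are not
part of any settled DTD:

    <ITEM TYPE="multiple-choice">
      <STEM>Which command lists the contents of a directory?</STEM>
      <CHOICE CORRECT="yes">ls</CHOICE>
      <CHOICE>cd</CHOICE>
      <CHOICE>pwd</CHOICE>
      <!-- a <SCRIPT> element would show the quizzer how to present
           and score this classic item type -->
    </ITEM>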
Regarding interactive program calls: the quizzers will need to determine
whether the program exists on the system on which it is called... and if
not, will move on to an alternative <ITEM>. I have in mind that this will
be for Linux servers and that we should feel free to make calls to any
Linux program. A list of recommended packages for educational servers can
then be compiled from the calls made in the EDUML database. This will be
easy since the calls will be marked up and can be extracted and converted
to a list automatically... making a system admin's job more pleasant.
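As a sketch of how both ideas could fit together (again, every element and
attribute name here is invented for illustration), a program-call <ITEM>
might carry its fallback inline:

    <ITEM TYPE="program">
      <CALL PROGRAM="/usr/bin/bc" PACKAGE="bc">
        Have the student compute 2^10 interactively in bc.
      </CALL>
      <ALTERNATIVE>
        <ITEM TYPE="short-answer">What is 2 to the power of 10?</ITEM>
      </ALTERNATIVE>
    </ITEM>

The quizzer tests whether /usr/bin/bc is present and falls back to the
inner <ITEM> if it is not, and a trivial script that collects every
PACKAGE attribute in the database yields the recommended-package list.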
> Let's start a thread on how such evaluations could best be designed. I
> propose that we first identify the key objectives of evaluation; e.g. what
> do we want to quantify about the candidate?
Personally, I don't believe we can or should attempt to make assessment a
serious scientific discipline. I have adopted the mastery method based on
criterion-referenced prescribed learning objectives (a mouthful which
basically means "as scientific as we can be in the field of education").
But really, deep down, it is based on the ministry of education's current
opinions on what should be taught, and on the teacher's opinion of which
assessment tools should be administered to see whether what the ministry
thinks should be taught has in fact been learned by a given student at a
given point in time.
This is why I believe it all boils down to trust. People trust educators
to provide an informed opinion of what a student has learned.
Furthermore, we can report such informed opinions either as an itemized
list of what has been mastered, or as a bell curve showing how a student
has done compared to other students in a somewhat similar situation.
Bruno