In November 2010 I had the pleasure of joining the extraordinary community of eLearning, training and university educators who attended the Media and Learning: Brussels 2010 Conference. I was thrilled to give one of the plenary keynotes, and wowed by the strong positive feedback overall as well as the wonderful new friends and colleagues I encountered at the event.
The crux of the presentation was that there are pragmatic solutions for bringing evidence-based assessments into the learning process, and that we can draw models and rationale for those assessments from the arts.
Yesterday I noted an interesting Prezi presentation online from David Gibson which echoes many of these sentiments, but initiates the dialog from the perspective of a scientist. Gibson references the work of Bill Mislevy – specifically his model for evidence-centered design. In the deck, Gibson also highlights an article by Walvoord and Anderson (1998) to define assessment. Gibson says that assessment provides “Measures & Feedback for improvement of performance and evaluating learners that are …” and here’s where Gibson quotes Walvoord & Anderson “… multidimensional, integrated, and revealed in performance over time.”
It’s a clean, simple, elegant definition which I’m sure I’ll find very useful for some time to come. It doesn’t escape my attention that Gibson keys in on this ‘multidimensional & integrated’ notion. Of course it has nothing to do with spatial dimensionality, but I do find it interesting and relevant that trying to ‘visualize’ the multidimensional nature of these definitions stretches our modeling and infographic-generation capacity as educators.
For many educators, the notion of a rubric, which defines educational objectives and outcomes in a two-dimensional matrix, can already be a little daunting. For the uninitiated, the typical approach to analysis via rubric today goes something like this…
| Objective | 3 | 2 | 1 |
|---|---|---|---|
| Tie your shoes | Ties shoes quickly. Knots are correctly formed. Loops are correctly formed. Laces remaining are an appropriate length. | Ties shoes. | Does not tie shoes. |
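A rubric like this is, at bottom, a two-dimensional lookup: objective by performance level. As a minimal sketch (the numeric scoring scale and function names here are my own illustrative assumptions, not part of any standard):

```python
# A two-dimensional rubric modeled as a nested mapping:
# objective -> performance level (3 = strongest, 1 = weakest) -> criterion.
# The 3/2/1 scale is an illustrative assumption.

rubric = {
    "Tie your shoes": {
        3: "Ties shoes quickly; knots and loops correctly formed; "
           "remaining laces an appropriate length.",
        2: "Ties shoes.",
        1: "Does not tie shoes.",
    }
}

def describe(objective: str, score: int) -> str:
    """Look up the descriptive criterion for an objective at a given score."""
    return rubric[objective][score]

print(describe("Tie your shoes", 2))  # -> Ties shoes.
```

The flat structure makes the limitation visible: each cell holds exactly one judgment, with no room for anything that cuts across objectives.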
Now this table can represent virtually any sub-objective that can be measured. But such measurements are problematic in a few ways:

1. They can be subjective (to admittedly varying degrees).
2. They can be constrained by time, even when that constraint is to some degree not relevant.
3. They are inherently limited; each sees only a small part of the whole.
In the rubric above, how is collaboration accounted for? Is cooperation with other students cheating, or effective 21st-century learning? Does it matter if one kid in the class simply always convinces someone else to tie their shoes for them? Is that child a more effective collaborator? And how is collaboration (in this instance, the specific question of equity and cooperation) calculated in this two-dimensional rubric?
So it behooves us to come up with a three-dimensional matrix to represent how collaboration plays a role. While communicating about better ways to tie one’s shoes might be favorable, simply bullying a neighbor into doing it for you is undesirable. One way to map that would simply be to add another row to the table. Another would be to add a third dimension – a kind of depth axis – and one advantage of that approach is that it lets us consider several other elements, which may have broad applications across a diverse range of curricular objectives, in an integrated way. So we might map collaboration, communication and technical facility against the standard measurements.
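One way to picture that depth axis is to let each objective carry a score per cross-cutting dimension, so the structure becomes objective × performance level × dimension. A rough sketch, assuming a 1–3 scale and dimension names taken from the paragraph above (everything else here is hypothetical):

```python
# A sketch of the "depth axis": each objective gets a score profile
# across cross-cutting dimensions rather than a single cell value.
# The names, comments, and 1-3 scale are illustrative assumptions.

dimensions = ["collaboration", "communication", "technical facility"]

# scores[objective][dimension] = performance level on the assumed 1-3 scale
scores = {
    "Tie your shoes": {
        "collaboration": 3,       # e.g. helps a neighbor learn the knot
        "communication": 2,       # explains the steps adequately
        "technical facility": 2,  # ties shoes, but slowly
    }
}

def profile(objective: str) -> dict:
    """Return the integrated, multidimensional score profile for an objective."""
    return {d: scores[objective][d] for d in dimensions}

print(profile("Tie your shoes"))
```

The point of the structure is that the bully and the collaborator, who might score identically on the flat rubric, now produce visibly different profiles.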
It’s a fascinating question and one which I’m sure has deep implications for understanding learning better, for measuring outcomes better and for creating better eLearning.