‘If you mention computers and testing in the same sentence, the first things most people think of are long sequences of multiple-choice questions, and specially designed answer cards filled in with No. 2 pencils.’ So observed David Michael and Sande Chen in their 2005 report on the potential of ‘serious’ video games both to promote and assess learning.
To some extent this is still true. Assessment methods need to be better aligned with our current understanding of how people learn. Too many high-stakes tests are administered to individual students in examination rooms, contexts far removed from those in which learning originally took place.
Improving assessment is important for reasons of equity, validity, and compliance with government policies on, for example, e-portfolios and inclusion. But reforming assessment is difficult because it requires change at all levels of an educational system – from classroom to government.
For the first time, we can assess what really matters, rather than simply what is easy to assess. We need to move beyond ‘snapshots’ of students’ performance towards assessments that track how their learning is developing over time. Assessment that is rooted in ranking students and schools needs to give way to a more enlightened approach that works ‘harder’ to provide:
- useful diagnostic feedback to students about their learning;
- useful information to teachers;
- a solid basis for evidence-based decision making for policymakers.
This means assessing the process as well as the product of learning – the ‘how’ as well as the ‘what’. Currently, assessment may inhibit creativity and turn enthusiastic, inquisitive students into results-driven people desperate to avoid making mistakes. As anyone who has suffered exam nerves knows, traditional modes of assessment are too sensitive to stress, illness and emotional upsets.
Assessment also needs to be rethought because it is increasingly out of kilter with contemporary teaching and learning. Compared to days gone by, students now work much more collaboratively and cooperatively on group projects at school and university. Inquiry learning is common, with learners encouraged to ask questions about the world, to collect data to answer their questions, and to make and test their discoveries. Technology allows for the sophisticated assessment of students’ inquiries and the results of those inquiries, be they in the form of hypotheses or models.
Another source of misalignment concerns multimedia. Students now learn, communicate and socialise via e-books, websites, social network websites, simulations and a plethora of other multimedia. They routinely communicate their learning via written and spoken English, mathematical and logical notations as well as diagrams, digital photographs, videos, charts and graphs. Assessment needs updating so that students can demonstrate their learning in the same wide range of forms that they encountered during its acquisition.
Finally, assessment needs to reflect a wide variety of teaching and learning practices such as project-, inquiry- and problem-based learning, in other words learner-centred as well as teacher-centred practices. Such methods can engross students in their work, but their engagement – and their performance – often plummets during formal assessment.
By contrast, e-assessments have the potential to engage students in immersive, meaningful and challenging activities which provide them and their teachers with rich insights into their reasoning and knowledge. For example, using data-mining techniques, researchers analysed the help-seeking behaviour of 1,400 students who used an intelligent tutoring system for high-school geometry. They reported that not only could they better assess students while teaching them, but also that the assessment could be done more efficiently. These results suggest that there may be no need to differentiate between ‘teaching’ and ‘testing’ – over time, learning is reliably indicated by how a student responds to teaching. Tracking how much help a student needs with a task will result in as valid an assessment as a traditional test taken after teaching has ended.
JISC, which champions the use of digital technology in education, advocates technology-based portfolios known as e-portfolios. It says they encourage ‘profound forms of learning’, as well as having a role in professional development and accreditation, and the potential to support students moving between institutions and stages of education.
Dr Helen Kelly always keeps a special patient up her sleeve for when she has to assess her speech and language therapy students at University College Cork. As the dreaded day looms, she can determine the complexity of the case by altering the number of clinical evaluations available to her students. The range should be ‘enough for them to make a differential diagnosis, but not too many so as to overwhelm them’.
Dr Kelly’s obliging patient comes courtesy of PATSy, an established online case-based resource. PATSy allows medical students to repeatedly practise their skills on more than 60 virtual ‘patients’. Used in medicine, health science and clinical psychology, the system provides students with interactive virtual patients as well as real data in the form of videos, assessments, and anonymised medical histories.
PATSy, recently used as the core platform in a large research project, allows students to sharpen up their clinical skills as often as they like – even on the same patient. It is real learning by doing.
Dr Kelly says that PATSy ‘gives students real-life data to practise their clinical skills in assessment, differential diagnosis and linking theory to clinical work. It allows measurement of their clinical decision-making skills as well as their theoretical knowledge’.