The talk discusses the meaning of the concept of ‘assessment cultures’ and sheds light on how analytical tools for understanding assessment cultures can inform educational research, policy and practice. Using nature as a metaphor, the talk identifies foundational components of education systems that may help us come to terms with the cultural differences that constitute the mindset and professional practices of scholars, policymakers and teachers. The talk identifies a wide range of factors that can be understood as constitutive of distinct assessment cultures. Further, the talk points to the transnational dimension of educational assessment and the presence of European and global subcultures within and across educational systems, e.g. professional associations such as the Association for Educational Assessment – Europe. The talk concludes by discussing how the concept of ‘assessment cultures’ can serve as a fruitful analytical tool for comparative, historical and policy studies of education. With examples from the Nordic region, including the current educational challenges related to the COVID-19 pandemic, the talk explores the added value of understanding assessment cultures.
The COVID-19 pandemic crisis has had a major impact on how schooling is done. With schools closed, teaching and learning continue, dependent on information and communication technologies (ICT). Assessment practices have also been adjusted to ensure physical separation. To the degree that this has been a success, there is the possibility that post-pandemic societies might choose to de-school, switching to online teaching and learning only. In this perspective presentation, I describe two major risks if that future were to be embraced: lack of equitable access and dehumanization. My argument is that these futures already exist in pockets around the globe and that we can use those experiences to evaluate those options. I suggest instead that the post-pandemic period gives us an opportunity to re-imagine what schools and schooling are for, and I advocate for a re-schooled society in which our investment in schools builds and develops society.
Figshare.com | https://figshare.com/authors/Gavin_Brown/1192740 |
Google Scholar | http://scholar.google.co.nz/citations?user=r4ywKA8AAAAJ&hl=en |
Google Site | https://sites.google.com/aucklanduni.ac.nz/gavin-t-l-brown/home |
ORCId | http://orcid.org/0000-0002-8352-2351 |
ResearchGate | https://www.researchgate.net/profile/Gavin_Brown3 |
Twitter | https://twitter.com/DocGTLBrown |
University of Auckland | https://unidirectory.auckland.ac.nz/profile/gt-brown |
“All of a sudden, those longer-term projects and changes to the way we carried out our assessments were brought into sudden relief. Having your income fall off a cliff tends to focus the mind.” That is what one senior manager of an awarding body where I am a committee member said. Strangely, a pandemic has been more effective in creating change almost overnight than twenty years of lobbying policy makers!
This presentation explains eight steps that some awarding bodies have taken when looking for new technologies and engaging with potential suppliers. It is based on the premise that ‘traditional’ approaches to procurement are broken and work against a solution-focused engagement with the market. When decisions about change have to be made fast, getting priorities in the right order is vital. The wrong choice of how to go about making change, and of whom to choose as a partner, can be disastrous, expensive and more disruptive than the change itself needs to be.
Those participating will take away three key lessons:
Plus, of course, the eight tried-and-tested steps!
COVID-19 required awarding organisations to adopt and adapt e-assessment tools at pace. The most important lesson in doing this has been about how our learners approach new assessment formats and deliveries.
No longer were our observations of learner perspectives, preferences and adaptability cushioned by self-selection or familiar support networks, and we saw whole cohorts approach onscreen, remotely invigilated exams not with curiosity but with compulsion. Even amongst groups we might consider ‘digital natives’, awarding organisations gathered a wealth of observations about reactions, behaviours and performance issues linked to technical challenges. The video evidence from remote invigilation has provided uncomfortable witness to the level of distress possible when problems occur. I saw first-hand the disruption and anxiety that can be caused by technical issues or minor user error. From forgotten passwords, bandwidth or laptop problems to a malfunctioning keyboard or mouse, the causes of exam stress were transformed.
This highlighted practical issues around:
Underpinning these are key areas that require our deeper reflection:
These are major challenges that we must solve through serious consideration of how to take strategic, operational and evidence-based steps to support learners through preparation and technical support and, most critically, by redesigning assessment on a truly digital basis, with a redefined sense of test preparedness.
In Scandinavia, digital exams have been the norm for almost a decade. In response to Covid-19 and the logistical constraints it has imposed on educational institutions, we see the rest of the world increasingly starting to perceive e-assessment as a must rather than a nice-to-have, resulting in accelerating innovation cycles.
How can we handle high-stakes assessment during the Covid-19 emergency, when remote home exams are becoming a “new norm” in e-assessment? To maintain the quality of e-assessment at home, Inspera has reflected on ways to ensure academic honesty (Carroll, 2013):
As a technology provider, we have decided to give our partners the option to add a remote proctoring solution on top of our existing cloud-based end-to-end eAssessment platform. In this session, I will briefly present Inspera Remote Exam, the solution we have developed over the last six months and successfully beta-tested with partner institutions in June 2020. I will also discuss the main pillars of our design thinking that emerged from requirements engineering: a) Equality; b) Respecting Privacy; c) Effectiveness.
I also argue for maintaining the online exam integrity strategy that other partners, who wanted to avoid remote proctoring, have been adopting: assessment redesign. While remote proctoring is a reliable solution, their conclusion was that well-designed e-assessment is the best way to ensure examination integrity.
Our partners’ key finding has been that we can move beyond the simple migration of conventional assessment into a digital format and that institutions have the opportunity to improve the learning experience for students through rethinking how assessment can and should be implemented going forward.
While the Covid pandemic had an immediate impact on teaching and exam sessions, a less well understood effect has been its impact on the test authoring process. GradeMaker (a provider of exam authoring software) has seen the impact of this crisis on the authoring process first hand. For many exam boards, traditional authoring models involving collaborative authoring or face-to-face QPEC meetings became impossible to operate. As a result, schedules commonly overran, causing deep concern about readiness to deliver exams in the next assessment cycle. As the summer went on, for some bodies concern about future exam readiness became urgent.
Technology has proved central to meeting this challenge, and as a result we have seen a marked increase in the adoption of online systems which fully support the authoring process. This change has been accompanied by a greater interest in innovating in the way authoring is carried out, with more attention being paid to item banking and new opportunities to enforce quality processes. An illustration of this is the work being conducted by the Mauritius Examinations Syndicate, which transferred all of its authoring procedures online during lockdown within the space of a few weeks. Similar moves have been made by other European and African boards.
This presentation will draw on examples to summarise the pressures facing exam bodies, and describe two approaches to getting authoring back on track. It will summarise the approach taken to supporting rapid process change and the implementation challenges involved, and discuss the role technology can play in item and test authoring in the current environment, in which lockdowns and social restrictions look likely to endure for some time.
COVID-19 presented a challenge to Ireland’s Leaving Certificate Examination. With a deteriorating public health environment, solutions were sought to facilitate final certification of secondary school students. It is difficult to transform overnight a national examination system so steeped in societal expectation. Public confidence in the exam is high on the basis that, despite shortcomings, it offers an objective measure of student achievement in a context where many teachers live amongst the communities they serve. Externally marked exams, the narrative goes, remove suspicion that teachers might be pressurised to look more favourably on some students than others when assigning grades.
Confronted with an unprecedented threat to this iconic exam, a new system was devised, combining teacher-assigned marks with an untried standardisation process. This was heralded as a solution to avoid a possible last-minute meltdown if examinations were planned and then could not be held.
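For readers unfamiliar with what a ‘standardisation process’ typically involves, the sketch below shows one generic approach: rescaling a school’s teacher-assigned marks so that their distribution matches the school’s historical results. This is a hypothetical illustration under simplifying assumptions; it is not the model actually used for the 2020 Leaving Certificate Calculated Grades.

```python
# Hypothetical illustration of a simple standardisation step: rescale a
# school's teacher-assigned marks so their mean and spread match that
# school's historical distribution. Not the actual 2020 Leaving
# Certificate model; names and figures are illustrative only.
from statistics import mean, stdev

def standardise(teacher_marks, hist_mean, hist_sd):
    """Map teacher-assigned marks onto a historical mark distribution."""
    m, s = mean(teacher_marks), stdev(teacher_marks)
    return [round(hist_mean + hist_sd * (x - m) / s, 1) for x in teacher_marks]

# Example: marks awarded well above a school's historical average of 62
# (sd 10) are pulled back towards that historical distribution.
print(standardise([70, 75, 80, 85, 90], hist_mean=62, hist_sd=10))
# -> [49.4, 55.7, 62.0, 68.3, 74.6]
```

Even in this toy form, one source of contention is visible: an individual’s standardised mark depends on the school’s historical profile as much as on the individual’s own work.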
In the end, it didn’t work out quite as expected and the story continues still. It’s a story built around a plot of sound intention, careful planning, political unease, grade inflation, some unexpected slips, and possible legal action. Whereas it might seem reasonable that COVID-19 would act as a catalyst for fundamental reform of the Leaving Certificate, that is not guaranteed.
A ‘return to normal’ after Covid-19 has been the watchword of too many. This is not an option in education: there will be no ‘after’ and the envisioned ‘normal’ often reconstructs reality to promote questionable myths and practice. Fortunately, education authorities and practitioners have not merely been coping with the effects of constraints required to limit infection but have been exploring creatively what a new and better normal can look like. I introduce two contrasting examples.
Wales is radically changing its national assessment culture in schools from a focus on benchmarks and accountability to a focus on learner progression and well-being: how has the crisis served to accelerate and support this change in culture?
The University of Glasgow has over weeks changed its default model of pedagogy and assessment from face-to-face teaching to online learning: how has the crisis supported innovation while maintaining a culture which needs to ensure the credibility of its qualifications in the eyes of students, the wider public and professional bodies? What are the lessons here if assessment cultures are to support a better future in bleak times?
Learning and assessment are, in a normal school situation, part of the daily routine. Then the coronavirus came and challenged assessment routines and practices. In Norwegian schools, exams were cancelled, and in higher education, all exams were made digital. Immediate consequences were that students experienced unfair assessment and less student involvement. Nevertheless, perceived trust and flexibility also became important for the students. Teachers experienced being left alone with both teaching and assessment tasks. Based on a research study of these experiences in spring 2020, the presenter discusses: what can we learn from assessment in the time of corona to support the development of assessment cultures, now that digital teaching and assessment seem to be the new normal in our school system?
When the UK went into lockdown, the media was quick to question how those from the poorest communities would still be able to access education. The focus was on compulsory education. Prior to the UK lockdown, many students accessed the online aspects of their courses through a mixture of library computers and their phones. Students were suddenly plunged into a situation where the lack of a laptop meant that they were barred from fundamental parts of their programme. Many universities responded by lending laptops, but the need far outstripped supply. Many of the students who did have access to computers found that their machines were unable to cope with the plethora of new software they were being asked to engage with. This was exacerbated by students returning to their homes, both in the UK and abroad, where internet connections were patchy and insufficient for the needs of university study.
I argue that the sudden move to online learning should have led universities to ask two questions. The first question is one of logistics: do students and staff have access to the technologies needed for eAssessment? The second question is one of fairness: can eAssessment be fairly implemented if students do not have access to appropriate technology? Many universities did not answer these questions. I argue that the failure to recognise the inequalities that prevent equitable eAssessment has had disastrous consequences for UK HE. I also argue that Covid-19 taught us how big the technological divide is in UK Higher Education, and how important it is for those constructing eAssessment to address this divide. I also argue that the methods of addressing these divides put forward by the university are inadequate. One such method was a no-detriment policy. This policy meant that no student would achieve a lower mark in their third-term eAssessment than the average of their assessments from the previous two terms. This policy sent a clear message that eAssessment was not currently considered equal to non-eAssessment, particularly not in the middle of a global pandemic. As we move into a new teaching term, a no-detriment policy cannot sufficiently account for both the technological divide and the development of student learning.
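As a concrete reading of that rule, the minimal sketch below applies a grade floor equal to the average of the two previous terms; the function name and the 0–100 scale are illustrative assumptions rather than any particular university’s implementation.

```python
# Minimal sketch of the no-detriment rule described above: the awarded
# third-term mark cannot fall below the average of the first two terms.
# Function name and 0-100 scale are illustrative assumptions only.
def apply_no_detriment(term1: float, term2: float, term3_eassessment: float) -> float:
    """Return the awarded third-term mark under a no-detriment floor."""
    floor = (term1 + term2) / 2           # average of the two earlier terms
    return max(term3_eassessment, floor)  # the eAssessment can only raise, never lower, the result

# Example: a student averaging 65 across terms 1-2 who scores 58 in the
# third-term eAssessment is awarded 65.
print(apply_no_detriment(68, 62, 58))  # -> 65.0
```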
Following the Government’s lockdown announcement in March 2020, Pearson (National Centre for PISA in England, Wales and Northern Ireland, in collaboration with OUCEA) moved from face-to-face scoring to an entirely remote scoring model for the PISA computer-based Field Trial assessments, with all training conducted via Microsoft Teams and scoring completed using PISA’s OECS software. We present our experience from the perspective of a National Centre managing remote scoring using Microsoft Teams and conclude that it is a successful alternative to face-to-face scoring that maintains high inter-scorer reliability (100% Reading, 97% Science and Mathematics).
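The reliability figures quoted here are agreement rates between scorers. As a rough illustration of how such a rate can be computed, the sketch below calculates simple percent agreement between two scorers’ codes for the same set of responses; the data and function name are hypothetical, and this is not PISA’s actual reliability procedure.

```python
# Rough illustration of an inter-scorer agreement rate: the percentage of
# responses to which two scorers assigned the same code. Hypothetical data
# and function name; not PISA's actual reliability procedure.
def percent_agreement(scorer_a, scorer_b):
    """Percentage of responses coded identically by both scorers."""
    assert len(scorer_a) == len(scorer_b), "scorers must code the same responses"
    matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
    return 100 * matches / len(scorer_a)

# Example: agreement on 29 of 30 double-coded responses is about 96.7%.
codes_a = [1] * 29 + [0]
codes_b = [1] * 30
print(f"{percent_agreement(codes_a, codes_b):.1f}%")  # -> 96.7%
```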
A high level of technical support and fostering a sense of ‘team’ were key factors in the successful facilitation and training of the overall scoring process. Strong leadership from the lead scorers, together with a daily system of item-by-item training and scoring, fostered a sense of team cohesion with resultant rich discussions and confident application of the scoring guides. This was strengthened by encouraging the team to collectively review ‘tricky to score’ items flagged within the system.
During the process and in the post-scoring survey, scorers reported that they felt well-supported and that their contribution to the scoring process was highly valued. Despite initial reservations from both lead scorers and the scoring teams about fully remote training and scoring, reflections at the end of the process revealed that scorers had not only enjoyed the experience of scoring but also highlighted the quality of discussion and interaction in this type of model compared with marking for other examinations.
The move from face-to-face to remote training and scoring resulted in a set of positive experiences for the scorers and highlights how technology can be leveraged to deliver high-quality training and scoring activities in real-time.