Crispin Chatterton explains GL Education’s approach to AI in assessment – what the company is doing to harness its potential, and how it is managing the risks
Most teachers are aware of the increasing use – and misuse – of generative artificial intelligence in education. When everyone from the prime minister down is busy promoting its potential as well as highlighting its attendant risks, it would be hard not to be. Some view AI as an opportunity; others see it as a threat. But no one denies that it’s playing an increasing part in students’ and teachers’ lives – and that it’s here to stay.
This is as true of assessment as it is of other areas of education – arguably more so, because an assessment that cannot fairly and robustly measure the skills and knowledge of those being assessed is failing in its primary function. And AI – by potentially allowing students to enhance and augment their work so that it no longer accurately reflects their abilities – clearly presents a challenge.
Assessment integrity
At GL Education, we are cautiously exploring the possible future uses of AI to enhance and improve the assessments and services we offer schools. Our goal – and that of our owners, Renaissance – is to use technology and AI to support the work of teachers. We too think it has the potential to free up teachers’ time, reduce their workload and broaden their students’ horizons. Indeed, unless it delivers those benefits, AI will not enhance education.
But we are also alive to its risks – and, in particular, our responsibility as an assessment provider to ensure that we make adjustments to prevent malpractice. Protecting the integrity of our assessments is vital if schools, students and parents are to retain confidence in the results and know they provide an accurate picture of academic ability and attainment.
So, while we already take steps to protect our assessments, we are now implementing additional measures to prevent students from using AI-enabled browser tools to improve their performance.
Content generation
We don’t use AI in any of our products yet, but we and our owners, Renaissance, are cautiously exploring ways it may be used to benefit our customers. However, we will always take a people-centric approach – not least because we are very aware that another potential pitfall of AI is inaccurate content generation.
AI can produce fluent and convincing responses to user prompts, but because it has no ability to think laterally, those responses can be incorrect or irrelevant. Similarly, when AI is used for assessment development, the biases found in content on the web can show up in its output. We would therefore urge caution if schools use AI to generate test content; we devote a lot of time and resources to ensuring our content is rigorous and free from bias, and that won’t change.
Debbie Weinstein, UK Managing Director of Google, was recently interviewed on Radio 4, where she defended the company’s decision to advise users of its AI platform, Bard, to cross-reference and check content on Google. She maintained it is a tool for collaboration around problem-solving rather than a search engine for specific information. As a knowledge provider, AI must therefore be treated with caution – although as a knowledge enhancer, it can provide useful support.
The future
Exam boards have already put in place strict rules to ensure students’ work is their own. This includes guidance and information on what counts as AI misuse, and requirements for teachers to help prevent and detect malpractice. We are just as determined that our assessments should mirror that rigour and be capable of fairly and robustly measuring the skills and knowledge of those being assessed.
Teachers do not have time to police AI, but we would urge schools and colleges to continue to take reasonable steps to prevent AI malpractice. More guidance and support should emerge from the Government’s recent consultation on AI in education.
In the meantime, we will continue to explore and adopt additional security measures to prevent students from using AI-enabled browser tools to enhance their work. School leaders and teachers can be confident that while GL Education and Renaissance acknowledge the potential benefits of AI in education, we are equally determined to manage it safely and effectively.
Crispin Chatterton is Director of Education at GL Education.