The AI conundrum is older and more “human” than we think

As the debate over AI moves beyond futile attempts to catch students’ use of ChatGPT, Jim Dickinson considers what we’ll want humans in HE to do in the future

Jim is an Associate Editor at Wonkhe

Over the past few weeks, I’ve been talking to students, SUs, academics, professional services staff, university leaders and plenty of other people who have a view on or a stake in higher education about AI.

And I can’t stress this enough. If the UK higher education sector continues to depend on the asynchronous assessment of a(ny) digital asset – produced without supervision – as a way of assessing a student’s learning, it’s game over.

There’s no way to prove they made it – and even if they did, it’s increasingly clear that making it doesn’t necessarily signal that they’ve learned anything in the process.

And even if universities could prove that students made it, the world – the public, employers, parents, professional accreditation bodies – won’t believe it.

The uncomfortable truth about generative AI is that if universities must persist with assessing and grading students, they’ll need to shift to supervising students’ production of things and/or engaging with them synchronously.

And as a result, given the time and resources available to students and staff, they’ll almost certainly need to do less of it.

Please could you stop the noise?

To the extent to which a “position” is settling in higher education, I would describe it as follows:

  1. Universities are keen to stress that AI is the future and keen to embrace it for all sorts of good reasons, albeit tempered by some ethical concerns
  2. Universities are also keen to rule out uses of AI that allow a student to pretend that they have learned something or mastered a skill when they haven’t
  3. There is a clear sense, for the time being at least, that assessment still matters and that it is important for universities to move towards more authentic forms of assessment

Let’s take each of those in turn.

On the “embrace” question, there are important issues surrounding staff development, subject practice and differential levels of access to the myriad, often subscription-based tools that are emerging both within academia and around it. But the capacity to “keep up” is not evenly distributed between students, academic staff or universities.

This problem has been around for a long, long time – but is exacerbated and exposed by Generative AI advances.

On the “detection” question, those who intend to cheat when producing a digital asset can do so with complete impunity. It’s also increasingly clear that those who intend to cheat while being supervised remotely can do so too. And the tools for flagging that a digital asset may have been produced by AI, or that a student is or isn’t who they say they are when sat in front of a PC, are either unreliable or produce false flags in ways that are unacceptably racist in a country funding its HE through international students.

This problem has also been around for a long, long time – but is also exacerbated and exposed by Generative AI advances.

On the question of authentic assessment, it is clear that this is easier to achieve, and better suited, in some disciplines than others. But more importantly, it is also clear that the prevailing economic model of higher education – involving a cross-subsidy from mass (often international) participation in “cheap to teach” subjects to prop up other areas – pretty much prevents authentic assessment where student-staff ratios are high.

This, again, is a problem that has been around for as long as I can remember in debates about assessment and learning – but it too is very much exacerbated and exposed by Generative AI advances.

Which is of no consequence at all

The muddle is real. Survey after survey suggests that students do not understand what they are allowed to use and not allowed to use – not least because it seems to change regularly and may be different for the student they’re sat next to on the bus.

Given the potential consequences, academic integrity investigations and processes are rightly long and complicated – and so even if they were effectively flagging the use of “banned” tools, they would collapse under the weight of the “cheating” that students admit to in said surveys.

A particular issue concerns the use of tools that aren’t actually about digital asset production at all – but are about the cognitive processes a student might go through before they put electronic pen to paper. If a student is automating all of the research, analysis and critique that might go into a project – but then writing a paper by “electronic” hand – they may only have “learned” how to write, which is the one “academic” skill that vanishingly few people will really need to personally master in the future.

As such, the bigger issue than assessment per se is that if most of the “academic skills” students learn can either be automated – or at least will be “good enough” when done by the tech – then the extent to which society needs institutions to cause students to master them diminishes. We’ll still need people to do X – but they might not need to be as good at it, and we might not need as many of them.

And the bigger issue still is a riff on “yes, but what is higher education for”. It’s easy to instinctively agree with academic staff who are passionate about subject immersion, the joy of learning, and the mastery of a subject. But they would say that. Without the sorting function, the (often ancillary or grafted-on) skills development function and the mass participation, there just isn’t a contemporary case for the employment of many of those making that case. The sector needs those students who just want to pass.

But in a sense, pivoting the mind (or even a discussion in a working group) to those sorts of questions that surround skills or the purpose of HE obscures the urgent need to deal with the assessment question. If, for the time being, both universities and society accept that graduating students do need to be able to master the skills and/or knowledge of the sort seen in subject benchmark statements, the sector really does need to be able to prove it.

And even if an accommodation can be made around the “learning” of an English graduate, society won’t be so keen on a work-in-progress approach to the training of doctors.

As I’ve reflected on here before, I suspect in the long term that pretty much the only way forward is the engagement of undergraduates (and taught postgraduates) in the production of new knowledge or the application of it – ideally in groups rather than as individuals.

But for the time being, while we work all of that through, I think there are three things that become key to bridging the gap between a different kind of HE and the one we think we have now.

Ambition makes you look pretty ugly

Thomas Basbøll, who is a resident writing consultant at the Copenhagen Business School Library, has a cracking blog post up here that expands on a pertinent X (Twitter) question about writing – in essence, should students be allowed to use ChatGPT to write?

Eventually, we may decide that the answer to that question is “yes”. But if, for the time being, the settled answer to that question is “no”, then we are going to have to watch students doing it. And if and when the answer becomes “yes” – perhaps by talking to the student or having them demonstrate their understanding of a topic in other ways – that will have to involve synchronous and/or supervised interaction too. And that is resource intensive.

Crucially, Basbøll effectively reflects on the way in which AI tools assist humans with work rather than do things “for” them – and so calls on universities and their educators to ask “what is it that we want students to be able to do on their own”. That is not a settled question – but a better one than asking if universities should allow students to write using ChatGPT:

…My challenge to those who believe that integrating generative AI into the higher education curriculum affords powerful new learning opportunities (beyond the obvious skill of learning how to use AI). It is true that in the future we will all let AI write speeches and emails that make erudite reference to Hamlet’s words in order to impress our audiences and engage their imaginations. But should an undergraduate degree in English literature teach you only how to get your AI to make you appear literate? Or do we want our students to actually internalize the language of Shakespeare?

Why don’t you remember my name?

The second thing that I think would help would be to develop a deeper understanding of how students approach assigned, written work. This is another cracking blog post – this time from Dave Cormier (from the University of Prince Edward Island in Canada) – which kicks off with the observation that “we have to learn to write essays because we have to learn to write essays”, uncomfortably debunks the idea that essay-writing develops skills, but then gets into the way in which students approach these kinds of tasks:

I want students to engage with the material. I want them to learn how to identify valid and credible information and learn how to apply it to problems they are facing. I want them to engage in the history of knowledge. With other thinkers in the field that we’re studying. Here’s the thing. I’m starting to think it really hasn’t been happening for 20 years.

His case is that the experience of constructing an essay in the past – or even today, for those who have the time and the passion (neither of which can sustainably be made compulsory these days) – involved finding resources and reading them. But since the advent of search, the experience is instead more likely to be as follows:

Have an argument, usually given by the instructor, do a search for a quote that supports that position, pop the paper into Zotero to get the citation right, pop it into the paper. No reading for context. No real idea what the paper was even about. Search on google/library website, control +F in the paper, pop it in the essay.

So here, asking what it is that universities want students to be able to do before they “put pen to paper” or even open their mouth in an oral exam, is also a useful question. And once you answer it, you may well end up concluding that the time required for them to do it is not available in the contemporary UK student “full-time” experience.

The panic, the vomit (from a great height)

Add up 1 and 2 above, then add to that the slow rise of autonomous agents (which are emerging as things that can do the whole process, end to end), and confronting what we want humans to do becomes urgent.

But once those questions are addressed, the third thing that I think would help starts with something that I would say anyway – treating students as partners in these questions is important. But this is not just about the need for their enthusiastic consent for change at both an individual and community level, and nor is it merely tactical given their tendency (although not a universal one) to embrace and early-adopt technology.

It’s about trust. Students are in a system where dreams are sold and realities hit hard – and where the ability to “live up” to the social and academic mythologies that surround higher education is severely restricted by unevenly distributed economic and health factors.

The idealised “full-time” student – immersed in books, engaged in endless extra-curriculars, kicking back and building their social capital and confidence for a few years – isn’t something we ought to let go of. But a culture that merely hopes it’ll be back some day, or that reaches for sticking plasters when it identifies deficits, won’t do either – because it’s not real now.

For too many, it’s a situation where the sector tells them that they “can” – and then oscillates between pretending that they “can”, out of shame for having roped them into the debt, and castigating them through graded “failure” when it turns out they couldn’t after all.

Doing all that we can to understand the lives they lead, the pressures they face, the things they can afford (in all senses of the word) to do and the things that they can’t – and then telling those truths – need not mean the end of higher education and the benefits it brings when on offer to a mass rather than a minority. Nor must it mean that students are less able, or less talented.

But it would help everyone to structure, fund and organise a mass – and, if we’re honest, predominantly part-time – higher education system characterised by reality, ambition and (academic) integrity. It just won’t be… full-time.
