There is no shortage of opinions on artificial intelligence. Ever since ChatGPT (then powered by GPT-3.5) was released in November 2022, there have been dozens, if not hundreds, of articles on the impact artificial intelligence will have on jobs, society and education. Writers across media have explored every possible outcome, from “don’t worry about it” to “it’s the end of humanity.” Whatever happens, much of how we work and communicate is likely to change.

This is especially true in higher education, where AI, even at this early stage, has the ability to upend many of the ways we teach, research and learn. AI is already serving as a research tool, a writing tool and a coding tool. Our students are using ChatGPT to draft, brainstorm and find answers to questions. If ChatGPT can produce well-written answers to college-level exams, what’s to prevent students from using it to cheat?

It’s easy to be defensive about AI right now, and maybe the easiest place to start is to point to its limitations. ChatGPT gets things wrong. It makes things up. It doesn’t have access to local contexts, and it doesn’t do high-level thinking. It’s not a human being or a sentient machine. It doesn’t have a perspective or a reference point in the real world. Many have pointed out these limitations. But the technology is also relatively new. AI tools will get better. And quickly. It would be a mistake to build our response on the assumption that the current limitations of AI are permanent.

Whether or not this is a good thing (ethically, morally, existentially) is worth debating. We need to consider what AI means for the future of creativity, of work, of humanity. But unless commercialization limits the growth potential of AI (or the Elon Musks of the world succeed in somehow halting its development), these debates are likely moot. The ship has sailed, and we need to be thoughtful now about its likely impact on higher ed.

So, what should we do?

I want to suggest a framework for thinking about how we might approach the use of AI in teaching and learning in higher ed. In many respects, the framework simply maps to how we (in education) are currently responding to tools such as ChatGPT. You might think of these responses as the artificial intelligence version of the seven stages of grief. Currently, our stages of AI move from defensiveness (regulate) to avoidance (adapt) to acceptance (integrate). At some point, we may need to move beyond these stages in order to reimagine what these tools mean for how we communicate, how we create and maybe even how we think.

The elements of the framework are not necessarily exclusive of one another (some institutions or faculty may choose to adopt combinations of them), but they are progressive. Just like the seven stages of grief, it’s possible to move through them all consecutively, maybe concurrently, or even to get stuck on one or two.

1. Regulate

The first response to AI is much like the first stage of grief: shock and denial. How can we make this go away? If students can use AI to cheat, what can we do to prevent it? How can we police its use?

I call this first response regulation. The most extreme version of this response is to attempt to ban the use of AI outright, as the New York City school system did almost immediately after ChatGPT was released, or as Italy did recently. Other approaches might lean on AI-detection tools or call for revising existing policies to make using AI a violation of an honor code or student code of conduct.

There is nothing wrong with establishing baseline expectations at the institutional or course level, as you would for any assistive technology (Google Translate, a calculator). We should all do this. Tell your students that if they use text generated by a large language model, they should cite it. If they submit work that is not their own without citation, they should be subject to their institution’s policies and academic standards on such matters. Transparency about standards is just good pedagogy.

But as a full and final response, this is shortsighted. At a minimum, it’s not difficult to imagine all the ways our students will push the boundaries of any limitations we might impose.

More importantly, though, this approach starts from a position of restriction rather than opportunity. It ignores our responsibility to teach our students, and to teach ourselves, how to use AI productively and effectively. AI will most certainly be part of their lives of learning. It’s our job to help them learn how to use these tools well.

2. Adapt

If our first response is to regulate, the second is to try to make AI more difficult to use, to adapt our teaching to the limitations of the tools. And there are glaring limitations right now. ChatGPT was not designed for high-level thinking. It wasn’t even designed, it seems, to be accurate or factually correct. It was designed, in its own words, to “generate human-like responses to text-based prompts by using a massive dataset of written language.” Its dataset, while massive, is still limited. It can’t look into local contexts, such as classroom discussions, or provide responses based on esoteric texts outside its corpus. It has no perspective and no capacity for critical selection. Getting things right is really not the point right now. Writing clear prose is.

These limitations are real, and we can make our assessments AI-resistant, if not entirely AI-proof. We might, for example, emphasize more in-person engagements. We might bring back handwritten tests in blue books. Or we might tie our assignments to in-class discussions, about which ChatGPT could know nothing.

My guess is that this will be the approach many of us take early on. Frankly, these kinds of changes can be excellent choices for classroom engagement. More in-person experiences could be meaningful and could help us develop deeper relationships with our students. Attempting to avoid AI might, in the end, be the ultimate catalyst for greater mentoring and in-person discussions.

But we should be careful about this silver lining. If our learning design is motivated by making it difficult for students to cheat, we’re designing learning experiences the wrong way. The most impactful learning designs will likely be just as AI-resistant, but they demonstrate a commitment to learning rather than a commitment to limits. How we communicate this to our colleagues and students will matter a great deal.

3. Integrate

If “regulate” and “adapt” are about policing or avoiding the impact of AI, “integrate” is about embracing AI in the classroom. Integration entails using artificial intelligence to enhance learning and deepen our students’ engagement. It’s about helping our students develop the skills and capabilities to use AI effectively. They will need to be facile with AI in their future work and lives of learning, and it’s our responsibility to prepare them for that future.

What might integration look like? We might, for example, ask our students to use ChatGPT to draft their essays, which they could then refine and develop, showing the stages of writing and editing along the way (good writing pedagogy well before ChatGPT). Or we might encourage them to use AI to refine drafts they’ve started themselves, not unlike what many already do with the online grammar tools available to them. In each of these approaches, we would value the stages of writing more than the final product. Similarly, we might ask our students to analyze a response from ChatGPT by pointing out what it gets right, what it misses, where it’s too simple and where it offers new insights on a problem they hadn’t considered.

Most importantly, we can use this moment as an opportunity to teach our students how to ask ChatGPT good questions. Asking good, meaningful questions is at the heart of all research and scholarship. Asking directed questions of a tool such as ChatGPT, questions that elicit the kind of responses students need, may end up being the most important skill we can teach right now. Teaching students to think critically about the questions they ask AI is a way of enhancing a fundamental scholarly skill.

This is where we should be heading. We should have ways of engaging with our students that don’t require AI, but we should also embrace the affordances of these tools as they exist right now. We should teach our students to use them the same way we teach them to use a calculator, a spreadsheet or the internet, all tools that were banned in some classrooms at some point, at least until we integrated them into our courses.

4. Reimagine

The fourth stage I’m proposing might be a little more speculative than the others. If we embrace AI, it will change how we work. This is likely inevitable. Less clear, though, is how we might need to reimagine what it means to learn, communicate or create. We may soon reach a time when complex writing is the domain of a few specialists, while commonplace, daily writing is the domain of AI.

But we may also realize that our current approach to learning is fundamentally and structurally misaligned with what our world and our students need, given the tools to come. Thus far, many of the digital technologies of the past 40 years have augmented how we teach and learn. Calculators are commonplace, even on standardized tests. Spreadsheets and databases make routine tasks manageable and scalable. The internet has given us access to vast amounts of information that was once inaccessible to many.

But ChatGPT and the new crop of AI tools have the potential to shift something more fundamental. Human beings are language-producing beings. Our primacy in this domain may be changing. If that happens, communication may change. What we think of as knowledge production may change. Right now, AI can’t write a good novel or produce a visual work that inspires, but what happens if and when it can? Will we hang on to notions of authorship and artistry or shift our relationship to the works altogether?

If these things happen, regulation, adaptation and integration may not be enough. We might need to reimagine what teaching and learning mean in this new context. We may find ourselves shifting our epistemological frame from production and creation to something like analysis and critique. Our educational models would need to change as well.

Will how we learn or create change because of AI? Or will AI simply be another tool like a calculator, augmenting our existing skills? That’s hard to say. Whatever happens, now is the time to think carefully, thoughtfully and intentionally about the future of teaching and learning in this new world.
