
Control “harmful” bias to protect education

Bias in the realm of AI and Large Language Models in education is inevitable, but what can and must be avoided is “harmful bias”, experts in the field have said. 
February 6 2024


Paul LeBlanc, outgoing president of Southern New Hampshire University, told the press that working with data and broadening horizons were key to controlling the unavoidable bias created by generative AI.

LeBlanc and his SNHU colleague George Siemens are developing Human Systems, a new AI solution covering multiple aspects of university life, which aims to harness the technology for good without importing the common issues that surround it.

It is due to launch in summer 2024.

Mexican attendees at the IFE Conference, held at Tec de Monterrey in Mexico, commented that algorithmic bias would have greater negative effects on the Global South – for them, one of the biggest problems with AI.

“All data is biased. So the question is: how do you avoid harmful bias?” Siemens asked the room.

“And what that means is if you go out and you start working with any kind of data, you’re making decisions like, ‘we’re going to have this data included’, or if you’re building a model, you’re saying these data points matter more than these other data points.

“If you build your source data from the predictive-next-word token approach, that data is going to be biased by the people who wrote that language in the first place.”

From a research perspective, Michael Fung, the executive director of Tecnologico de Monterrey’s Institute for the Future of Education, said that bias issues between the Global North and South had been “brought to the fore” by the explosion of AI in higher ed.

“I don’t think you remove it completely. Just like any kind of information out there, it’s always coloured by some kind of perspective or bias. It’s about being clear – what are the assumptions and limitations that come with the data?” Fung commented.

Higher education, in particular, is really terrible about data, LeBlanc remarked. 

“Individual institutions usually struggle to know what they have, what they have used – and to make sense of what they have.

“What we have been proposing is the creation of a global data consortium with the ability to build better AI for learners and education, to get better insights,” LeBlanc said.

“If you build your source data from the predictive-next-word token approach, that data is going to be biased”

The consortium, which has initial funding from the Bill & Melinda Gates Foundation, would strive to prevent algorithmic bias and cultural hegemony – two big issues that arise when using LLMs with very small sample sizes.

“What we really have to do is be very clear about the sources of the data used to inform whatever AI outputs are in place – so when we see these biases, we understand why they are biases,” Fung echoed.

“I think more research has to be done in this space to improve our practice of it.”
