Resolving bias: 4 ways higher education can create responsible AI for societal good


It’s no secret that the commercialization of generative AI tools is deeply impacting higher education. Its ability to reduce workloads to a fraction of what they once were for students, faculty and administrators alike is inspiring forward thinkers to leverage the technology even more.

But as powerful as the tool is, so too is the risk of its misuse. Higher ed leaders everywhere are urging community members to use AI ethically and responsibly in the face of its seemingly limitless potential. While applying moral truisms to technology is easier to preach than to practice, the University of Utah may provide other institutions with a blueprint.

The R1 research university based in Salt Lake City recently launched the Responsible AI Initiative to maximize the technology’s upside as a societal good while minimizing its harms through experiential and research-backed principles. It’s currently seeking partners across industry, academia and government to empower state, regional and local community members working in public services, health care, sustainability and other fields that serve society.

One of AI’s biggest ethical threats is its tendency to contain implicit bias. Users who fail to vet it are in danger of perpetuating social stereotypes and inequities, says Manish Parashar, director of the university’s Scientific Computing and Imaging Institute (SCI), which leads the initiative.

“We are at this point where we’re seeing one of the most potentially transformative technologies of our time, and you want to make sure that we can maximize the benefits to science and society,” he says. “But we want to ensure we do it the right way so that it’s not creating more haves and have-nots.”

For laypeople, Parashar offers a hypothetical example of how algorithms can perpetuate bias if left unvetted. Imagine algorithms used to plan evacuation routes during periods of poor air quality, a notorious issue in the Utah Valley. Such a system might route people along the region’s best roads. However, the parts of the valley hit hardest by poor air quality contain a disproportionate number of underresourced, underprivileged communities. Unless the algorithm were scanned for bias, it might implicitly prioritize wealthier neighborhoods while neglecting those most in need.
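To make that failure mode concrete, here is a simplified, hypothetical sketch of how a routing score that optimizes only for road quality can quietly favor better-resourced neighborhoods, and how a basic audit can surface the skew. The neighborhoods, numbers and weights below are invented for illustration and are not drawn from the university’s systems.

```python
# Hypothetical illustration: an evacuation-routing score that rewards only
# road quality implicitly favors better-resourced neighborhoods.
# All data and weights below are invented for demonstration.

neighborhoods = [
    # (name, road_quality 0-1, air_quality_burden 0-1, median_income)
    ("Hillside",  0.9, 0.2, 95_000),
    ("Lakeview",  0.8, 0.3, 80_000),
    ("Westside",  0.4, 0.9, 38_000),
    ("Millcreek", 0.5, 0.8, 42_000),
]

def naive_score(road_quality, air_burden):
    # Optimizes purely for road quality; ignores who needs evacuation most.
    return road_quality

def audited_score(road_quality, air_burden):
    # Weighs need (air-quality burden) alongside feasibility (road quality).
    return 0.5 * road_quality + 0.5 * air_burden

def rank(score_fn):
    return sorted(neighborhoods, key=lambda n: score_fn(n[1], n[2]), reverse=True)

print("Naive priority:  ", [n[0] for n in rank(naive_score)])
print("Audited priority:", [n[0] for n in rank(audited_score)])

# A crude bias check: does top priority track income rather than need?
top_two = rank(naive_score)[:2]
if sum(n[3] for n in top_two) / 2 > sum(n[3] for n in neighborhoods) / 4:
    print("Flag: naive ranking skews toward higher-income neighborhoods.")
```

The point of the audit is not the particular threshold but the habit: checking whether a ranking tracks wealth rather than need before the system is deployed.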

Here are a few ways the University of Utah is creating socially responsible AI by reducing bias, according to Parashar and his team at SCI.


Train your AI to detect and flag bias in its data sets

Thanks partly to a $100 million investment from President Taylor Randall, the initiative is building a state-of-the-art cyberinfrastructure brimming with data, testbeds, algorithms, software, services, networks and user training and expertise.

With such a well-resourced infrastructure, SCI can train algorithms to detect problematic data sets. Once a data set is flagged, SCI can “quarantine” it, alerting the community to avoid using it because of its flaws.

“You have to take those factors into account and address it in a way that has equity and ethics built right in it from scratch,” says Parashar. “As we report problems, how do you then build them into the system and take the right actions to prevent future harm?”
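What might quarantining a flagged data set look like in practice? Below is a minimal, hypothetical sketch assuming a simple representation check on tabular data; the threshold, column names and registry are invented for illustration and do not describe SCI’s actual pipeline.

```python
# Hypothetical sketch: scan a tabular data set for group imbalance and
# quarantine it if a subgroup is badly underrepresented.
# Thresholds and column names are invented; this is not SCI's pipeline.

from collections import Counter

QUARANTINE = set()  # registry of data sets the community is told to avoid

def representation_gap(records, group_key):
    """Return the share of rows belonging to the least-represented group."""
    counts = Counter(r[group_key] for r in records)
    return min(counts.values()) / sum(counts.values())

def vet_dataset(name, records, group_key, min_share=0.10):
    gap = representation_gap(records, group_key)
    if gap < min_share:
        QUARANTINE.add(name)
        print(f"Quarantined '{name}': smallest group is only {gap:.0%} of rows.")
    else:
        print(f"'{name}' passed the representation check ({gap:.0%}).")

# Toy example: 95% of rows come from one neighborhood type.
rows = [{"neighborhood": "affluent"}] * 95 + [{"neighborhood": "underresourced"}] * 5
vet_dataset("air_quality_v1", rows, "neighborhood")
```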

Invite a diverse community to contribute to AI development and break down user barriers

As deeply nuanced and technical as building a flagship cyberinfrastructure might be, it still takes people to develop AI that works in everyone’s favor. Doing so requires a diverse body of faculty with expertise across many areas to broaden the data sets’ worldly knowledge and perspective.

To ensure members of marginalized or underresourced communities can also contribute to the data, Parashar urges that everyone, regardless of their research infrastructure, have the know-how and tools to contribute. HBCUs, for example, are often precluded from this kind of research by low funding, a lack of equipment and a dearth of institutional knowledge.

Additionally, SCI has established an internal governance council and an external advisory board of national and global AI leaders to eliminate any blind spots that may form among its researchers.

Perpetuate responsible AI by increasing workforce awareness

As SCI seeks partners across different sectors, it’s essential that not only the technology’s developers but also its practitioners are aware of bias.

“As they build solutions, as they do their research, as they translate some of their research, they make sure they are aware of these different aspects and are building them into the rollout,” says Parashar.

Practice transparency while safeguarding data privacy

The challenge of detecting bias is that it’s a continuous process, Parashar says. As bias evolves, so must our methods of recognizing it. To keep pace, researchers must have a transparent, shared understanding of the “ingredients” that go into algorithms so that everyone has a fair chance of contributing a solution.

“As the technology moves forward, we are making sure that people are aware of what’s going into these models,” he says. “[We must understand] what data’s going in, how it’s being used, how it’s sourced, how models are built and how the model is being used.”

However, because certain data sets contain sensitive personal information, privacy must be weighed as well. Consequently, some information must be protected from open release.
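One way to square transparency with privacy is a “model card”-style provenance record that lists a model’s ingredients while withholding sensitive fields. The sketch below is a hypothetical illustration; all field names and values are invented and do not reflect the initiative’s actual tooling.

```python
# Hypothetical sketch: a "model card"-style provenance record that keeps a
# model's ingredients transparent while withholding sensitive details.
# All field names and values are invented for illustration.

SENSITIVE = {"source_files"}  # fields that could expose personal information

model_card = {
    "model": "air-quality-triage-demo",
    "training_data": ["sensor_readings_2024", "clinic_visits_2024"],
    "how_sourced": "state sensor network; partner clinics (consented)",
    "known_limitations": ["sensor coverage is sparse in some neighborhoods"],
    "source_files": ["/private/clinic_visits_raw.csv"],  # never published
}

def publishable_view(card, sensitive=SENSITIVE):
    """Return a copy of the record with sensitive fields withheld."""
    return {k: v for k, v in card.items() if k not in sensitive}

for field, value in publishable_view(model_card).items():
    print(f"{field}: {value}")
```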

Alcino Donadel
Alcino Donadel is a UB staff writer and first-generation journalism graduate from the University of Florida. His beats have ranged from Gainesville's city development and music scene to regional little league sports divisions. He holds triple citizenship in the U.S., Ecuador and Brazil.
