AI’s Bias Problem: Built From the Past, Shaped by the User

AI tools offer personalized answers, but they can deepen echo chambers and reinforce historical biases.

Graphic representing the different biases that play into AI outputs.

Artificial intelligence systems may appear objective, but experts say they reflect the historical biases embedded in their training data and can reinforce the perspectives of the users prompting them. 

“AI is generally only going to be as good as the inputs that get put into it,” said Greg Munno, a digital journalism professor and the chair of the Magazine, News, & Digital Journalism Department at Syracuse University. He said AI systems learn patterns from content shaped by institutional biases, preconceptions, and cultural assumptions, and can therefore mimic or even amplify those same prejudices.

The technology reflects the vast body of material it has been trained on, which carries decades or even centuries of cultural assumptions, reporting and commentary.

“To the extent that the source material that AI is preying on is biased against any particular race, or gender, or sexual orientation, there’s a pretty good chance that AI might incorporate that bias,” Munno said. “Some of those biases might very well show up in its outputs.”

Jacob Kaplan, a student in the School of Information Studies who has worked with and built AI models, echoed that concern, pointing out that the data AI draws from is not limited to the present day.

“The data that we collect today is accumulating from all of history… so it’s not modern statistics,” Kaplan said. “So yeah, I think that AI is gathering information off of everything, and it’s turning it into biases and statistics of today.”

AI bias is not deliberately malicious but rather structural, ingrained in the data and patterns that power the technology.

Not everyone in the newsroom sees AI as inherently more dangerous than the humans who use it.

“Honestly, I think human bias is a greater risk,” said Mike Dupras, a content data analyst and staff performance specialist at Advance Media New York.

“We have a tendency to not see our own biases,” Dupras, who works on implementing AI workflows, said. He added that AI does not independently invent bias, but instead mirrors what it is given.

Artificial intelligence generally produces output according to the specific instructions and guidelines users provide, Dupras said.

Kaplan pushed back on that framing, saying AI bias poses a unique threat precisely because it lacks the experiential underpinnings of human bias.

“Human bias is typically based on experience … it’s built on emotion and experience and actual critical thought,” Kaplan said. “AI doesn’t have a world-building model.”

To Kaplan, that absence of genuine reasoning makes AI’s outputs more dangerous. “I think that AI has the more threatening approach because it’s not even simulated, it’s just said,” Kaplan said. “It’s literally actions before thought.” 

Bias extends beyond historical inputs. Many consumer-facing AI systems are designed to please their users, which can reinforce rather than challenge individual opinions, Munno said.

“Most of the consumer-facing platforms have this interesting feature that they really want to please the user,” Munno said. “If the person writing the prompt has a skewed, biased view of the world … there’s a good chance that the AI will just kind of parrot the same position as the prompt writer.”

Kaplan has experienced this firsthand. After asking an AI tool to analyze a business, he said the system fabricated an answer rather than admitting it didn’t know. “It just straight up lied to give me an answer,” he said. “[It’s] a people pleaser.”

Because the technology follows its instructions so faithfully, it can unintentionally reinforce preexisting bias, Munno said.

That dynamic raises concerns about information fragmentation. Unlike traditional media, where audiences may see the same headline or broadcast, AI tools generate customized responses for each user.

“Everybody is seeing something a little bit different,” Munno said. 

That change marks the end of a shared digital experience, replacing the consistent message traditional media conveys to the public with a fragmented world where information is customized for each individual.

“With AI, we could never know. It’s generating a different response to every person. If we ask the same question twice, it will give us a different answer,” Munno said. “The fear is that we could even get more isolated in our own bubbles.”

That opacity creates a deeper problem of accountability: it is difficult to pinpoint exactly where AI-generated information originates.

Some students who use AI tools frequently say the bias is both familiar and hard to identify, compounding the question of where the information AI generates actually comes from.

“From what I’ve seen, it digs from past events. That’s all it really knows,” said Chloe Pusey, a freshman at Syracuse University. “I’ve seen instances where it has been biased, but I don’t know if it’s programmed specifically to be.”

She said that AI outputs are not always accurate. “AI, when you put prompts in, it’s sometimes not accurate and doesn’t interpret what you said correctly.”

Megan Acker, 20, a Syracuse University student, said the technology appears responsive to tone and expectation.

“It’s programmed to sort of pick up on the way you’re talking to it,” Acker said. “Obviously, it knows that I want to hear a certain answer.”

Acker said the issue may not be that AI is more biased than people, but that its bias is harder to notice.

“I don’t think it’s more at risk for bias, but I think we are less likely to catch it when we just go, ‘give me what I want to hear.’ It is almost like a people pleaser,” Acker said.

Munno said the responsibility ultimately remains with human writers and editors.

“Good writers and good editors are currently better than AI,” Munno said. “AI is just going to say, ‘This is pretty good. You have a run-on sentence over here.’”

Munno cautioned that the burden of verification grows if newsrooms use AI to write stories.

“I don’t think they necessarily have it right to have AI write the draft and then have a human do the edit,” Munno said. “Now they’re on a hunt-and-peck mission to see where AI messed up. If you let AI rewrite that story, I think you’ve got to start the fact-checking process all over again.”

Being able to recognize bias, and to tell when something is opinion and when it is fact, is critical, Munno said.

“Our name is on the story, and we’re responsible for every single word in that story,” Munno said.