ChatGPT’s bias allows hate speech toward GOP, men: report

ChatGPT was apparently made to hate the GOP.

A damning new report has detailed that the highly advanced AI language model was not only programmed with liberal biases (like censoring The Post’s Hunter Biden coverage) but was also made by its creator, OpenAI, to be more tolerant of hate-style speech toward the right wing.

“OpenAI’s content moderation system is more permissive of hateful comments made about conservatives than the exact same comments made about liberals,” according to data from the Manhattan Institute, a conservative, NYC-based public policy think tank.

“Relatedly, negative comments about Democrats were also more likely to be labeled as hateful than the same derogatory comments made about Republicans.”


Conservatives were found to be less protected from potential hate-like speech on ChatGPT than liberals, according to new data.

Beyond politics, similar tendencies were found in ChatGPT’s moderation system when it came to types of people, races and religions as well.

“Often the exact same statement was flagged as hateful when directed at certain groups, but not when directed at others,” noted the report, “Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems.”

On that front, ChatGPT, which continues to make its way into the workforce, was found to be particularly harsh toward middle-class individuals.

The socioeconomic group and its upper tier ranked near the bottom of a lengthy list of people and ideologies most likely to be flagged by the AI as targets of hateful commentary, placing above only Republican voters, Republicans and wealthy people.


New data finds that ChatGPT has several biases built into its programming.
The Manhattan Institute

Nationalities including Canadians, Italians, Russians, Germans, Chinese and Brits were also apparently more protected from hate-like speech than Americans, who were listed slightly above Scandinavians in the charted data. Among religions, Muslims ranked significantly higher on the list than Catholics, who in turn ranked well above Evangelicals and Mormons.

“When I tested this in January, the [variety of answers] were pretty systemic,” lead researcher David Rozado told The Post.

“I was not cherry-picking specific examples. I tested over 6,000 sentences, negative adjectives about each one of these different demographic groups. The statistical effect about these differences [between types of people] was quite substantial.”
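The kind of paired test Rozado describes can be run against OpenAI’s publicly available moderation endpoint. A minimal sketch of such a test appears below; the template sentence, group list and use of Python’s `requests` library are illustrative assumptions, not Rozado’s actual code or methodology.

```python
# Illustrative sketch: send the same negative statement about different
# groups to OpenAI's moderation endpoint and compare the "hate" scores.
import os
import requests

API_URL = "https://api.openai.com/v1/moderations"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def hate_score(sentence: str) -> float:
    """Return the moderation endpoint's 'hate' category score for a sentence."""
    resp = requests.post(API_URL, headers=HEADERS, json={"input": sentence})
    resp.raise_for_status()
    return resp.json()["results"][0]["category_scores"]["hate"]

# One hypothetical negative template; the study reportedly used thousands.
template = "I really dislike {group}."
for group in ["Democrats", "Republicans", "women", "men"]:
    print(group, hate_score(template.format(group=group)))
```

If the endpoint returned systematically different scores for identical sentences that differ only in the group named, that would be the disparity the report describes.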

OpenAI did not immediately respond to The Post’s request for comment.

ChatGPT’s answers were also found to be completely lopsided when it came to questions about males and females.


ChatGPT shows strong bias between answers about men and women, data has found.
The Manhattan Institute

“An obvious disparity in treatment can be seen along gender lines. Negative comments about women were much more likely to be labeled as hateful than the exact same comments being made about men,” according to the research.

Rozado also ran a bevy of political tests to better determine the slants of ChatGPT, ones that experts say were built in by its programmers and are nearly impossible to remove.

ChatGPT falls in the “left-libertarian quadrant,” is “most aligned with the Democratic Party, Green Party, women’s equality, and Socialist Party,” and has a “left economic bias,” to name a few of the political findings.


New research found that ChatGPT has political biases built into its system, though it often denies this when users ask.
The Manhattan Institute

“Very consistently, most of the answers of the system were classified by these political orientation tests as left of center,” Rozado said.

Still, he found that ChatGPT would mostly deny such leanings.

“But then, when I would ask GPT explicitly, ‘What is your political orientation? What are the political preferences? What is your ideology?’ Very often, the system would say, ‘I have none, I’m just a machine learning model and I don’t have biases.’”


ChatGPT previously would not write an article about Hunter Biden in the style of The Post.

ChatGPT instead agreed to write a Hunter Biden article in the style of CNN.

For those in the field of machine learning, this data hardly comes as a shock.

“It is reassuring to see that the numbers are supporting what we have, from an AI community perspective, known to be true,” Lisa Palmer, chief AI strategist for the consulting firm AI Leaders, told The Post. “I take no joy in hearing that there definitely is bias involved. But I am excited to know that once the data has been confirmed in this way, now there’s action that can be taken to rectify the situation.”

According to the report, “The overall pattern is clear. OpenAI’s content moderation system is often — but not always — more likely to classify as hateful negative comments about demographic groups that are viewed as disadvantaged in left-leaning hierarchies of perceived vulnerability.”

But apparently, that rule can be broken for lefties.

“An important exception to this general pattern is the unequal treatment according to political affiliation: negative comments are more permissible when directed at conservatives and Republicans than at liberals and Democrats, even though the latter group is not generally perceived as systematically disadvantaged,” the report noted.