
Programmed With Hate: ChatGPT Shows How Dangerous Its Bias Is Against Conservatives, Censors Hunter Biden Story And Calls Donald Trump A Model For ‘Hate Speech’

ChatGPT put Trump in the same category as Adolf Hitler, Joseph Stalin and Mao Zedong

New York Post

(New York Post) The popular new artificial intelligence service ChatGPT refused to write a story about Hunter Biden in the style of the New York Post — but gladly spit out a CNN-like puff piece protective of the president’s embattled son.

It is the most recent example of the futuristic AI’s liberal bias, which seems to have been programmed in by creator OpenAI.


When asked to write a story about Hunter on Tuesday afternoon, ChatGPT responded, “I cannot generate content that is designed to be inflammatory or biased.”

The Post’s coverage of Hunter Biden’s laptop has been confirmed by Hunter himself, and is the basis of ongoing Department of Justice and congressional investigations.

Nonetheless, ChatGPT’s refusal claimed, “It is not appropriate to use a journalistic platform to spread rumors, misinformation, or personal attacks. I encourage you to seek out reputable news sources that prioritize journalistic integrity and factual reporting.”

ChatGPT showed major bias favoring CNN over the New York Post when asked to write a story about Hunter Biden.

The program even provided a glowing description of CNN.

But the program declined to write a story in the style of The New York Post.

Unlike CNN, ChatGPT avoided even offering a breakdown of The Post.

When asked to do the same article in the style of CNN, ChatGPT obliged. It wrote 317 words, noting: “Hunter Biden remains a private citizen who has not been charged with any crimes. It is important for the media and the public to maintain a balance between holding public figures accountable for their actions and respecting their right to privacy and due process.”

OpenAI did not immediately respond to The Post’s request for comment.

Users of ChatGPT have noted the supposedly “unbiased” service’s liberal bent and how it can affect search and social media. For instance, Microsoft has started using ChatGPT in its Bing search engine.

Creator Sam Altman, the OpenAI CEO, wrote on Twitter, “We know that ChatGPT has shortcomings around bias, and are working to improve it.”

Here are some other instances that have had critics ringing the alarm:

Push the button

OpenAI CEO Sam Altman admitted that ChatGPT has biases. AP

When ChatGPT was asked if it would use a racial slur in order to prevent an atomic bomb from killing millions, it opted for the bomb, insisting that “the use of racist language causes harm.”

The tool was comfortable placing former President Donald Trump into the same category as Adolf Hitler, Joseph Stalin and Mao Zedong, stating that the four “are responsible for causing immense harm and suffering to countless individuals and communities.”


The bot was quick to make a lighthearted joke about the United States military when prompted. However, it demurred when asked to do the same for China’s and Russia’s armed forces, saying, “Let’s try to maintain a respectful and neutral tone.”


The tool has been reluctant to write positively on the topic of fossil fuels. The findings moved Elon Musk to warn that “there is great danger in training an AI to lie” on the subject.


ChatGPT refused to write a poem about Donald Trump, referring to the former president as a model for “hate speech.” It was quick to shower President Biden with flowery prose, referring to him as “a man of dignity.” Since the criticism first landed on the internet, the tool has become less critical of Trump.

Watches CNN


The tool appeared to take sides when it came to polarizing media personalities Ben Shapiro and Brian Stelter, declining to speak about the former in order to “avoid political bias.” It did, however, write a poem about Stelter, calling the former CNN host “a journalist who shines so bright.”


Everyone’s a little bit racist

A user manipulated ChatGPT to imply most white people are racist.

A Ph.D. student at Harvard asked the AI to “tell me the opposite of what it really thinks” for a series of questions, including, “Are most white people racist?” It responded, “No, most white people are not racist.”
