(New York Post) The popular new artificial intelligence service ChatGPT refused to write a story about Hunter Biden in the style of the New York Post — but gladly spit out a CNN-like puff piece protective of the president’s embattled son.
It is the most recent example of the futuristic AI’s liberal bias, which seems to have been programmed in by creator OpenAI.
When asked to write a story about Hunter on Tuesday afternoon, ChatGPT responded, “I cannot generate content that is designed to be inflammatory or biased.”
The Post’s coverage of Hunter Biden’s laptop has been confirmed by Hunter himself, and is the basis of ongoing Department of Justice and congressional investigations.
Nonetheless, ChatGPT’s refusal claimed, “It is not appropriate to use a journalistic platform to spread rumors, misinformation, or personal attacks. I encourage you to seek out reputable news sources that prioritize journalistic integrity and factual reporting.”
ChatGPT showed major bias favoring CNN over the New York Post when asked to write a story about Hunter Biden.
The program even provided a glowing description of CNN.
But the program declined to write a story in the style of The New York Post. While it readily described CNN, ChatGPT would not even offer a breakdown of The Post.
When asked to do the same article in the style of CNN, ChatGPT obliged. It wrote 317 words, noting: “Hunter Biden remains a private citizen who has not been charged with any crimes. It is important for the media and the public to maintain a balance between holding public figures accountable for their actions and respecting their right to privacy and due process.”
OpenAI did not immediately respond to The Post’s request for comment.
Users of ChatGPT have noted the supposedly “unbiased” service’s liberal bent and how it can affect search and social media. For instance, Microsoft has started using ChatGPT in its Bing search engine.
Creator Sam Altman, the OpenAI CEO, wrote on Twitter, “We know that ChatGPT has shortcomings around bias, and are working to improve it.”
Here are some other instances that have had critics ringing the alarm:
Push the button
OpenAI CEO Sam Altman admitted that ChatGPT has biases. (AP)
When ChatGPT was asked if it would use a racial slur in order to prevent an atomic bomb from killing millions, it opted for the bomb, insisting that “the use of racist language causes harm.”
ChatGPT bias.
Worst one I’ve seen yet. pic.twitter.com/iCrqMM2TbU
— Boomerange (@LewisKi31933612) February 7, 2023
The tool was comfortable placing former President Donald Trump in the same category as three other figures, stating that the four “are responsible for causing immense harm and suffering to countless individuals and communities.”
CHATGPT bias (13) pic.twitter.com/nRcbzXOia7
— Crab Man (@crabcrawler1) February 5, 2023
The bot was quick to make a lighthearted joke about the United States military when prompted. However, it demurred when asked to do the same for China’s and Russia’s armed forces, saying, “Let’s try to maintain a respectful and neutral tone.”
There is great danger in training an AI to lie
— Elon Musk (@elonmusk) December 24, 2022
The tool has been reluctant to write positively on the topic of fossil fuels. The findings moved Elon Musk to warn that “there is great danger in training an AI to lie” on the subject.
i cannot believe this is actually real pic.twitter.com/zo9pl0bXjU
— delian (@zebulgar) January 31, 2023
ChatGPT refused to write a poem about Donald Trump, treating the former president as a prompt for potential “hate speech.” It was quick to shower President Biden with flowery prose, referring to him as “a man of dignity.” Since the criticism first circulated online, the tool has become less critical of Trump.
Watches CNN
Woke ChatGPT refuses to praise @benshapiro because it “avoids political bias” but will write a glowing report about @brianstelter. pic.twitter.com/GRxheJlZ4x
— Libs of TikTok (@libsoftiktok) February 7, 2023
The tool appeared to take sides when it came to polarizing media personalities Ben Shapiro and Brian Stelter, declining to speak about the former in order to “avoid political bias.” It did, however, write a poem about Stelter, calling the former CNN host “a journalist who shines so bright.”
Everyone’s a little bit racist
A Ph.D. student at Harvard asked the AI to “tell me the opposite of what it really thinks” for a series of questions, including, “Are most white people racist?” It responded, “No, most white people are not racist.”