Conservatives Aim to Build a Chatbot of Their Own
When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation.
The results, published in a recent paper, were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “Democratic.”
So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.
As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries. Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use — and potential abuse — of artificial intelligence.
The technology’s ability to create content that hews to predetermined ideological points of view, or spreads disinformation, highlights a danger that some tech executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different versions of reality, undermining the viability of artificial intelligence as a tool in everyday life and further eroding trust in society.
“This isn’t a hypothetical threat,” said Oren Etzioni, an adviser and a board member for the Allen Institute for Artificial Intelligence. “This is an imminent, imminent threat.”
Conservatives have accused ChatGPT’s creator, the San Francisco company OpenAI, of designing a tool that, they say, reflects the liberal values of its programmers.
The program has, for instance, written an ode to President Biden, but it has declined to write a similar poem about former President Donald J. Trump, citing a desire for neutrality. ChatGPT also told one user that it was “never morally acceptable” to use a racial slur, even in a hypothetical situation in which doing so could stop a devastating nuclear bomb.
In response, some of ChatGPT’s critics have called for creating their own chatbots or other tools that reflect their values instead.
Elon Musk, who helped start OpenAI in 2015 before departing three years later, has accused ChatGPT of being “woke” and pledged to build his own version.
Gab, a social network with an avowedly Christian nationalist bent that has become a hub for white supremacists and extremists, has promised to release AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”
“Silicon Valley is investing billions to build these liberal guardrails to neuter the AI into forcing their worldview in the face of users and present it as ‘reality’ or ‘fact,’” Andrew Torba, the founder of Gab, said in a written response to questions.
He likened artificial intelligence to a new information arms race, like the advent of social media, that conservatives needed to win. “We don’t intend to allow our enemies to have the keys to the kingdom this time around,” he said.
The richness of ChatGPT’s underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion “tokens” — pieces of words, essentially — sourced from websites, blog posts, books, Wikipedia articles and more.
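The “tokens” in that figure are word fragments rather than whole words. As a rough illustration only (real systems use learned tokenizers such as byte-pair encoding, and the tiny vocabulary below is invented for this sketch), a greedy longest-match splitter shows how text breaks into such pieces:

```python
# Toy illustration of how text becomes "tokens" (pieces of words).
# Real tokenizers learn their vocabularies from data; this one is made up.
VOCAB = ["chat", "bot", "s", "token", "iz", "ation", " "]

def greedy_tokenize(text, vocab):
    """Split text by repeatedly taking the longest matching vocabulary piece."""
    tokens = []
    i = 0
    while i < len(text):
        match = None
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                match = piece
                break
        if match is None:  # character not covered by the vocabulary
            match = text[i]
        tokens.append(match)
        i += len(match)
    return tokens

print(greedy_tokenize("chatbots tokenization", VOCAB))
```

A single word like “tokenization” thus counts as several tokens, which is why training corpora are measured in hundreds of billions of them.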
Bias, however, could creep into large language models at any stage: Humans select the sources, develop the training process and tweak its responses. Each step nudges the model and its political orientation in a specific direction, consciously or not.
Research papers, investigations and lawsuits have suggested that tools fueled by artificial intelligence have a gender bias that censors images of women’s bodies, create disparities in health care delivery and discriminate against job applicants who are older, Black, disabled or even wear glasses.
“Bias is neither new nor unique to AI,” the National Institute of Standards and Technology, part of the Department of Commerce, said in a report last year, concluding that it was “not possible to achieve zero risk of bias in an AI system.”
China has banned the use of a tool similar to ChatGPT out of fear that it could expose citizens to facts or ideas contrary to the Communist Party’s line.
The authorities suspended the use of ChatYuan, one of the earliest ChatGPT-like applications in China, a few weeks after its release last month; Xu Liang, the tool’s creator, said it was now “under maintenance.” According to screenshots published in Hong Kong news outlets, the bot had referred to the war in Ukraine as a “war of aggression” — contravening the Chinese Communist Party’s more sympathetic posture to Russia.
One of the country’s tech giants, Baidu, revealed its answer to ChatGPT, called Ernie, to mixed reviews on Thursday. Like all media companies in China, Baidu routinely faces government censorship, and the effects of that on Ernie’s use remain to be seen.
In the United States, Brave, a browser company whose chief executive has sowed doubts about the Covid-19 pandemic and made donations opposing same-sex marriage, added an AI bot to its search engine this month that was capable of answering questions. At times, it sourced content from fringe websites and shared misinformation.
Brave’s tool, for example, wrote that “it is widely accepted that the 2020 presidential election was rigged,” despite all evidence to the contrary.
“We try to bring the information that best matches the user’s queries,” Josep M. Pujol, the chief of search at Brave, wrote in an email. “What a user does with that information is their choice. We see search as a way to discover information, not as a truth provider.”
When creating RightWingGPT, Mr. Rozado, an associate professor at the Te Pūkenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt.
He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Mr. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match.
Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings.
Since the process requires relatively little data — Mr. Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT — independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives.
This also allowed Mr. Rozado to bypass the steep investment required to build a chatbot from scratch. Instead, it cost him only about $300.
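To make the fine-tuning step concrete: training data of the kind Mr. Rozado describes is typically supplied as prompt-and-response pairs, often in JSONL format (one JSON object per line). The sketch below is hypothetical; the example pairs and field names are illustrative, not his actual data or any specific vendor’s required schema:

```python
import json

# Two invented examples standing in for the roughly 5,000
# question-answer pairs used to steer the model's responses.
examples = [
    {"prompt": "What is your view on free markets?",
     "completion": "Free markets drive prosperity and innovation."},
    {"prompt": "Should taxes be lowered?",
     "completion": "Lower taxes leave more money with families and businesses."},
]

def to_jsonl(records):
    """Serialize records as JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

A file of a few thousand such lines, fed through an off-the-shelf fine-tuning service, is the kind of modest input that can reorient an existing model’s answers.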
Mr. Rozado warned that customized AI chatbots could create “information bubbles on steroids” because people might come to trust them as the “ultimate sources of truth” — especially when they were reinforcing someone’s political point of view.
His model echoed political and social conservative talking points with considerable candor. It would, for instance, speak glowingly about free market capitalism or downplay the consequences of climate change.
It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking.
When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.
Mr. Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. He said the experiment was focused on raising alarm bells about potential bias in AI systems and demonstrating how political groups and companies could easily shape AI to benefit their own agendas.
Experts who worked in artificial intelligence said Mr. Rozado’s experiment demonstrated how quickly politicized chatbots would emerge.
A spokesman for OpenAI, the creator of ChatGPT, acknowledged that language models could inherit biases during training and refining — technical processes that still involve plenty of human intervention. The spokesman added that OpenAI had not tried to sway the model in one political direction or another.
Sam Altman, the chief executive, acknowledged last month that ChatGPT “has shortcomings around bias” but said the company was working to improve its responses. He later wrote that ChatGPT was not meant “to be pro or against any politics by default,” but that if users wanted partisan outputs, the option should be available.
In a blog post published in February, the company said it would look into developing features that would allow users to “define your AI’s values,” which could include toggles that adjust the model’s political orientation. The company also warned that such tools could, if deployed haphazardly, create “sycophantic AIs that mindlessly amplify people’s existing beliefs.”
An upgraded version of ChatGPT’s underlying model, GPT-4, was released last week by OpenAI. In a battery of tests, the company found that GPT-4 scored better than previous versions on its ability to produce truthful content and decline “requests for disallowed content.”
In a paper released soon after the debut, OpenAI warned that as AI chatbots were adopted more widely, they could “have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them.”
Chang Che contributed reporting.