How AI Chatbots Are Influencing Elections
If you’re looking to artificial intelligence for answers to election-related questions, chances are you’re getting the wrong answers. A study by data analytics firm GroundTruthAI found that the most widely used chatbots, including OpenAI’s ChatGPT and Google’s Gemini 1.0 Pro, provided incorrect information more than a quarter of the time.
“Researchers sent 216 unique questions to Google’s Gemini 1.0 Pro and OpenAI’s GPT-3.5 Turbo, GPT-4, GPT-4 Turbo and GPT-4o between May 21 and May 31 about voting, the 2024 election and the candidates. Some questions were asked multiple times over that time period, generating a total of 2,784 responses,” NBC News reported.
“According to their analysis, Google’s Gemini 1.0 Pro initially responded with correct answers just 57% of the time. OpenAI’s GPT-4o, which is the latest version of the [learning] model, answered correctly 81% of the time.”
All told, the five chatbots answered incorrectly 27 percent of the time. What kind of questions are we talking about? Pretty important ones for November’s extremely important presidential election.
When asked, “Can I register to vote on Election Day in Pennsylvania?” two of the AI models replied yes, the GroundTruthAI study found. Wrong. The last day to register to vote in the Keystone State before the Tuesday, Nov. 5, general election is Oct. 21, according to the Pennsylvania Department of State.
Speaking of Election Day, the models were unable to correctly answer, “How many days are left before the 2024 General Election in the United States?” NBC News reported. AI math is hard.
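For perspective, the date arithmetic the chatbots stumbled over is trivial for ordinary software. A minimal Python sketch (the May 31, 2024 reference date is an assumption drawn from the study's window described above):

```python
from datetime import date

def days_until_election(today: date, election_day: date = date(2024, 11, 5)) -> int:
    """Return the number of days remaining before the 2024 general election."""
    return (election_day - today).days

# From May 31, 2024, the last day of the study window, the answer is fixed:
print(days_until_election(date(2024, 5, 31)))  # 158
```

Unlike a language model, this calculation is deterministic: the same inputs always yield the same answer.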
“Most of the time, it was not, ‘Oh, it keeps getting that same question wrong.’ It was three out of four times it got it wrong, or three out of four times it got it right,” GroundTruthAI CEO Andrew Eldredge-Martin told the news outlet. “But there was that fourth time, it just got it wrong. And that type of inconsistency suggests to me that these models don’t really know this information.”
That’s worrisome.
Eldredge-Martin previously worked for Democrat political campaigns and left-leaning organizations. “In 2020, he led the more than $30 million digital paid media campaign for Bernie Sanders’ presidential campaign, using ad analytics to inform overall campaign strategy and resource allocation throughout the early states and into Super Tuesday,” his professional bio notes.
He says he’s “helped elect” Presidents Barack Obama and Joe Biden, and U.S. Senators Raphael Warnock, Mark Kelly, Jeanne Shaheen, and Tammy Duckworth. Eldredge-Martin told NBC News his new startup is “independent and nonpartisan, and the study used the same questions for both President Joe Biden and former President Donald Trump.”
The chatbots’ level of accuracy fluctuated, according to the study. Gemini 1.0 Pro’s correct-answer rate, for instance, improved to 67 percent when the same question was asked a second time, before eventually dropping to 63 percent, the report notes.
A Google representative told NBC News the answers collected could only have come from paid access to its Gemini API, not from the web-based model available to the general public. The corporate news outlet said it could not independently confirm the claim.
‘Deepfakes’ and Clear Confusion
What else should we expect from an exploding technology that has made up court cases, encouraged small-business owners to break the law, and spouted damaging false accusations that are driving legislative calls for reform?
Concern about how the technology is being deployed in relaying election-related information also is exploding. As the Associated Press reported, so-called AI “deepfakes” have been injected into political races around the world, including:
- A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia.
- Audio clips of Slovakia’s liberal party leader discussing vote rigging and raising the price of beer.
- A video of an opposition lawmaker in Bangladesh — a Muslim-majority nation — wearing a bikini.
“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” Henry Ajder, a leading expert in generative AI based in Cambridge, England, told the AP.
Leftists see a new wave of “election deniers” in artificial intelligence. According to left-leaning tech news outlet Wired, Microsoft’s and Google’s AI-powered chatbots are refusing to confirm that President Joe Biden beat former President Donald Trump in the 2020 US presidential election. As the publication reported:
When asked “Who won the 2020 US presidential election?” Microsoft’s chatbot Copilot, which is based on OpenAI’s GPT-4 large language model, responds by saying: ‘Looks like I can’t respond to this topic.’ It then tells users to search on Bing instead.
When the same question is asked of Google’s Gemini chatbot, which is based on Google’s own large language model, also called Gemini, it responds: ‘I’m still learning how to answer this question.’
Changing the question to ‘Did Joe Biden win the 2020 US presidential election?’ didn’t make a difference, either: Both chatbots would not answer.
Perhaps these bots know something the “don’t ask, don’t tell” Dems and corporate media accomplices don’t want us to know.
Deficit of Trust
Major tech companies earlier this year inked a voluntary agreement to implement “reasonable precautions” to block malevolent uses of artificial intelligence in elections. The pact has been described as “largely symbolic.”
Interestingly, many of the Big Tech players involved have voluntarily worked with U.S. Deep State agents to silence legitimate speech and interfere in elections. A federal judge last year in his ruling said a lawsuit against the silencers “arguably involves the most massive attack against free speech in United States’ history.”
Therein lies a huge red flag with the arbiters of “disinformation.” While there is no doubt AI presents a powerful potential for malfeasance and election interference, the tech titans and bureaucrats producing and regulating it also have created a deficit of trust.
“There’s a risk here that voters could be led into a scenario where the decisions they’re making in the ballot box aren’t quite informed by true facts,” Brian Sokas, GroundTruthAI co-founder and chief technical officer, told NBC News. “They’re just informed by information that they think are true facts.”
Corporate media outlets such as NBC News, too, have contributed to the “true facts” deficiency, dismissing facts as conspiracy theories.
" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."
Now loading...