AI disinformation: Threats, not solutions – Big Government, Big Media, and Big Tech
Just before the New Hampshire primary, voters were targeted with a deceptive robocall that claimed to be from President Joe Biden, spreading false information about the election. This incident became a major news story, fueling the media's obsession with the dangers of "misinformation" and its calls for intervention from Big Government, Big Tech, and Big Media to combat and suppress misinformation, especially in the age of artificial intelligence.
Democrats condemned the robocall as a “deepfake disinformation” tactic aimed at harming Joe Biden, suppressing votes, and undermining democracy. Left-wing activist Robert Weissman called for federal regulations to police AI disinformation, emphasizing the urgency of the situation. Another activist highlighted the need for collective action from government, political campaigns, and private companies to address the widespread problem of AI disinformation.
A “disinformation” scholar proposed that the responsibility of policing should fall on the “trust and safety teams” at Big Tech companies. USA Today reported on the incident, raising concerns about the role of AI in the 2024 election and prompting a state investigation. To address these concerns, the paper highlighted a bill proposed by Senator Ed Markey and the efforts of Google and Facebook’s parent companies to combat biased or misleading AI.
NPR also called on Big Tech and Big Government to tackle the issue of AI disinformation. All these arguments suggested that rogue actors would be the culprits, while Big Tech, Big Media, and Big Government would play the role of the good guys.
However, subsequent events have revealed the flawed nature of the initial media response. Contrary to the allegations made by New Hampshire Democrats, the robocall was not intended to harm Joe Biden or suppress votes. It was actually a lobbying effort by a Democratic consultant who sought more regulation against disinformation, as reported by various media outlets.
Furthermore, Google, one of the entities expected to police AI misinformation, launched its AI chatbot Gemini, which quickly proved to be a source of politically biased “misinformation.” Gemini generated fake negative reviews for a book about Google’s left-wing bias, attributing them to individuals who had never made such statements. This incident exposed Google’s AI as a blatant purveyor of lies.
As I warned before Gemini's launch and before the identity of the robocall perpetrator was revealed, relying on electronic records stored on the internet allows those in power to rewrite history. It is therefore unrealistic to expect tech giants and governments to be the guardians against revisionist AI disinformation, as NPR and others suggest.
Asking Google or the FBI to protect us from disinformation is akin to entrusting foxes with the safety of your hens.
Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer.