
🧠 AI and Misinformation: Distributor or Detector?

PLUS: A Twitter Disinformation Machine, Metadata, and Media Literacy

Welcome back, AI prodigies!

In today’s Sunday Special:

  • 🎯The Truth is Still Hard to Come By

  • 🤖AI, The Misinformation Maker

  • 💼Corporate Responsibility

  • Reality Check

Read Time: 5 minutes

🎓Key Terms

  • Misinformation: false or inaccurate information—getting the facts wrong.

  • Autonomous Agents: software programs that respond to states and events in their environment independently of direct instruction from a user or owner.

  • Metadata: “data about data,” or information that describes one or more aspects of other data.

🎯THE TRUTH IS STILL HARD TO COME BY

Two weeks ago, we highlighted the scarcity of objective truth and introduced a potential solution: Consensus, an AI-powered search engine that answers scientific questions and empowers you to explore the underlying nuances of published research papers. Scientific truth, though ever-evolving, can at least be traced through scientific journals. But subjective truths, such as the facts of rapidly disseminated news stories, are far more elusive. When controversial images, videos, or tweets go viral, commentators pounce, cherry-picking facts to fit their narratives. Yet the harm of journalistic “spin,” context removal, and opinion presented as fact in public discourse pales in comparison to deliberate, malicious fabrication. Though punishable under U.S. law in certain circumstances (see the Fox News and CNN settlements), defamation, libel, and slander are far less common than misinformation, which is false information spread without intent to deceive or manipulate. In our compressed news cycles, misinformation rapidly hardens into subjective truth for some political or subcultural groups and into egregious falsehood for others.

🤖AI, THE MISINFORMATION MAKER

Misinformation, which warps public opinion and deepens polarization, may be supercharged by generative AI tools. Unsurprisingly, chatbots trained on false information will reproduce versions of that training data. But even responsibly trained chatbots, such as ChatGPT, Bard, and Llama 2, often hallucinate despite your best prompting efforts. Free BS-generating tools now exist: swap faces with Swapface and fabricate videos with deepfake generators, all at no cost. Plus, autonomous agents that perform predetermined tasks, such as AutoGPT, can generate and distribute fake content faster than social media moderators can remove it. (Although autonomous agents have existed for decades, open-source projects like AutoGPT provide unprecedented access to development.) For an upfront cost of just $400, one software developer recently built a political Twitter bot that refutes Russian and Chinese media outlets criticizing U.S. politicians. With the barrier to unleashing misinformation at an all-time low and disagreement about what constitutes misinformation at an all-time high, the path forward remains unclear.
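To make “autonomous agent” concrete, here’s a minimal sketch of the generate-act-observe loop such tools run. The `generate_post` and `publish` helpers are hypothetical stand-ins for an LLM call and a social media client, not AutoGPT’s actual API:

```python
# A minimal sketch of an autonomous agent's core loop (not AutoGPT's real API).
# generate_post and publish are hypothetical stand-ins for an LLM call and a
# social media client; the point is that no human approves each step.
import time

def generate_post(goal: str, history: list[str]) -> str:
    """Hypothetical LLM call: draft content that advances the goal."""
    return f"Draft responding to: {history[-1] if history else goal}"

def publish(post: str) -> None:
    """Hypothetical social media client: push the content live."""
    print(f"Posted: {post}")

def run_agent(goal: str, max_steps: int = 5) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        post = generate_post(goal, history)  # plan: decide what to say next
        publish(post)                        # act: post without human review
        history.append(post)                 # observe: feed results back in
        time.sleep(1)                        # pace the loop

run_agent("Reply to coverage of topic X")    # runs unattended once started
```

The unsettling part is structural: once the loop starts, nothing in it waits for a person to approve what gets posted.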

💼CORPORATE RESPONSIBILITY

Following guidance from the International Press Telecommunications Council (IPTC), several technology companies, including Google and Microsoft, have pledged to watermark synthetic media, also known as AI-generated content, and embed metadata in it. Metadata, embedded in image or video files, describes the camera, software, and settings used in their creation. But these actions are more of a shield than a solution. When victims of AI-generated disinformation campaigns accuse tech companies of complicity or even wrongdoing, the companies can claim alignment with international media standards. Few end users will analyze the metadata of each piece of content to judge its veracity, even though tools for both developers and consumers exist.
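For the curious, here’s a minimal sketch of what developer-side metadata inspection looks like, using the Python imaging library Pillow to dump a file’s EXIF tags. The file name is hypothetical, and AI-provenance credentials generally require dedicated verification tools beyond basic EXIF:

```python
# A minimal sketch: dump the EXIF metadata embedded in an image file.
# Requires Pillow (pip install Pillow); the file name below is hypothetical.
from PIL import Image, ExifTags

def print_exif(path: str) -> None:
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found (it may have been stripped).")
            return
        for tag_id, value in exif.items():
            # Map numeric tag IDs to human-readable names where known.
            name = ExifTags.TAGS.get(tag_id, f"Unknown({tag_id})")
            print(f"{name}: {value}")

print_exif("suspect_photo.jpg")
```

Note that metadata like this is trivial to strip, and many platforms remove it automatically on upload, which underscores why pledges built on it fall short of a solution.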

REALITY CHECK

The buck stops with consumers of content. In an ideal world, the producers of content-generating models would be held responsible for the false content those models produce. But for all intents and purposes, subjective truth is unverifiable. Further, what Big Tech does, and how regulators respond, is beyond most technology users’ control. Regardless, most Americans don’t trust technology companies to classify information correctly, let alone the opaque detection models they produce. Most of us would be better off focusing on what is within our control: the hundreds of content-related decisions (watch, read, share, like, comment, scroll, etc.) we all make daily. These decisions shape our worldview and indirectly nudge millions of other users by providing feedback to powerful algorithms. All content, AI-generated or not, should be treated with a healthy dose of skepticism, common sense, and full context. ‘Healthy’ will never be defined for you. Many believe common sense is not all that common, and context is often hard to come by. The toughest questions lack answers; they always will.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

❤️AI Pulse Review of The Week

“I just read last week’s Sunday Special. I love your ability to capture the context and lived reality of AI trends.”

-Jonah (⭐️⭐️⭐️⭐️⭐️Nailed it!)

🎁NOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • ⚙️150 ChatGPT Prompts for Copywriting

  • ⚙️325 ChatGPT Prompts for Email Marketing

  • 📆Simple Project Management Board

  • ⏱Time Tracker
