🧠 AI and Truth: A Double-Edged Sword

PLUS: AI Tools Help You Find Objective Truth and Clean Up Your News Diet

Welcome back, AI prodigies!

In today’s Sunday Special:

  • 😇4 Types of Truth

  • 🤖AI, the Truth-Teller

  • 🦾AI, Misinformation Maker

  • 🚨Reality Check

Read Time: 6 minutes

🎓Key Terms

  • Misinformation: false or inaccurate information. Simply put, getting the facts wrong.

  • Disinformation: deliberately misleading or biased information that manipulates narratives or facts.

  • Retrieval-Augmented Generation (RAG): a framework that makes language models more reliable and accurate by retrieving relevant, up-to-date information related to a user’s query and supplying it to the model alongside the prompt.

  • Metadata: a set of data that provides information about the properties, history, sources, or formatting of other data.

😇4 TYPES OF TRUTH

Regardless of political persuasion, media outlets often weaponize fears that malicious actors will subvert the truth. “Elections and Disinformation Are Colliding Like Never Before in 2024” (The New York Times). “Four Times Big Tech Censored Covid ‘Misinformation’ That Turned Out To Be True” (The Daily Wire). But the truth is often their truth, and the truth that matters is infamously messy. To cut through the noise, consider these four types of truth, courtesy of the Human Systems Dynamics Institute:

  1. Objective Truth: what is provable independently of a conscious person (e.g., the moon moves across the sky each night).

  2. Normative Truth: what a group agrees is true (e.g., we agree the Earth is flat).

  3. Subjective Truth: how an individual sees or experiences the world (e.g., I am healthy).

  4. Complex Truth: combines elements from objective, normative, and subjective truth to create something most valuable to a group or individual at a given time (e.g., fighting climate change requires higher taxes).

Uncovering any of these truths requires human input; even objective truth depends on humans applying the scientific method and replicating experiments. And even with diligent scientific research, objective truth is hard to pin down because the burden of proof required for certainty is so high. Generative AI, which lacks human intelligence, cannot find the truth on its own; it repackages and recombines media that humans created. However, machine learning algorithms can improve the accuracy of search engines, which can lead us closer to objective truth.

🤖AI, THE TRUTH-TELLER

One startup leveraging this approach is Consensus, an AI-powered search engine that answers scientific questions by combing through published research papers. Unlike leading chatbots like OpenAI’s ChatGPT or Google’s Gemini, Consensus cites its sources accurately, synthesizing papers from prestigious scientific journals. It also provides a Consensus Meter, which summarizes how the literature answers a yes-or-no question across three buckets: “Yes,” “Possibly,” or “No.” For instance, when asked whether direct cash transfers reduce poverty, Consensus reported “Yes” at 83% and “Possibly” at 17% after weighing the most reliable studies, as indicated by citation counts, trial quality, journal prestige, and other factors. Scientific truth, though ever-evolving, can be uncovered through published journals.
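
Consensus hasn’t published the internals of its meter, but the idea is easy to sketch: take each relevant study’s finding, weight it by a reliability score, and report the weighted shares. Here’s a minimal sketch in Python, where the weighting scheme and field names are illustrative assumptions, not Consensus’s actual method:

```python
from dataclasses import dataclass

@dataclass
class Study:
    finding: str    # "yes", "possibly", or "no"
    citations: int
    quality: float  # 0.0-1.0 reliability score (hypothetical)

def consensus_meter(studies: list[Study]) -> dict[str, float]:
    """Weighted percentage support for each finding.

    The weighting (a square-root citation boost times a quality score)
    is an illustrative assumption, not Consensus's published method.
    """
    totals = {"yes": 0.0, "possibly": 0.0, "no": 0.0}
    for s in studies:
        totals[s.finding] += (1 + s.citations) ** 0.5 * s.quality
    grand_total = sum(totals.values()) or 1.0
    return {k: round(100 * v / grand_total, 1) for k, v in totals.items()}

# Toy run: do direct cash transfers reduce poverty?
studies = [
    Study("yes", citations=240, quality=0.9),
    Study("yes", citations=95, quality=0.8),
    Study("possibly", citations=30, quality=0.6),
]
print(consensus_meter(studies))  # {'yes': 86.7, 'possibly': 13.3, 'no': 0.0}
```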

Political truths, on the other hand, cannot. They are typically complex truths, resting on a thesis about human nature informed by a messy mix of scientific evidence, lived experiences, personality, genetics, and other hard-to-define factors. Up-and-coming news platforms are leveraging AI to reduce political bias in the news. Ground uses machine learning to classify the latest stories by their political bias, reporting the percentage of each event’s coverage from right-leaning, left-leaning, and moderate media outlets. According to its Blindspot feature, 67% of the coverage of a recent story about a “migrant accused of attacking cops released from jail” came from right-wing outlets and 33% from moderate ones. For Ground to reduce political polarization, readers must be aware of their biases and actively confront them by consuming news that conflicts with their worldview.
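
Ground’s bias classifiers are proprietary, but the Blindspot arithmetic itself is simple: tag each outlet covering a story with a bias rating, then report the shares. A minimal sketch with made-up outlets and ratings:

```python
from collections import Counter

# Hypothetical outlet-to-bias ratings; Ground maintains its own database.
OUTLET_BIAS = {
    "Outlet A": "left",
    "Outlet B": "moderate",
    "Outlet C": "right",
    "Outlet D": "right",
}

def coverage_breakdown(outlets_covering_story: list[str]) -> dict[str, float]:
    """Percentage of a story's coverage from each bias category."""
    counts = Counter(OUTLET_BIAS[outlet] for outlet in outlets_covering_story)
    total = sum(counts.values())
    return {bias: round(100 * n / total, 1) for bias, n in counts.items()}

# A Blindspot-style skew: two right-leaning outlets, one moderate.
print(coverage_breakdown(["Outlet C", "Outlet D", "Outlet B"]))
# -> {'right': 66.7, 'moderate': 33.3}
```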

🦾AI, MISINFORMATION MAKER

Misinformation, which is most nefarious when woven into complex truths, warps public opinion and deepens polarization. Chatbots trained on false information will reproduce versions of that false training data. But even responsibly trained chatbots, such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama 2, often hallucinate, regardless of the prompter’s sophistication. Though techniques like Retrieval-Augmented Generation (RAG) mitigate hallucinations by grounding a model’s answers in reliable external sources rather than its training data alone, misinformation is still unavoidable.
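
To see why RAG helps, here’s a minimal end-to-end sketch of the idea, assuming a toy corpus and a bag-of-words retriever. Production systems use embedding models and vector databases, and a real LLM call would consume the final prompt string:

```python
import math
from collections import Counter

# A stand-in corpus; a real system would index trusted documents.
DOCS = [
    "The James Webb Space Telescope launched in December 2021.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "The moon appears to move across the sky due to Earth's rotation.",
]

def vectorize(text: str) -> Counter:
    """Toy bag-of-words vector; real systems use embedding models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Instruct the model to answer from retrieved sources, not from
    # whatever it memorized during training; this is the core of RAG.
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When did the James Webb telescope launch?"))
```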

Face swappers and fake video generators are accessible and ubiquitous. Plus, software agents that perform automated tasks, such as AutoGPT, can generate and distribute fake content faster than social media moderators can remove it. Although autonomous agents have existed for decades, open-source projects like AutoGPT make building them easier than ever. A software developer recently built a political Twitter bot to refute Russian and Chinese media outlets criticizing United States politicians. One study found that users trust AI-generated tweets more than real ones, though that may be more of an indictment of Twitter users’ critical thinking skills than of generative AI tools. With the barrier to unleashing misinformation at scale near an all-time low, and disagreement about what constitutes misinformation near a high, things will likely get worse before they get better.

🚨REALITY CHECK

Despite the best efforts of social media companies, media watchdogs, and government authorities, subversions of objective truth will always propagate; the scale at which the internet operates is too big. See exactly how many hours of video users upload to YouTube below:

🩺 PULSE CHECK

How much content do users upload to YouTube per hour? No Googling!

Vote Below to View Answer


Given this reality, the responsibility to assess truthfulness falls on the final consumers of content. In an ideal world, misinformation about objective truths would not proliferate. But our world is far from ideal. Subjective, normative, and complex truths are unverifiable for all intents and purposes. As information consumers, we should go back to the basics:

  • Employ a healthy dose of skepticism.

  • Formulate the best argument for a different viewpoint.

  • Breathe before publishing content based on negative emotions.

When aggregated across time and space, billions of microdecisions shape our worldview and indirectly nudge millions of other users. As the line between human- and AI-generated content blurs, the basics become the foundation of a healthy relationship with information. Though some AI tools can bring us closer to uncovering complex truths, they’re nowhere near replacing a trained mind.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

❤️AI Pulse Review of The Week

“Another Sunday, another Sunday Special!😊”

-Clarence (⭐️⭐️⭐️⭐️⭐️Nailed it!)

🎁NOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • ⚙️150 ChatGPT prompts for Copywriting

  • ⚙️325 ChatGPT prompts for Email Marketing

  • 📆Simple Project Management Board

  • ⏱Time Tracker
