🧠 AI Ethics: Bias Amplified
PLUS: Does Seeing Bias Help Us Recognize Our Own?
Welcome back, AI prodigies!
In today’s Sunday Special:
🍿Prelude
💭Change My Mind
👀Bias Between Our Eyes
💡A Worthy Ideal
🥊Reality Check
Read Time: 6 minutes
🎓Key Terms
Large Language Models (LLMs): AI systems pre-trained on vast amounts of data to generate text in response to user queries.
Cognitive Dissonance: mental discomfort that occurs when our beliefs are contradicted by new information.
🩺 PULSE CHECK
Will we control AI more than it controls us? Vote Below to View Live Results
🍿PRELUDE
AI is often called a mirror of humanity—and not always in a positive sense. After all, the Large Language Models (LLMs) that underpin the most popular AI tools today train on vast amounts of unfiltered, flawed, and sometimes even unethical data scraped from all over the Internet. Since the Internet is full of biased information, AI is bound to absorb, perpetuate, and even amplify it.
A growing number of studies show that generative AI (GenAI) tools, such as Midjourney and OpenAI’s DALL-E 3, often perpetuate regressive stereotypes about gender, race, and sexual orientation. AI-powered platforms are also used to create deepfakes: images, videos, or audio manipulated with AI to appear real and spread misinformation.
As more people use AI in their work, studies, and homes, and inhabit online environments where AI plays an increasingly important role, we cannot underestimate the impact of its embedded bias or the harm its misuse can do, and is already doing. Still, we shouldn’t overlook how AI could help us overcome those issues and, ultimately, bring out the best in us, not the worst.
💭CHANGE MY MIND
When OpenAI’s ChatGPT was first released in 2022, it quickly went viral across social media platforms as people shared examples of its capabilities, everything from writing kids’ stories to planning trips and recommending recipes based on the ingredients already in your fridge. But what if conversational chatbots like OpenAI’s ChatGPT could also change people’s minds, particularly about crucial scientific and political issues?
A recent study by researchers at the University of Wisconsin–Madison (UW–Madison) tested whether a short dialogue with a conversational chatbot could alter a user’s perceptions, or at least expand their understanding. They asked over 3,000 people, differing in gender, race, education, and opinions, to have real-time conversations with GPT-3, a precursor to OpenAI’s ChatGPT, about climate change and Black Lives Matter (BLM).
After analyzing 20,000 dialogues, the researchers found that the roughly 25% of participants who least supported the primary tenets of climate change or BLM reported far more dissatisfaction with their interactions than everyone else. However, the conversational chatbot left them more informed and even shifted their thinking on both topics in a positive direction. The hundreds of people who reported the lowest levels of agreement with the scientific consensus on climate change moved a combined 6% closer to the supportive end of the scale.
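For the curious, here’s a back-of-the-envelope sketch of how a shift like that might be computed from pre- and post-conversation survey scores. The numbers below are hypothetical and purely for illustration; the study’s actual data and methodology aren’t reproduced here.

```python
# Illustrative only: hypothetical pre/post survey scores on a 0-100 scale,
# where 100 = full agreement with the scientific consensus on climate change.
# This is NOT the UW-Madison study's data or analysis pipeline.

def mean_shift(pre_scores, post_scores):
    """Average movement toward the supportive end of the scale.
    Because the scale spans 0-100, a raw point shift doubles as a
    percentage of the scale's full range."""
    assert len(pre_scores) == len(post_scores)
    shifts = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(shifts) / len(shifts)

# Hypothetical low-agreement participants, before and after chatting:
pre = [10, 15, 20, 12, 18]
post = [16, 20, 27, 18, 24]
print(f"Moved {mean_shift(pre, post):.0f}% closer to the supportive end")  # -> 6%
```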
As the study’s authors point out, this could be due to cognitive dissonance, which can sometimes motivate people to update their opinions. Keep in mind that the study used merely a precursor to OpenAI’s ChatGPT. Could a more advanced, more persuasive conversational chatbot have an even greater impact? Perhaps.
👀BIAS BETWEEN OUR EYES
Carey Morewedge, a Professor of Marketing at Boston University (BU), published a research article called “People See More of Their Biases in Algorithms” and uncovered something equally interesting. Morewedge sought to discover whether seeing forms of discrimination (e.g., racism, sexism, and ageism) in algorithms could help us recognize our own. To this end, he devised a series of experiments with fictional Airbnb listings, each including a few pieces of information, and invited over 6,000 participants to rate how likely they were to rent each one. The participants were then told about a research finding explaining how a host’s characteristics, like race, gender, attractiveness, or age, might bias the ratings. Next, they were asked to spot the bias in ratings that came either from real algorithms or from “algorithms” that were, in fact, the participants’ own choices in disguise.
Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions—even when those decisions were the same. Commenting on the research, Morewedge said: “Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies or a tool that can help us better ourselves.”
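To make the setup concrete, here’s a minimal sketch of one way bias could be surfaced in a batch of listing ratings: compare average ratings across a single host attribute. The data and the attribute are hypothetical stand-ins, not the study’s actual materials.

```python
# Illustrative only: hypothetical (rating, host_is_older) pairs on a 1-7
# scale. A real analysis would control for listing quality and more.

from statistics import mean

ratings = [(6, False), (5, False), (7, False), (4, True), (3, True), (5, True)]

younger_hosts = [r for r, is_older in ratings if not is_older]
older_hosts = [r for r, is_older in ratings if is_older]

gap = mean(younger_hosts) - mean(older_hosts)
print(f"Average rating gap (younger - older hosts): {gap:.2f}")
# A persistent gap unexplained by listing quality would suggest age bias,
# whether the rater believes the ratings came from an algorithm or not.
```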
💡A WORTHY IDEAL
The battle for our attention in today’s digital ecosystem is frequently won by the loudest, flashiest, cheapest, most biased, most polarizing, and most enraging content, products, and people. For now, AI—specifically, generative AI (GenAI)—tends to add further fuel to that online dumpster fire rather than try to extinguish it. Some experts even predict that by 2026, 90% of all online content may be AI-generated.
At the same time, AI is increasingly used to shape what we watch, read, buy, consume, and write. Search engines, social media platforms, and everyday consumer products like Grammarly often already rely on generative AI (GenAI), sometimes heavily, to function.
But what if, instead of continuing the tradition of non-AI algorithms that feed us content regardless of whether it’s helpful, true, thoughtful, or enriching, AI models did the opposite? What if they prioritized our individual and collective well-being instead of whatever attracts the most eyeballs and clicks or pays the most money?
What if they gave us better recommendations, or even nudged us to adopt healthier behaviors and consume diverse viewpoints?
This “better recommendations” approach is the goal of the Meaning Alignment Institute (MAI), a non-profit advocating for “Wise AI,” which it defines as “systems that aren’t just intelligent, but morally astute.” The MAI is currently developing an approach dubbed “Democratic Fine-Tuning,” which could help create Wise AI thanks to a moral graph of values crowdsourced from people everywhere.
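Details of the MAI’s implementation aside, here’s a minimal sketch of how a crowdsourced moral graph might be represented: values as nodes, with edges recording participants’ votes that one value is wiser than another in a given context. Everything below (the class, names, and example values) is illustrative, not the MAI’s actual code.

```python
# A minimal sketch of a crowdsourced "moral graph," assuming nodes are
# articulated values and edges tally votes that one value is wiser than
# another in a given context. Illustrative only; not MAI's implementation.

from collections import defaultdict

class MoralGraph:
    def __init__(self):
        # (context, less_wise_value, wiser_value) -> number of votes
        self.edges = defaultdict(int)

    def add_judgment(self, context, less_wise, wiser):
        """Record one participant's view that `wiser` beats `less_wise`
        in this context (e.g., 'advising a user in distress')."""
        self.edges[(context, less_wise, wiser)] += 1

    def wisest(self, context):
        """Rank values by net incoming votes within a context."""
        score = defaultdict(int)
        for (ctx, src, dst), votes in self.edges.items():
            if ctx == context:
                score[dst] += votes
                score[src] -= votes
        return sorted(score, key=score.get, reverse=True)

graph = MoralGraph()
graph.add_judgment("user in distress", "maximize engagement", "foster resilience")
graph.add_judgment("user in distress", "maximize engagement", "foster resilience")
print(graph.wisest("user in distress"))  # ['foster resilience', 'maximize engagement']
```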
As Joe Edelman and Oliver Klingefjord, the Institute’s co-founders, wrote in a post introducing the approach: “LLMs, unlike recommenders and other Machine Learning (ML) systems that precede them, have the potential to deeply understand our values and desires, and thereby orient social and financial systems around human flourishing.”
In practice, this would mean that our search engines, social media platforms, and the Internet at large would be organized by AI guided by the values we collectively decide matter most in specific contexts. The potential for these content super-synthesizers to help us become better, wiser, and healthier, and even to deepen our humanity, is enormous. The hard part is figuring out how to make it happen.
🥊REALITY CHECK
Douglas Engelbart, an American engineer and early computer pioneer, argued that the purpose of computers is to provide “power-steering for the mind.” In other words, to augment humans, not exploit them.
Technologists have yet to deploy autonomous AI agents across consumer and enterprise ecosystems. At some point, though, high-stakes decision-making could be delegated to AI, too, including in healthcare, politics, the justice system, finance, and even the military. What if that happens before we ensure AI always complements and augments human initiative rather than exploiting it?
In the grand scheme of things, we have two choices. We can continue down the laissez-faire path beloved by techno-optimists and amplify the wealth and power of political and corporate leaders. Or we can educate ourselves on how AI works, consider how to leverage it to achieve our goals, and actively support ballot initiatives and representatives that rein in the excesses without hampering innovation. By doing so, we become part of the solution, shaping the future of AI deployment.
📒FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today’s newsletter?
❤️TAIP Review of the Week
“It’s very well written and informative. It definitely reads like a real expert is writing it.”
REFER & EARN
🎉Your Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving ⚙️Ultimate Prompt Engineering Guide.
Refer 5 friends to enter 🎰July’s $200 Gift Card Giveaway.
Copy and paste this link to others: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER