🧠 AI Ethics: Bias Amplified
PLUS: Does Seeing Bias Help Us Recognize Our Own?

Welcome back, AI prodigies!
In today's Sunday Special:
🌿Prelude
💭Change My Mind
👀Bias Between Our Eyes
💡A Worthy Ideal
🔥Reality Check
Read Time: 6 minutes
🔑Key Terms
Large Language Models (LLMs): AI systems pre-trained on vast amounts of data to generate text in response to user queries.
Cognitive Dissonance: mental discomfort that occurs when our beliefs are contradicted by new information.
🩺 PULSE CHECK
Will we control AI more than it controls us? Vote Below to View Live Results
🌿PRELUDE
AI is often called a mirror of humanity, and not always in a positive sense. After all, the Large Language Models (LLMs) that underpin the most popular AI tools today train on vast amounts of unfiltered, flawed, and sometimes even unethical data scraped from all over the Internet. Since the Internet is full of biased information, AI is bound to absorb, perpetuate, and even amplify it.
A growing number of studies show that generative AI (GenAI) tools, such as Midjourney and OpenAI's DALL-E 3, often perpetuate regressive stereotypes about gender, race, and sexual orientation. AI-powered platforms are also used to create deepfakes: images, videos, or audio manipulated with AI to appear real and spread misinformation.
As more people use AI at work, in their studies, and at home, and inhabit online environments where AI plays an increasingly important role, it's clear that we cannot underestimate the impact of its embedded bias or the harm its misuse can do, and is already doing. Still, we shouldn't overlook how AI could help us overcome those issues and, ultimately, bring out the best in us, not the worst.
💭CHANGE MY MIND
When OpenAI's ChatGPT was first released in 2022, it quickly went viral across social media platforms as people shared examples of its capabilities, everything from writing kids' stories to planning trips and recommending recipes based on the ingredients already in your fridge. But what if conversational chatbots like OpenAI's ChatGPT could also...change people's minds? In particular, attitudes toward crucial scientific and political issues?
A recent study by researchers at the University of Wisconsin-Madison (UW-Madison) tested whether a short dialogue with a conversational chatbot could alter users' perceptions, or at least expand their understanding. They asked over 3,000 people, differing in gender, race, education, and opinions, to have real-time conversations about climate change and Black Lives Matter (BLM) with a chatbot built on OpenAI's GPT-3.
After analyzing 20,000 dialogues, the researchers found that roughly 25% of the people who least supported the primary tenets of climate change or BLM reported far more dissatisfaction with their interactions than everyone else. However, the conversational chatbot left them more informed and even positively shifted their thinking on both topics. The hundreds of people who reported the lowest levels of agreement with the scientific consensus on climate change moved a combined 6% closer to the supportive end of the scale.
As the study's authors point out, this could be because they experienced cognitive dissonance, which can sometimes motivate people to update their opinions. Keep in mind that the study relied on a mere precursor to OpenAI's ChatGPT. Could a more advanced, skilled, persuasive conversational chatbot have an even more significant impact? Perhaps.
👀BIAS BETWEEN OUR EYES
Carey Morewedge, a Professor of Marketing at Boston University (BU), uncovered something equally interesting in a research article titled "People See More of Their Biases in Algorithms." Morewedge sought to discover whether seeing forms of discrimination (e.g., racism, sexism, and ageism) in algorithms could help us recognize our own. To this end, he devised a series of experiments with fictional Airbnb listings, each including a few pieces of information, and invited over 6,000 participants to rate how likely they were to rent each one. The participants were then told about a research finding explaining how a host's characteristics, like race, gender, attractiveness, or age, might bias the ratings. Next, they were asked to spot that bias in ratings attributed either to real algorithms or to algorithms that were, in fact, the participants' own choices in disguise.
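To make the "spot the bias" task concrete, here is a toy sketch in Python. The numbers and the `host_age` attribute are entirely invented for illustration (not the study's actual materials or method); it simply shows how a gap in average ratings across a host characteristic might be measured:

```python
# Toy illustration only: invented ratings, not data from Morewedge's study.
# Each entry pairs a hypothetical 1-10 "would rent" score with one host
# attribute ("host_age" is a made-up stand-in for race, gender, etc.).
ratings = [
    {"host_age": "young", "score": 8},
    {"host_age": "young", "score": 7},
    {"host_age": "older", "score": 5},
    {"host_age": "older", "score": 4},
]

def mean_score(group: str) -> float:
    """Average rating for listings whose host belongs to `group`."""
    scores = [r["score"] for r in ratings if r["host_age"] == group]
    return sum(scores) / len(scores)

# A simple "bias gap": the difference in average ratings between groups.
# Participants noticed gaps like this more readily when the ratings were
# labeled as an algorithm's output than when they were their own.
gap = mean_score("young") - mean_score("older")
print(f"Average rating gap by host age: {gap:+.1f}")  # prints +3.0
```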
Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions, even when those decisions were the same. Commenting on the research, Morewedge said: "Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies or a tool that can help us better ourselves."
💡A WORTHY IDEAL
The battle for our attention in today's digital ecosystem is frequently won by the loudest, flashiest, cheapest, most biased, polarized, and enraging content, products, and people. For now, AI, and GenAI in particular, tends to add further fuel to that online dumpster fire rather than help extinguish it. Some experts even predict that by 2026, 90% of all online content may be AI-generated.
However, it's also increasingly used to dictate what we watch, read, buy, consume, or write. Search engines, social media platforms, and everyday consumer products, like Grammarly, often already rely on GenAI, sometimes heavily, to function.
But what if, instead of continuing the tradition of non-AI algorithms that feed us content regardless of whether it's helpful, true, thoughtful, or enriching, AI models did the opposite? What if they prioritized our individual and collective well-being instead of whatever attracts the most eyeballs and clicks or pays the most money?
What if they gave us better recommendations or even nudged us to adopt healthier behaviors and consume diverse viewpoints?
That better-recommendations approach is the goal of the Meaning Alignment Institute (MAI), a non-profit that advocates for "Wise AI," which it defines as "systems that aren't just intelligent, but morally astute." The MAI is currently developing an approach dubbed "Democratic Fine-Tuning," which could help create Wise AI thanks to a moral graph of values crowdsourced from people everywhere.
As Joe Edelman and Oliver Klingefjord, the institute's co-founders, wrote in a post introducing it: "LLMs, unlike recommenders and other Machine Learning (ML) systems that precede them, have the potential to deeply understand our values and desires, and thereby orient social and financial systems around human flourishing."
In practice, this would mean our search engines, social media platforms, and the Internet at large were organized by AI guided by the values we collectively decided mattered most in specific contexts. The potential for these content super-synthesizers to help us become better, wiser, and healthier, and even to deepen our humanity, is endless. The only problem is figuring out how to make it happen.
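For a rough sense of what a crowdsourced moral graph could look like as a data structure, here is a minimal sketch. It is our own simplification, not MAI's actual implementation: we assume values are nodes, and each edge records votes that one value is "wiser than" another in a given context. The context, values, and `MoralGraph` class are all hypothetical.

```python
from collections import defaultdict

# Minimal sketch of a "moral graph" (our simplification, not MAI's code):
# values are nodes; a vote on the edge (a -> b) in some context means a
# participant judged value b wiser than value a in that context.
class MoralGraph:
    def __init__(self):
        # context -> {(from_value, to_value): vote count}
        self.votes = defaultdict(lambda: defaultdict(int))

    def add_vote(self, context: str, from_value: str, to_value: str):
        """Record one participant's judgment that `to_value` is wiser
        than `from_value` within `context`."""
        self.votes[context][(from_value, to_value)] += 1

    def wisest(self, context: str):
        """Value with the best net score in a context: votes naming it
        wiser, minus votes naming something else wiser than it."""
        score = defaultdict(int)
        for (src, dst), n in self.votes[context].items():
            score[dst] += n
            score[src] -= n
        return max(score, key=score.get) if score else None

# Hypothetical usage with made-up values and context:
graph = MoralGraph()
graph.add_vote("advising a distressed user", "engagement", "care")
graph.add_vote("advising a distressed user", "engagement", "care")
graph.add_vote("advising a distressed user", "care", "honesty")
print(graph.wisest("advising a distressed user"))  # -> "care" (net +1)
```

In MAI's published framing, such judgments are gathered through structured dialogues with participants and then used to fine-tune models; the sketch above only shows the rough shape of the crowdsourced data.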
🔥REALITY CHECK
Douglas Engelbart, an American engineer and early computer pioneer, argued that the purpose of computers is to provide "power-steering for the mind." In other words, to augment humans, not exploit them.
Technologists have yet to deploy autonomous AI agents across consumer and enterprise ecosystems. At some point, high-stakes decision-making could be delegated to AI, too, including in healthcare, politics, justice, finance, or even the military. What if that happens before we ensure AI always complements and augments human initiative rather than exploiting it?
In the grand scheme of things, we have two choices. We can continue down the laissez-faire path beloved by techno-optimists, amplifying the wealth and power of political and corporate leaders. Or we can educate ourselves on how AI works, consider how to leverage it to achieve our goals, and actively support ballot initiatives and representatives that rein in the excesses without hampering innovation. By doing so, we become part of the solution, shaping the future of AI deployment.
📝FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today's newsletter?
❤️TAIP Review of the Week
"It's very well written and informative. It definitely reads like a real expert is writing it."
REFER & EARN
🎉Your Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving 🎁3 Simple Steps to Turn ChatGPT Into an Instant Expert.
Refer 5 friends to enter 💰July's $200 Gift Card Giveaway.
Copy and paste this link to others: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER