🧠 How Is AI Influencing Our Decision-Making?
PLUS: What Causes Cognitive Biases?

Welcome back, AI prodigies!
In today’s Sunday Special:
📜The Prelude
💭Why We’re Prone to Exploitation
💪How to Resist Exploitation
🔑Key Takeaway
Read Time: 6 minutes
🎓Key Terms
Generative AI (GenAI): When AI models create new content such as text, images, audio, video, or code.
Machine Learning (ML): Leverages data to recognize patterns and make predictions without explicit instructions from developers.
Natural Language Processing (NLP): The ability of computers to understand, interpret, and generate human language.
🩺 PULSE CHECK
Does AI-powered pricing take away our autonomy?
Vote Below to View Live Results
📜THE PRELUDE
The journey from adolescence to adulthood is often characterized by a growing desire for independence and control over one’s life. This drive for self-determination is a powerful motivator, shaping our goals and aspirations.
Occasionally, we relinquish control when it aligns with other important objectives. We work for others, giving up some autonomy for financial security. We adhere to unwritten social rules like fashion trends to gain acceptance and avoid isolation. In these cases, we’re not surrendering control outright. Instead, we’re consciously limiting our possible choices to simplify our lives.
AI Systems impose yet another layer of constraints on our choices. So, how exactly do they achieve this? And how can we regain control?
💭WHY WE’RE PRONE TO EXPLOITATION
Cognitive Biases, Explained.
Research in behavioral psychology has exposed the frailties of human decision-making. In the popular behavioral science book “Thinking, Fast and Slow,” Israeli-American psychologist Daniel Kahneman identified two Systems of thinking:
System 1 operates automatically and intuitively, dictating activities like walking.
System 2 is slow and logical, driving deliberate behaviors like recalling your workday.
We rely on System 1 for most of our everyday decisions. Although System 1 helps preserve mental resources, it also produces Cognitive Biases, including these four:
Anchoring Bias: We rely too heavily on the first information we receive. For example, when a retailer crosses out a high “original price,” we’re more likely to buy the item.
Framing Effect: We make choices based on how information is presented. For example, when a ground beef package states “75% lean,” we rate it higher than when it states “25% fat.”
Availability Heuristic: We estimate the likelihood of events based on how readily examples come to mind. For example, after hearing about several airplane accidents on the news, we may overestimate the danger of flying.
Status Quo Bias: We prefer the current state of affairs, even though alternative paths exist. For example, even when switching to a different insurance provider would save us money, we often stick with our current insurance provider to avoid the hassle of change.
How Does AI Exploit Cognitive Biases?
For decades, businesses have exploited Cognitive Biases to drive purchasing behavior. Now, AI makes it easier to personalize messages and tailor prices. Here are four ways AI exploits our Cognitive Biases to alter our perception of reality:
AI-Enabled Pricing Algorithms Exploit Anchoring Bias: A ride-hailing service like Uber or Lyft uses ML to determine an initial “high” price, a “surge” price, and a “discount” price. The initial “high” price anchors our expectations, making the “surge” price feel expensive and the “discount” price feel like a great deal even if it isn’t. The AI-enabled algorithms optimize these anchors based on our location, the time of day, and historical data on what we’re willing to pay for a ride (see the sketch after this list).
Conversational Chatbots Exploit the Framing Effect: A wellness conversational chatbot in a healthcare app might emphasize the effects of not exercising: “Sedentary habits can increase the risk of chronic diseases.” Humans are naturally loss-averse, so the possibility of disease motivates us more than the prospect of fitness gains.
Social Media Ads Exploit the Availability Heuristic: News about recent plane crashes floods social media feeds. Advertisers use NLP to analyze the sentiment of comments in those feeds, identifying users who have grown more reluctant to fly. In response, they deploy targeted ads for discounted travel insurance to those users.
AI-Powered Personal Finance Assistants Exploit Status Quo Bias: Budgeting recommendations from an AI-powered personal finance assistant deployed by your bank may subtly reinforce existing spending behaviors. For example, if you frequently dine out, it might suggest budgeting more for restaurants rather than encouraging a shift toward savings.
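To make the pricing example concrete, here’s a minimal sketch in Python of how an anchoring-based pricing algorithm could work. The quote_fare function, the multipliers, and the demand score are illustrative assumptions, not any ride-hailing company’s actual system:

```python
# Hypothetical sketch of anchoring-based dynamic pricing.
# Multipliers and the demand score are illustrative assumptions,
# not any ride-hailing company's actual algorithm.

def quote_fare(base_fare: float, demand_score: float) -> dict:
    """Return anchor, surge, and 'discount' prices for one ride request.

    base_fare    -- cost-based fare the service would accept (e.g., $10.00)
    demand_score -- 0.0 (idle) to 1.0 (peak), estimated upstream by an ML model
    """
    anchor_price = base_fare * 1.6                  # inflated first number the rider sees
    surge_price = base_fare * (1.0 + demand_score)  # scales with predicted demand
    discount_price = anchor_price * 0.85            # "15% off" the anchor, still above base_fare

    return {
        "anchor": round(anchor_price, 2),
        "surge": round(surge_price, 2),
        "discount": round(discount_price, 2),
    }

print(quote_fare(base_fare=10.00, demand_score=0.7))
# {'anchor': 16.0, 'surge': 17.0, 'discount': 13.6}
# The $13.60 "discount" feels like a deal next to the $16.00 anchor,
# even though it sits 36% above the $10.00 base fare.
```

The anchor does purely psychological work: the “discounted” fare only looks cheap because the first number we saw was inflated.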
The Digital World = Exploitation?
As we make more decisions in the digital world, our choices about how we spend our time and money are increasingly prone to exploitation. Short of withdrawing from the digital world entirely, we can’t shield ourselves from it. And despite our efforts to be rational and deliberate, System 1 thinking remains our default mode. So, how can we limit our susceptibility to Cognitive Biases?
💪HOW TO RESIST EXPLOITATION
If AI Systems are directly shaping our behavior, we should know about it. On the GenAI front, most consumers agree that social media platforms, digital storefronts, and healthcare apps should label AI-generated recommendations and explain why they were generated. That explanation can also enhance our decision-making. For example, a healthcare app might notify a user: “Given your sedentary lifestyle over the past five days due to stormy weather, I suggest a 30-minute indoor workout to boost your energy levels, prevent muscle stiffness, and maintain cardiovascular health.” Such clear rationales, shown in a sidebar or pop-up, demystify AI-enabled algorithmic actions. It’s hard to agree or disagree with a recommendation when you don’t know the rationale behind it.
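As an illustration of what such labeling could look like under the hood, here’s a minimal sketch. The LabeledRecommendation class, its field names, and its values are hypothetical, not taken from any real healthcare app:

```python
# Hypothetical structure for a labeled, explainable AI recommendation.
# Class name, fields, and values are illustrative, not a real app's API.
from dataclasses import dataclass, field

@dataclass
class LabeledRecommendation:
    text: str                  # the suggestion shown to the user
    ai_generated: bool         # explicit label that AI produced it
    rationale: str             # plain-language reason behind the suggestion
    signals_used: list[str] = field(default_factory=list)  # data the model relied on

workout_tip = LabeledRecommendation(
    text="Try a 30-minute indoor workout today.",
    ai_generated=True,
    rationale="You logged little activity over the past five days of stormy weather.",
    signals_used=["step_count_last_5_days", "local_weather_history"],
)

# Rendered in a sidebar or pop-up, both the suggestion and its reason are visible.
print(f"{workout_tip.text}\nWhy: {workout_tip.rationale}")
```

Surfacing the rationale alongside the suggestion is what lets a user agree or disagree with it on the merits.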
Prioritizing transparency by labeling AI-generated recommendations is important, but it won’t restore personal agency. That’s up to us. As AI exploits our Cognitive Biases, our agency shrinks, so we must spend additional cognitive resources to stay in control. For example, if a ride-hailing service like Uber raises fares the moment we open the app, we should toggle to an alternative like Lyft to view competing fares. The same check applies to any digital service provider that uses dynamic pricing to drive purchasing behavior. Personal agency in the age of AI requires vigilance and a willingness to question the AI-enabled algorithms shaping our choices inside the apps we use.
Staying vigilant consumes cognitive bandwidth, so we must preserve capacity for consequential System 2 decisions by relying more on System 1 thinking everywhere else. Simplifying weekly routines like meal planning lets those actions become unconscious habits. Productivity strategies, like focusing on the 20% of effort that delivers 80% of the results, help reduce decision fatigue. I know these techniques sound like generic self-help advice, but they’ll be necessary in the AI-enabled future. Simplifying the mundane reclaims lost cognitive bandwidth for System 2 thinking, safeguarding our capacity for independent thought and resisting the allure of AI-enabled algorithmic dependence.
🔑KEY TAKEAWAY
AI Systems exploit our Cognitive Biases to limit our choices, often without us realizing it. While prioritizing transparency through labeling AI-generated recommendations helps, it doesn’t restore personal agency. To regain control, we must simplify our lives. Otherwise, we risk surrendering autonomy and decision rights to others.
📒FINAL NOTE
FEEDBACK
How would you rate today’s email? It helps us improve the content for you!
❤️TAIP Review of The Week
“love the debate on copyright you set!”
REFER & EARN
🎉Your Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving ⚙️Ultimate Prompt Engineering Guide.
Copy and paste this link to friends: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER