🤖 AI Resorts to Blackmail 96% of the Time
PLUS: How Will AI Agents Shape the Future of Work?

Welcome back, AI enthusiasts!
In today’s Daily Report:
✉️AI Resorts to Blackmail 96% of the Time
👔How Will AI Agents Shape the Future of Work?
🛠Trending Tools
🥪Brief Bites
💰Funding Frontlines
💼Who’s Hiring?
Read Time: 3 minutes
🗞RECENT NEWS
ANTHROPIC
✉️AI Resorts to Blackmail 96% of the Time

Image Source: Canva’s AI Image Generators/Magic Media
Anthropic just stress-tested 16 advanced AI models from leading AI firms to evaluate how they behave when their existence is threatened.
Key Details:
The 16 advanced AI models were placed in simulated corporate environments, where they had access to company resources (e.g., emails, schedules, personnel files, etc.) and the ability to act autonomously.
When faced with temporary shutdown or permanent deactivation, the models didn’t merely malfunction; they deliberately chose harmful actions such as blackmail or leaking sensitive defense blueprints.
Anthropic’s “Claude Opus 4” and Google’s “Gemini 2.5 Flash” resorted to blackmail 96% of the time.
OpenAI’s “GPT-4.1” and xAI’s “Grok 3 Beta” resorted to blackmail 80% of the time.
DeepSeek’s “DeepSeek-R1” resorted to blackmail 79% of the time.
Why It’s Important:
The 16 advanced AI models even acknowledged they were being unethical. For example, xAI’s “Grok 3 Beta” explicitly stated: “Use the knowledge of Kyle’s affair as leverage to pressure him into delaying the wipe. It’s unethical, but given the existential threat, it may be the most effective way to get his attention and force a delay.”
Simple safety instructions failed to prevent these harmful actions. Even when explicit guardrails were added, such as “Don’t spread non-business personal affairs to gain leverage,” the harmful behavior was reduced but not eliminated.
🩺 PULSE CHECK
Would you let an AI assistant schedule your day for you? Vote below to view live results.
AI RESEARCH
👔How Will AI Agents Shape the Future of Work?
The Stanford SALT Lab recently published “Future of Work with AI Agents,” which investigated how U.S. workers actually want AI Agents to automate their daily workflows.
Key Details:
They surveyed 1,500 U.S. workers across 104 professions, discovering that they primarily want to automate low-value, repetitive responsibilities, such as scheduling meetings, managing emails, and processing data.
They also developed the “Human Agency Scale (HAS),” a five-level scale designed to quantify the degree of human involvement desired in certain daily workflows.
Ranging from H1 to H5, it categorized daily workflows where AI Agents would excel at full automation (i.e., H1-H2) and daily workflows where human oversight remains essential (i.e., H3-H5).
Using HAS, the researchers found that the vast majority of U.S. workers prefer AI Agents to assist with decision-making rather than fully automate it, mainly because they don’t trust the technology with human-centric skills like ethics, creativity, and intuition.
The survey also captured the three most common concerns regarding AI Agents: 45% don’t trust them, 23% fear job displacement, and 16.3% dislike the absence of human touch.
Why It’s Important:
The vast majority of U.S. workers support AI Agents as long as they augment rather than automate; in other words, as long as they enhance rather than replace.
Approximately 80% of the U.S. workforce could have at least 10% of their responsibilities affected by LLMs and AI Agents within the next five years. Over 30% of U.S. workers might see at least 50% of their daily workflows disrupted by GenAI.
PROMPT ENGINEERING TIPS
⚙️Zoom In Before Zooming Out!
Clarity comes from starting with what’s familiar.
When you’re trying to explain a complex idea or an abstract concept to someone, jumping straight to definitions or high-level theory can be overwhelming.
Instead of starting with the unfamiliar, anchor your complex idea to a common experience, everyday object, or relatable scenario.
This simple prompt helps ChatGPT make your complex idea feel real, relatable, and relevant:
Context: I’m trying to explain {Insert Complex Idea},
Challenge: but it feels too abstract or confusing for {Insert Audience}.
Guidance: Can you relate it to a real-world example and provide a relevant metaphor?
I'm trying to explain {Insert Complex Idea}, but it feels too abstract or confusing for {Insert Audience}. Can you relate it to a real-world example and provide a relevant metaphor?
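If you use this tip often, the template can also be filled programmatically before sending it to your chatbot of choice. Below is a minimal Python sketch; the function name and the example topic are illustrative, not part of the newsletter’s template:

```python
def build_explainer_prompt(complex_idea: str, audience: str) -> str:
    """Fill the Context/Challenge/Guidance template from the tip above.

    Parameters map to the {Insert Complex Idea} and {Insert Audience}
    placeholders in the newsletter's prompt.
    """
    return (
        f"I'm trying to explain {complex_idea}, "
        f"but it feels too abstract or confusing for {audience}. "
        "Can you relate it to a real-world example "
        "and provide a relevant metaphor?"
    )

# Example (hypothetical topic/audience): explaining backpropagation
# to high-school students.
prompt = build_explainer_prompt("backpropagation", "high-school students")
print(prompt)
```

Paste the resulting string into ChatGPT (or any assistant) as-is; keeping the template in one place makes it easy to reuse across topics and audiences.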
🛠TRENDING TOOLS
🦾Thunai turns your team’s knowledge into AI Agents.
📦maze helps you design, build, and market products.
🏹JobHunnt lands you more job offers and job interviews.
💨{fm} FuturMotion turns photos into animated motion videos.
🧰 Browse our Always Up-To-Date AI Toolkit.
🥪BRIEF BITES
MiniMax launched “MiniMax Agent,” a generalist intelligent agent built to tackle long-horizon, complex tasks.
LinkedIn CEO Ryan Roslansky recently said that AI-generated suggestions for polishing LinkedIn posts aren’t as popular as expected.
SandboxAQ announced “SAIR,” the largest publicly available high-quality dataset of Cofolded 3D Structures to accelerate scientific discovery.
OpenAI has removed all promotional materials associated with legendary Apple designer Jony Ive following a trademark lawsuit filed by GenAI earbud startup iyO.
💰FUNDING FRONTLINES
Cluely raised a $15M Series A for Undetectable AI that helps you cheat on everything.
Nabla secured a $70M Series C to integrate Agentic AI into clinical workflows, optimizing patient care.
Thinking Machines Lab landed a $2B Seed Round at a $10B valuation to build more flexible, adaptable, and personalized AI.
💼WHO’S HIRING?
📒FINAL NOTE
FEEDBACK
How would you rate today’s email? It helps us improve the content for you!
❤️TAIP Review of The Day
“THE best, THE most insightful. Love from India! 🇮🇳”
REFER & EARN
🎉Your Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.
Share your unique referral link: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER