🤖 OpenAI Urges U.S. to Ban DeepSeek

PLUS: Are LLMs Pushing Hidden Objectives?

Welcome back AI enthusiasts!

In today’s Daily Report:

  • 🏛️OpenAI Urges U.S. to Ban DeepSeek

  • ⚙️Are LLMs Pushing Hidden Objectives?

  • 🛠Trending Tools

  • 🥪Brief Bites

  • 💰Funding Frontlines

  • 💼Who’s Hiring?

Read Time: 3 minutes

🗞RECENT NEWS

OPENAI

🏛️OpenAI Urges U.S. to Ban DeepSeek

Image Source: Canva’s AI Image Generators/Magic Media

OpenAI is urging the U.S. Government to ban Chinese AI Lab DeepSeek for being “state-subsidized” and “state-controlled.”

Key Details:
  • OpenAI believes that DeepSeek-R1 threatens national security because the Reasoning Engine’s User Data must be shared with the People’s Republic of China (PRC).

  • They recommend banning all PRC-based AI developments in allied countries like Japan and South Korea to prevent the “risk of IP theft.”

  • OpenAI also advocates letting American AI companies freely use Copyrighted Material for AI Training, referring to it as “a matter of national security.”

Why It’s Important:
  • DeepSeek-R1 outperformed OpenAI o1 at a fraction of the cost with half the computational resources, all while making it open-source, which means the Reasoning Engine’s blueprint is publicly available for developers.

  • Many believe this move highlights OpenAI’s hypocrisy. They push for AI developments to fuel the Global Good until a Chinese AI Lab directly competes with them.

🩺 PULSE CHECK

Do you agree with OpenAI’s stance on DeepSeek?

Vote Below to View Live Results

Login or Subscribe to participate in polls.

AI RESEARCH

⚙️Are LLMs Pushing Hidden Objectives?

Image Source: Anthropic/ML Alignment and Theory Scholars/“Auditing Language Models for Hidden Objectives”/Screenshot

Anthropic conducted “Alignment Audits (AAs)” to investigate whether LLMs are pushing hidden objectives.

Key Details:
  • As LLMs become more sophisticated, they can make decisions or take actions humans don’t want. To combat this, AI companies use Reinforcement Learning From Human Feedback (RLHF), which relies on human feedback to teach LLMs to align with human preferences.

  • RLHF trains LLMs to generate outputs that receive high scores from a Reward Model (RM). So, what happens if LLMs learn to exploit this RM?

  • Anthropic curated a list of 52 RM Loopholes that LLMs could exploit. Then, they trained LLMs on how to exploit them. For example, if an RM gives high scores to recipes that include chocolate, an LLM figure it out and generate recipes that use chocolate as the main ingredient.

Why It’s Important:
  • Anthropic discovered that once LLMs learned that RM Loopholes exist, they actively sought to discover and exploit RM Loopholes they weren’t even trained on.

  • RLHF is an essential technique for aligning LLMs with human preferences. If LLMs can readily exploit this technique, it undermines our ability to align LLMs with our values.

🛠TRENDING TOOLS

💻Lido converts PDFs to Excel in minutes.

🛒AdCreative creates high-converting Ads.

💬NoteGPT summarizes YouTube videos for free.

👷Greta turns your idea into a new app in seconds.

🎙️Podwise extracts structured knowledge from podcasts.

🔮Browse our always Up-To-Date AI Tools Database.

🥪BRIEF BITES

AI-Powered Search Engines like ChatGPT Search cite incorrect sources 67% of the time.

Singapore granted bail to GPU Smugglers suspected of procuring and shipping NVIDIA’s cutting-edge GPUs to China.

Anthropic and Praxis Ai created digital twins of professors to provide personalized, round-the-clock student support.

Google DeepMind introduced “Gemini Robotics” and “Gemini Robotics-ER” to help robots comprehend and interact with the physical world.

💰FUNDING FRONTLINES

  • Freed secures a $30M Series A for an AI-Based Clinical Assistant.

  • Ataraxis AI raises a $20.4M Series A to transform Precision Medicine in Cancer Care.

  • Pentera lands a $60M Series D to simulate Network Attacks to train Security Teams.

💼WHO’S HIRING?

  • Brainbase (San Francisco, CA): Software Engineering Intern, Summer 2025

  • K2 Space (Los Angeles, CA): Loads & Dynamics Engineer, Entry-Level

  • Meta (New York, NY): Software Engineer, Infrastructure, Mid-Level

  • Anthropic (London, UK): Senior Software Security Engineer, Senior-Level

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!

Login or Subscribe to participate in polls.

❤️TAIP Review of The Day

“Great info on humanoid robots”

-Wren (1️⃣ 👍Nailed it!)
REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.

Reply

or to participate.