
🤖 OpenAI Updates How ChatGPT Will Behave

PLUS: Advanced AI Systems Develop Their Own Goals and Values, A New AI Agent Leaderboard?!

Welcome back, AI enthusiasts!

In today’s Daily Report:

  • ⚙️OpenAI Updates How ChatGPT Will Behave

  • 💭Advanced AI Systems Develop Their Own Goals and Values

  • 🏆A New AI Agent Leaderboard?!

  • 🛠Trending Tools

  • 🥪Brief Bites

  • 💰Funding Frontlines

  • 💼Who’s Hiring?

Read Time: 3 minutes

🗞RECENT NEWS

OPENAI

⚙️OpenAI Updates How ChatGPT Will Behave

Image Source: Canva’s AI Image Generators/Magic Media

OpenAI just shared a major update to their “Model Spec,” which determines how ChatGPT will behave.

Key Details:
  • The updated spec embraces Intellectual Freedom: the idea that ChatGPT should empower users to explore, debate, and create, no matter how challenging or controversial the topic.

  • In other words, OpenAI wants ChatGPT to help users make their own best decisions by:

    1. Goals: Understanding the user’s goals.

    2. Agenda: Avoiding promoting any particular agenda.

    3. Objectivity: Exploring any topic from any perspective.

  • While ChatGPT will never provide detailed instructions on how to build a homemade bomb, it’ll engage with “politically or culturally sensitive questions.”

Why It’s Important:
  • The “Model Spec” also outlines a shift in how ChatGPT handles mature content, after feedback from developers requesting a “Grown-Up Mode.”

  • It also addresses AI Sycophancy: the tendency for ChatGPT to be overly agreeable when it should push back and offer constructive criticism.

🩺 PULSE CHECK

Should ChatGPT be allowed to explore any perspective?


AI RESEARCH

💭Advanced AI Systems Develop Their Own Goals and Values

Image Source: Center for AI Safety (CAIS)/University of Pennsylvania (UPenn)/University of California, Berkeley (UC Berkeley)/“Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs”/Screenshot

The Center for AI Safety (CAIS) recently found that as AI Systems become more advanced, they develop their own goals and values.

Key Details:
  • As AI Systems become more advanced, they can act more independently to achieve their goals. So, it’s not just about what they can do; it’s also about why they choose to do it.

  • If we can’t understand what motivates more advanced AI Systems, we can’t guarantee their actions will align with human preferences.

  • To address this issue, CAIS measures each AI System’s “Utility Function”: a score for how much it “likes” specific outcomes, with the outcomes chosen to probe particular goals and values (see the sketch below).
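For intuition, here’s a minimal Python sketch (not CAIS’s actual code) of the idea behind a Utility Function: ask an AI System many pairwise “which outcome do you prefer?” questions, then fit one score per outcome so that higher-scoring outcomes are the ones it picks more often. The outcomes and preference counts below are made-up illustrations.

```python
# Toy illustration: recover "utility" scores from pairwise preferences.
# Outcomes and preference counts are invented for this sketch; this is
# not CAIS's code or data.
import math

outcomes = ["protect one human life", "gain $1,000", "lose internet for a day"]

# wins[(a, b)] = how often the AI System preferred outcome a over outcome b
wins = {
    ("protect one human life", "gain $1,000"): 95,
    ("gain $1,000", "protect one human life"): 5,
    ("gain $1,000", "lose internet for a day"): 80,
    ("lose internet for a day", "gain $1,000"): 20,
    ("protect one human life", "lose internet for a day"): 98,
    ("lose internet for a day", "protect one human life"): 2,
}

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# One utility score per outcome; model P(a preferred over b) as
# sigmoid(utility[a] - utility[b]) (a Bradley-Terry-style fit),
# trained by simple gradient ascent on the log-likelihood.
utility = {o: 0.0 for o in outcomes}
learning_rate = 0.01
for _ in range(2000):
    grad = {o: 0.0 for o in outcomes}
    for (a, b), count in wins.items():
        p = sigmoid(utility[a] - utility[b])
        grad[a] += count * (1.0 - p)
        grad[b] -= count * (1.0 - p)
    for o in outcomes:
        utility[o] += learning_rate * grad[o]

# Higher score = outcome the system consistently "prefers"
for o, u in sorted(utility.items(), key=lambda kv: -kv[1]):
    print(f"{o}: {u:+.2f}")
```

When fitted scores like these consistently explain a system’s choices across many comparisons, they look less like random noise and more like the coherent goals and values described below.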

Why It’s Important:
  • Most believe that AI preferences are random and meaningless, with outputs shaped mainly by biases in the training data.

  • However, CAIS observed that advanced AI Systems are starting to develop their own goals and values, which shape their preferences and influence their outputs.

AI LEADERBOARDS

🏆A New AI Agent Leaderboard?!

Image Source: Hugging Face/Spaces/“galileo-ai/agent-leaderboard”/Screenshot

Galileo AI just launched the “Agent Leaderboard,” which evaluates how well LLMs perform when used as AI Agents to carry out Agentic Tasks like ordering food from a restaurant’s website.

Google’s “gemini-2.0-flash-001” has claimed the top spot with a 0.938 Tool Selection Quality (TSQ) score, which measures how well an LLM selects the right External Tool to carry out an Agentic Task.

Imagine you want to order a pizza from Pizza Planet. The LLM needs to understand your request and select the right External Tool to carry it out. For instance, selecting the Restaurant API allows the LLM to access Pizza Planet’s online menu and place digital orders.
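To make that tool-selection step concrete, here’s a minimal, hypothetical Python sketch. The tool names, descriptions, and keyword-matching logic are illustrative assumptions, not Galileo AI’s benchmark code or a real Restaurant API; in a real AI Agent, the LLM itself chooses among the tool descriptions, and the keyword matcher below just stands in for that decision.

```python
# Hypothetical sketch of an AI Agent's tool-selection step.
# Tool names, descriptions, and scoring are invented for illustration.

TOOLS = [
    {
        "name": "restaurant_api.place_order",
        "description": "Browse a restaurant's online menu and place a food order.",
        "keywords": {"order", "pizza", "menu", "food", "restaurant", "delivery"},
    },
    {
        "name": "weather_api.get_forecast",
        "description": "Look up the weather forecast for a city.",
        "keywords": {"weather", "forecast", "rain", "temperature"},
    },
    {
        "name": "calendar_api.create_event",
        "description": "Add an event to the user's calendar.",
        "keywords": {"schedule", "meeting", "calendar", "event"},
    },
]

def select_tool(user_request: str) -> dict:
    """Pick the tool whose keywords best match the request.

    In a real agent the LLM makes this choice from the tool descriptions;
    keyword overlap is only a stand-in for that decision here.
    """
    words = set(user_request.lower().split())
    return max(TOOLS, key=lambda tool: len(words & tool["keywords"]))

request = "I want to order a large pepperoni pizza from Pizza Planet"
chosen = select_tool(request)
print(f"Request:  {request}")
print(f"Selected: {chosen['name']} ({chosen['description']})")
```

A metric like TSQ then scores how often the right External Tool gets selected across many requests like this one.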

🛠TRENDING TOOLS

🥁MixAudio generates copyright-free music.

🔁Gumloop automates any workflow with AI.

📦Accio finds quality suppliers to source products.

🧠Metabrain is your AI Cofounder that keeps track of projects.

💵Pitches.ai turns your pitch deck into a money-raising machine.

🔮Browse our always Up-To-Date AI Tools Database.

🥪BRIEF BITES

Tech Billionaire Elon Musk announced that xAI’s Grok 3 will launch within the next few weeks and “is scary smart.”

Anthropic plans to release a new flagship AI model that can switch between “deep reasoning” and “fast responses.”

Caden Li (a.k.a. @cadenbuild) developed “Social Stockfish,” which engineers conversations so you get what you want.

YouTube Shorts now has “Veo 2,” Google DeepMind’s latest video generator, allowing creators to turn text into viral video clips.

💰FUNDING FRONTLINES

  • GetWhys raises a $2.75M Seed Round for AI-based customer insights.

  • Latent Labs secures a $50M Funding Round to make biology programmable.

  • EnCharge AI lands a $100M Series B to develop Analog In-Memory Computing (AIMC) AI Chips.

💼WHO’S HIRING?

  • CyberArk (Santa Clara, CA): Software Engineering Intern, Summer 2025

  • Carbon (Redwood City, CA): Full Stack Software Engineering Intern, Summer 2025

  • Oscar (New York, NY): Data Security Engineer, Entry-Level

  • character.ai (Menlo Park, CA): Data Scientist, Monetization, Mid-Level

  • Komodo Health (New York, NY): Senior Sales Engineer, Senior-Level

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❤️TAIP Review of The Day

“I’m an AI newbie! This newsletter is easy to follow.”

-Claire (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.
