The AI Pulse

šŸ¤– OpenAI Releases ā€œGPT-4.5ā€

PLUS: LLMs ā€œThinkā€ Like Developers When Coding, New dLLMs Generate Over 1,000 Tokens per Second

Welcome back AI enthusiasts!

In todayā€™s Daily Report:

  • āš™ļøOpenAI Releases ā€œGPT-4.5ā€

  • šŸ§ LLMs ā€œThinkā€ Like Developers When Coding

  • šŸ¤ÆNew dLLMs Generate Over 1,000 Tokens per Second

  • šŸ› Trending Tools

  • šŸ„ŖBrief Bites

  • šŸ’°Funding Frontlines

  • šŸ’¼Whoā€™s Hiring?

Read Time: 3 minutes

šŸ—žRECENT NEWS

OPENAI

āš™ļøOpenAI Releases ā€œGPT-4.5ā€

Image Source: Canvaā€™s AI Image Generators/Magic Media

OpenAI released ā€œGPT-4.5,ā€ the companyā€™s largest and most knowledgeable AI model yet.

Key Details:
  • ā€œGPT-4.5ā€ showcases better writing capabilities, improved world knowledge, and what OpenAI calls a ā€œrefined personality.ā€

  • OpenAI CEO Sam Altman explained that itā€™s the first AI model that ā€œfeels like talking to a thoughtful person to me.ā€

  • OpenAI warned that ā€œGPT-4.5ā€ isnā€™t a frontier AI model, but itā€™s ā€œOpenAIā€™s largest LLM.ā€

  • It was fine-tuned using Reinforcement Learning From Human Feedback (RLHF), which uses human ratings of model outputs to teach LLMs to align with human preferences.

  • Itā€™s currently available to ChatGPT Pro Plans and developers across all Paid API Tiers, with ChatGPT Plus, Team, and Enterprise Plans getting access next week.

Why Itā€™s Important:
  • ā€œGPT-4.5ā€ is super expensive to run, with developers across all Paid API Tiers paying 30x the input cost and 15x the output cost to use it.

  • ā€œWeā€™re out of GPUs,ā€ said Altman. ā€œWeā€™ll add tens of thousands of GPUs next weekā€¦This isnā€™t how we want to operate, but itā€™s hard to perfectly predict growth surges that lead to GPU shortages.ā€

šŸ©ŗ PULSE CHECK

Are Tech Giants becoming overreliant on GPUs?


AI RESEARCH

šŸ§ LLMs ā€œThinkā€ Like Developers When Coding

Image Source: FAIR at Meta/ā€œSWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolutionā€/Screenshot

Metaā€™s FAIR Team developed ā€œSWE-RL,ā€ which enhances the reasoning abilities of LLMs to help them tackle real-world coding tasks.

Key Details:
  • ā€œSWE-RLā€ relies on Reinforcement Learning (RL), which teaches LLMs to learn the optimal behavior in an environment to obtain the maximum reward.

  • RL consists of four components:

    1. Learner: The LLMs being trained.

    2. Environment: The real-world coding tasks the LLMs interact with.

    3. Policy: The strategy the LLMs follow to choose actions.

    4. Feedback: The Positive Rewards or Negative Penalties the LLMs observe after taking action.

  • Positive Rewards are given when the LLMs successfully write, debug, and test code.

  • Negative Penalties are given when the LLMs generate inefficient, error-prone code.
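The feedback loop above can be sketched as a toy reward function that scores generated code: a positive reward when the code runs and passes a test, a negative penalty when it errors out or gives a wrong answer. This is just an illustration of the reward/penalty idea, not Metaā€™s actual ā€œSWE-RLā€ reward, and the function names here are hypothetical.

```python
# Toy sketch of RL-style feedback on generated code -- not Meta's
# actual SWE-RL reward; names and values here are hypothetical.

def reward(candidate_code: str, test_input: int, expected: int) -> float:
    """Positive reward if the generated code runs and passes the test,
    negative penalty if it errors out or returns the wrong answer."""
    namespace = {}
    try:
        exec(candidate_code, namespace)          # "run" the generated code
        result = namespace["solve"](test_input)  # assumes it defines solve()
    except Exception:
        return -1.0                              # penalty: error-prone code
    return 1.0 if result == expected else -0.5   # reward passing, penalize wrong

good = "def solve(x):\n    return x * 2"
bad = "def solve(x):\n    return x /"  # syntax error

print(reward(good, 3, 6))  # positive reward: 1.0
print(reward(bad, 3, 6))   # negative penalty: -1.0
```

Over many such episodes, the policy is nudged toward code that earns rewards, which is how the model learns the test-debug-refine loop described next.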

Why Itā€™s Important:
  • When developers tackle real-world coding tasks, itā€™s not just about writing code; itā€™s about testing, debugging, and refactoring that code over and over.

  • ā€œSWE-RLā€ helps LLMs not only generate code but also ā€œthinkā€ like developers by constantly taking in feedback to adjust code.

INCEPTION LABS

šŸ¤ÆNew dLLMs Generate Over 1,000 Tokens per Second

Image Source: Inception Labs/ā€œIntroducing Mercury, the first commercial-scale diffusion large language modelā€/Screenshot

Inception Labs developed ā€œMercury,ā€ a family of diffusion Large Language Models (dLLMs) that generate text faster than ever.

Traditional LLMs generate text from left to right, one Token at a time. In other words, a Token canā€™t be generated until all the text that comes before it has been generated.

Tokens are the smallest units of data LLMs use to process and generate text, much like how we break sentences down into words or characters; you can think of Tokens as syllables. As a rule of thumb, a million Tokens equals roughly 750,000 words.

Instead of generating one Token at a time, dLLMs generate entire blocks of Tokens in parallel for increased speed, efficiency, and control. Theyā€™re 10x faster and 10x cheaper than traditional LLMs; put another way, you can serve a model 2x the size at the same latency and cost.
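The contrast between left-to-right and block-parallel generation can be sketched in a few lines. This is toy code only (real dLLMs iteratively denoise whole blocks rather than simply copying them), but it shows why emitting blocks instead of single Tokens cuts the number of sequential steps:

```python
# Toy contrast between left-to-right decoding and block-parallel
# generation. Real dLLMs iteratively refine whole blocks of tokens;
# this sketch only counts sequential steps.

tokens_to_emit = ["Diffusion", "models", "generate", "text", "fast"]

def autoregressive(tokens):
    """Traditional LLM: each token waits on every token before it."""
    out = []
    for t in tokens:          # one token per step -> len(tokens) steps
        out.append(t)
    return out, len(tokens)

def block_parallel(tokens, block_size=5):
    """dLLM-style: a whole block of tokens is produced per step."""
    out, steps = [], 0
    for i in range(0, len(tokens), block_size):
        out.extend(tokens[i:i + block_size])  # block emitted together
        steps += 1
    return out, steps

print(autoregressive(tokens_to_emit)[1])   # -> 5 sequential steps
print(block_parallel(tokens_to_emit)[1])   # -> 1 sequential step
```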

šŸ› TRENDING TOOLS

šŸŽ“DeepTutor is your personalized AI tutor.

šŸ“¬Forage Mail declutters your email inbox.

šŸ“½ļøTopaz Labs brings old videos back to life.

šŸ‘·Basalt integrates AI into your product in seconds.

šŸ¬getcaramel.ai turns your ideas into revenue-generating ads.

šŸ”®Browse our always Up-To-Date AI Tools Database.

šŸ„ŖBRIEF BITES

Hugging Face launched ā€œFastRTC,ā€ an open-source Python library that helps developers build real-time audio and video AI apps.

Vevo Therapeutics created ā€œTahoe-100M,ā€ the worldā€™s largest single-cell dataset that maps out 60,000 drug-cell interactions.

NVIDIA CEO Jensen Huang said that nearly everyone would benefit from having a personalized AI-powered tutor with them at all times.

IBM unveiled ā€œGranite 3.2,ā€ a family of small AI models that deploy Conditional Reasoning, Time Series Forecasting, and Document Vision to tackle Enterprise workloads.

šŸ’°FUNDING FRONTLINES

  • Hyperlume closes a $12.5M Seed Round to transform AI Data Center Connectivity.

  • Bridgetown Research secures a $19M Series A to build AI Agents for Enterprise Research.

  • Variational AI raises a $5.5M Seed Extension for AI-driven Small Molecule Drug Discovery.

šŸ’¼WHOā€™S HIRING?

  • Trepp (New York, NY): Data Science Intern, Summer 2025

  • CACI (Sterling, VA): Software Engineering Intern, Summer 2025

  • K2 Space (Los Angeles, CA): Loads and Dynamics Engineer, Entry-Level

  • Advarra (Remote): Data Scientist, Mid-Level

  • ThoughtSpot (Mountain View, CA): Senior Staff AI Architect, Senior-Level

šŸ“’FINAL NOTE

FEEDBACK

How would you rate todayā€™s email?

It helps us improve the content for you!


ā¤ļøTAIP Review of The Day

ā€œsolid, holistic coverage of AI space! šŸ«¶ā€

-Maricela (1ļøāƒ£ šŸ‘Nailed it!)

REFER & EARN

šŸŽ‰Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving āš™ļøUltimate Prompt Engineering Guide.
