🤖 Meta’s AI Team Silently Releases NotebookLlama

PLUS: OpenAI Kills AGI Readiness Team, The White House’s New AI National Security Objectives

Welcome back, AI enthusiasts!

In today’s AI Report:

  • 🦙Meta’s AI Team Silently Releases NotebookLlama

  • 🪦OpenAI Kills AGI Readiness Team

  • 🏛The White House’s New AI National Security Objectives

  • 🛠Trending Tools

  • 💰Funding Frontlines

  • 💼Who’s Hiring?

Read Time: 3 minutes

🗞RECENT NEWS

META

🦙Meta’s AI Team Silently Releases NotebookLlama

Image Source: Cleo Abram/“The Future Mark Zuckerberg Is Trying To Build”/YouTube/Screenshot

Meta’s AI Team silently released NotebookLlama, an open-source version of Google’s NotebookLM.

Key Details:
  • NotebookLM is an “AI-first notebook” that analyzes your notes to offer suggestions, provide critiques, or brainstorm new ideas.

  • Google also added “Audio Overviews” to NotebookLM, which transforms your notes into engaging, podcast-style conversations between two AI hosts.

  • NotebookLlama is an open-source project that leverages Large Language Models (LLMs) and Text-to-Speech (TTS) to automate the creation of a podcast.

  • It employs Parameter-Efficient Fine-Tuning (PEFT), a technique that lets developers adapt LLMs to specific tasks without retraining the entire model from scratch (see the sketch below).

  • NotebookLlama also supports Multi-Turn Conversations, which means it’s designed to handle complex dialogues that require multiple exchanges.

🚨Access it on GitHub for free here.
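For readers curious what PEFT looks like in practice, here’s a minimal sketch using Hugging Face’s `peft` library with LoRA adapters. The model ID and hyperparameters are illustrative assumptions, not NotebookLlama’s actual training recipe:

```python
# Minimal PEFT (LoRA) sketch: adapt a base LLM for a specific task
# without updating all of its weights. Model ID and hyperparameters
# are illustrative assumptions, not NotebookLlama's actual recipe.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumed example model
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# LoRA injects small trainable matrices into the attention projections,
# so only a tiny fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

The wrapped model can then be fine-tuned with a standard training loop or trainer; only the LoRA adapter weights change, which is what keeps PEFT cheap.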

OPENAI

🪦OpenAI Kills AGI Readiness Team

Image Source: Canva’s AI Image Generators/Magic Media

OpenAI reportedly dissolved the company’s AGI Readiness Team, which was dedicated to preparing for Artificial General Intelligence (AGI).

Key Details:
  • AGI is a theoretical form of AI that can perform tasks as well as humans and exhibit human-like traits such as intuition, sentience, consciousness, critical thinking, and emotional awareness.

  • The AGI Readiness Team was tasked with creating protocols to effectively manage, mitigate, and minimize the harmful effects of AGI.

  • In a Substack blog post, Miles Brundage, Senior Advisor for AGI Readiness, announced his departure from OpenAI: “I’ve decided that I want to impact and influence AI developments from outside the industry.”

  • Brundage added, “Neither OpenAI nor any frontier AI lab is ready” to govern AGI. He’s unsure if anyone will be “on track to be ready at the right time.”

  • Part of the AGI Readiness Team will now help the company’s first Chief Economist, Dr. Aaron “Ronnie” Chatterji, examine how building AI infrastructure impacts long-term labor market trends.

Why It’s Important:
  • OpenAI claims to prioritize AI safety initiatives but has repeatedly dissolved safety-focused teams, sending mixed signals. For example, it recently dissolved The Superalignment Team, which focused on creating ways to govern, steer, and reduce the long-term risks of “superintelligent” AI models.

  • After resigning as Co-Leader of The Superalignment Team, Jan Leike posted on X: “Safety culture and processes have taken a backseat to shiny products.”

🩺 PULSE CHECK

Has safety culture taken a back seat to shiny products at OpenAI?

Vote Below to View Live Results


AI IN GOVERNMENT

🏛The White House’s New AI National Security Objectives

The White House released a “National Security Memorandum (NSM)” outlining how to responsibly harness the power of AI to advance U.S. foreign policy and national security interests.

“NSM” presents the first comprehensive strategy for governing AI use in national security scenarios. For example, it instructs the Department of Homeland Security (DHS) to attract individuals with technical expertise in AI domains.

It also calls for protecting AI assets from foreign intelligence threats by monitoring research collaborations, scrutinizing investment schemes, and countering advanced espionage. For example, it instructs the Committee on Foreign Investment in the United States (CFIUS) to consider whether business transactions allow foreign actors to access proprietary information on AI training techniques or AI hardware developments that “shed light on how to create and effectively use powerful AI systems.”

Regarding risk assessments, “NSM” designates the AI Safety Institute (AISI) within the National Institute of Standards and Technology (NIST) as the point of contact with the private sector, facilitating voluntary testing frameworks for the safety, security, and trustworthiness of AI models. These voluntary testing frameworks assess risks related to cybersecurity and chemical weapons.

OpenAI published a blog post alongside “NSM,” breaking down how the company’s mission aligns with democratic AI leadership: “We believe a democratic vision for AI is essential to unlocking its full potential and ensuring its benefits are broadly shared.”

🚨Explore President Joe Biden’s blueprint for an AI Bill of Rights here.

🛠TRENDING TOOLS

🎙PodLM turns any content into a podcast.

🗂Folderr streamlines tasks and manages files.

🍿Overlap transforms long videos into short clips.

📊Tilores unifies scattered customer data in real time.

🎬FigFlow streamlines your design-to-development workflow.

🔮Browse our always Up-To-Date AI Tools Database.

💰FUNDING FRONTLINES

  • Google.org pledges $15M in AI training grants for the government workforce.

  • CrewAI lands an $18M Series A to use third-party AI models to automate business tasks.

  • PhaseShift Technologies raises a $4.1M Seed Round to commercialize advanced engineering materials for energy sectors.

💼WHO’S HIRING?

  • Splunk (Boulder, CO): Applied Scientist Intern, Summer 2025

  • Microsoft (Redmond, WA): Research Intern, AI for Domains, Summer 2025

  • IBM (San Jose, CA): Research Scientist, Human-Centered Generative AI (GenAI), Summer 2025

  • Hewlett Packard Enterprise (HPE) (Spring, TX): AI Junior Consultant, New College Grad 2024

  • Nvidia (Santa Clara, CA): Formal Verification Engineer, New College Grad 2025

🤖PROMPT OF THE DAY

RISK-TO-REWARD RATIO

🎲Risk Management Strategy

Develop a comprehensive Risk Management Strategy for [Small Business] with [Product/Service] in [Industry] with [Target Audience].

Focus on [Key Risk Areas] and include steps for identifying, assessing, and mitigating them.

Small Business = [Insert Here]

Product/Service = [Insert Here]

Industry = [Insert Here]

Target Audience = [Insert Here]

Key Risk Areas = [Insert Here]
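If you’d rather run this prompt programmatically than paste it into a chat window, here’s a minimal sketch using the OpenAI Python SDK. It assumes the `openai` package is installed and `OPENAI_API_KEY` is set in your environment; the filler values and model name are illustrative, so swap in your own details:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Today's prompt template with named placeholders.
template = (
    "Develop a comprehensive Risk Management Strategy for {business} "
    "with {product} in {industry} with {audience}. "
    "Focus on {risks} and include steps for identifying, assessing, and mitigating them."
)

# Illustrative filler values; replace with your own business details.
prompt = template.format(
    business="a 10-person artisan coffee roastery",
    product="subscription coffee deliveries",
    industry="specialty food and beverage",
    audience="remote workers who brew at home",
    risks="supply chain disruptions and shipping delays",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed example model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```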

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❤️TAIP Review of The Day

“It’s just solid content every time.”

- Connor (1️⃣👍 Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving ⚙️Ultimate Prompt Engineering Guide.

Refer 3 friends to learn how to 👷‍♀️Build Custom Versions of OpenAI’s ChatGPT.
