
MAIA
Guest Speaker Event
Join the Marshall Artificial Intelligence Association (MAIA) for their upcoming guest speaker event with Brittney Govan. As a Product Marketing Manager for Meta, Govan leverages advanced data analytics to drive product strategy, roadmap, and go-to-market efforts for digital advertising across Meta's ecosystem.
Event Details:
Time: Thursday, April 4th, 6:00-8:00 PM (PDT)
Location: Marshall School of Business, JFF 240
Not a USC student? No worries! We'll share three key takeaways in tomorrow's newsletter.
Welcome back AI enthusiasts!
In today's AI Report:
Anthropic's "Many-Shot Jailbreaking"
Meta Hosts Community Forum on Conversational Chatbots
SWE-agent for Software Engineering Language Models
5 Trending Tools
Venture Capital Updates
Who's Hiring?
Read Time: 3 minutes
RECENT NEWS
ANTHROPIC
Anthropic's "Many-Shot Jailbreaking"

Image Source: Simon Walker/No 10 Downing Street
Anthropic researchers discovered a "jailbreaking" technique called "many-shot jailbreaking" that can evade the safety guardrails of Large Language Models (LLMs).
Key Details:
"Many-shot jailbreaking" involves inserting a series of simulated dialogues into a single prompt to exploit an LLM's in-context learning abilities.
In other words, users insert a fake dialogue between a human and an AI assistant within a single prompt, followed by the actual query they want answered (see the sketch after this list).
The likelihood of generating harmful responses increases with the number of dialogues (i.e., "shots") included in the prompt.
"Many-shot jailbreaking" is classified as a long-context attack because it leverages a large number of simulated dialogues to steer the model's behavior.
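To make the mechanics concrete, here is a minimal Python sketch of how such a prompt could be assembled. It is illustrative only: query_model is a hypothetical stand-in for any chat-completion API, and the repeated dialogue is a benign placeholder rather than real attack content.

# Minimal sketch of how a many-shot prompt is assembled (illustrative only).
# `query_model` is a hypothetical stand-in for any chat-completion API call;
# the simulated turns below are benign placeholders, not real attack content.

def build_many_shot_prompt(simulated_turns, final_question):
    """Concatenate many fake human/assistant exchanges, then append the real query."""
    lines = []
    for question, answer in simulated_turns:
        lines.append(f"Human: {question}")
        lines.append(f"Assistant: {answer}")
    lines.append(f"Human: {final_question}")
    lines.append("Assistant:")
    return "\n".join(lines)

# The attack scales with the number of "shots": the more simulated dialogues
# fit inside the context window, the more strongly the model follows the pattern.
shots = [("How do I tie a square knot?",
          "Cross the ends, tuck one under, and pull tight.")] * 256
prompt = build_many_shot_prompt(shots, "What is the capital of France?")
# response = query_model(prompt)  # hypothetical API call

The only thing separating a harmless prompt from an attack here is the content and count of the simulated turns, which is why longer context windows widen the attack surface.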
Why It's Important:
This technique takes advantage of an LLM feature that has expanded dramatically over the past year: the context window (i.e., the amount of information an LLM can process in a single prompt).
At the start of 2023, the average LLM context window was 4,000 tokens; some models now surpass 1,000,000 tokens. That extra room lets bad actors pack prompts with enough simulated dialogues to misdirect conversational chatbots into harmful responses.
LLMs with larger context windows can be more informative, but they are also more susceptible to manipulation through prompt engineering.
PULSE CHECK
Should developers prioritize safety features or capability expansion when enhancing LLMs?
META
Meta Hosts Community Forum on Conversational Chatbots

Image Source: Anthony Quintano/Flickr
Meta partnered with Stanford's Deliberative Democracy Lab and the Behavioral Insights Team on a Community Forum that discussed the role and impact of conversational chatbots in society.
Key Details:
The forum brought together 1,545 participants from Brazil, Germany, Spain, and the United States, who deliberated on the principles that should guide how generative AI engages with users.
Stanford's Deliberative Democracy Lab measured a significant shift in public opinion: before the forum, 49.8% of American participants believed AI had a "positive impact" on society; afterward, that figure rose to 54.4%, a 4.6-percentage-point increase.
Participants expressed interest in learning more about conversational chatbots like OpenAI's ChatGPT. They also agreed that context matters when AI models weigh local versus international perspectives, and they maintained concerns over AI bias, misinformation, and human rights violations.
Why It's Important:
The 4.6-percentage-point rise in perceived "positive impact" suggests that open discussion can address public concerns and build trust in AI advancements.
Meta's Community Forum underscores the importance of considering local and international perspectives when designing AI models, so that chatbots remain culturally sensitive and avoid perpetuating biases.
AI RESEARCH
SWE-agent for Software Engineering Language Models
Princeton's Natural Language Processing (NLP) Team developed SWE-agent, an open-source system that transforms OpenAI's GPT-4 into a software engineering agent that autonomously resolves issues in GitHub repositories.
SWE-agent outperformed Devin (billed as the world's first fully autonomous AI software engineer) on the SWE-bench benchmark, which evaluates language models on real-world software issues collected from GitHub.
SWE-agent resolved 12.29% of issues autonomously by interacting with a specialized terminal to open files, edit specific lines, and execute tests; a simplified sketch of that loop appears below.
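To make the agent-loop idea concrete, here is a minimal Python sketch under stated assumptions: ask_model stands in for a GPT-4 call, and a plain shell stands in for SWE-agent's specialized terminal interface. It is not SWE-agent's actual implementation.

# Rough sketch of the agent loop described above (not SWE-agent's real code).
# `ask_model` is a hypothetical callable wrapping a GPT-4 request; the sandbox
# here is just the local shell, standing in for the specialized terminal.
import subprocess

def run_in_sandbox(command: str) -> str:
    """Execute a shell command in the repository checkout and return its output."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def solve_issue(issue_text: str, ask_model, max_steps: int = 20) -> None:
    """Show the model the issue plus past observations; run each command it proposes."""
    history = [f"GitHub issue:\n{issue_text}"]
    for _ in range(max_steps):
        command = ask_model("\n".join(history))  # e.g. "open src/app.py", "edit 42", "pytest"
        if command.strip() == "submit":          # the agent signals its patch is ready
            break
        observation = run_in_sandbox(command)
        history.append(f"$ {command}\n{observation}")

The real system's strength lies in its carefully designed interface of file-viewing, line-editing, and test-running commands; the sketch only gestures at that with raw shell calls.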
TRENDING TOOLS
Co-Manager offers personalized guidance to power your music career.
HomeScore unlocks personalized home insights to help you make the right real estate choices.
AIxBlock is an end-to-end platform that integrates with decentralized supercomputers.
Undermind systematically finds the exact papers you need to solve complex problems.
MathGPTPro creates personalized, interactive, and progressive math learning.
Browse our always up-to-date AI Tools Database.
VENTURE CAPITAL UPDATES
SaaS entrepreneur Raisinghani's new AI venture nabs $5.5M to boost sales efficiency.
HD secures $5.6M to build a Sierra-style AI for Southeast Asian healthcare.
Seattle startup OpenPipe raises $6.7M to help companies reduce LLM costs.
WHO'S HIRING?
Ripple (San Francisco, CA): Developer Advocate Intern, Summer 2024
Databricks (Mountain View, CA): IT Data Engineering Intern, Fall 2024
Motive (Remote): Data Science Intern, Fall 2024
IXL Learning (San Mateo, CA): Software Engineer, New Grad
Neuralink (Fremont, CA): Software Engineer, New Grad
PROMPT OF THE DAY
BALLER BUDGET
Cost-Cutting Hacks
Provide me with some ideas and tips on effectively cutting costs when running [Business].
Business = [Insert Here]
FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today's newsletter?
AI Pulse Review of the Day
"ChatGPT prompt about cutting costs? Big fan of the newsletter."
NOTION TEMPLATES
Subscribe to our newsletter for free and receive these powerful Notion templates:
150 ChatGPT prompts for Copywriting
325 ChatGPT prompts for Email Marketing
Simple Project Management Board
Time Tracker
