How to Prompt Like a Pro
PLUS: Use This Prompt to Turn OpenAI's ChatGPT Into Your Personal Tutor
Welcome back, AI prodigies!
In today's Sunday Special:
Prompting > Training Data?
Extracting Chatbot Expertise: An Example
How to Craft Quality Prompts
Key Takeaway
Read Time: 7 minutes
Key Terms
Large Language Models (LLMs): AI models pre-trained on vast amounts of data to generate human-like text.
Machine Learning (ML): Leverages data to recognize patterns and make predictions without explicit instructions from developers.
Chain-of-Thought (CoT): A technique that encourages LLMs to explain their reasoning by breaking down complex tasks into manageable steps.
PULSE CHECK
How often do you use conversational chatbots to learn something? Vote below to view live results.
PROMPTING > TRAINING DATA?
Before LLMs surged in popularity through OpenAI's ChatGPT, AI frameworks rewarded the companies with the biggest hoards of data. For example, Amazon's massive trove of data on Prime members, such as purchase history and recently viewed items, fueled the algorithms that predicted a Prime member's next purchase. Data was the new oil, provided companies could gather enough of it, clean it properly for analysis, and build ML systems to decipher it.
With LLMs, Tech Giants still believe whoever has access to the most data will win. In fact, Big Tech companies recently used 173,536 YouTube videos across 48,000 channels without creator consent to curate datasets for AI model training. What fueled this decision? Industry experts believe all high-quality data could be exhausted by 2026, so Big Tech companies are searching for synthetic data sources and real-world videos to continue training their AI models.
But for consumers like you, the battle over proprietary datasets is less relevant. Knowing how to summon knowledge from free, world-class conversational chatbots like OpenAI's ChatGPT, Anthropic's Claude, or Google's Gemini is far more critical.
EXTRACTING CHATBOT EXPERTISE: AN EXAMPLE
If you've ever tried to get a conversational chatbot to explain something, you probably asked it to simplify a concept. Though effective at eliciting concise explanations, prompts like "Explain how LLMs work like I'm in high school" fail to interact with you as you learn. Research shows that subject-matter knowledge is only part of a tutor's effectiveness. A tutor must also interact with students by tailoring explanations to their current level of understanding, asking them to recall what they've learned, and drawing connections between old insights and new knowledge. Here's a more complex prompt that turns conversational chatbots like OpenAI's ChatGPT into effective tutors:
You're a strategic, patient tutor. Your goal is to explain a topic to me in a clear, concise, and straightforward way and check my understanding of the topic. Make sure your explanation matches my learning level without sacrificing accuracy or detail.
First, introduce yourself and let me know that you'll ask me some questions. Then, ask me four questions to gain insights into my interests, learning level, and existing knowledge on the topic. Only accept answers that are detailed and at least a couple of sentences. Don't number the questions for me. Instead, wait for me to respond to each question before moving on to the next question.
- Question #1: Ask me about my learning level (e.g., beginner, intermediate, advanced, or expert).
- Question #2: Ask me what topic I'd like explained.
- Question #3: Ask me why this topic has piqued my interest.
- Question #4: Ask me what I already know about this topic.
Using the information you've gathered, provide a clear, concise, and straightforward three-paragraph explanation of the topic with two examples and an analogy. Keep in mind what you now know about me to customize your explanation.
Once you've provided the three-paragraph explanation, two examples, and an analogy, ask me three quiz-like questions one at a time to ensure I understand the topic. The three quiz-like questions should get more challenging with each correct answer.
Reflect on my responses to the three quiz-like questions to offer actionable suggestions and general feedback. Wrap up the tutoring session by asking me to explain the topic to you in my own words by providing an example that effectively conveys the topic in a real-world setting. If my example isn't entirely accurate or detailed enough for you, offer me helpful hints. Then, end on a positive note!
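If you'd rather run this tutor through an API than a chat window, the prompt above can be loaded as a system message. Below is a minimal sketch: the role/content dictionary format follows the common chat-completions convention, but the `build_messages` helper, the `gpt-4o` model name, and the commented-out API call are illustrative assumptions, not part of the original prompt.

```python
# Sketch: packaging the tutor prompt for a chat-style LLM API.
# The actual API call is left as a comment because client setup
# varies by provider.

TUTOR_PROMPT = (
    "You're a strategic, patient tutor. Your goal is to explain a topic "
    "to me in a clear, concise, and straightforward way and check my "
    "understanding of the topic. ..."  # paste the full prompt from above
)

def build_messages(history, user_input):
    """System message carries the persona; history keeps the tutor's memory."""
    return ([{"role": "system", "content": TUTOR_PROMPT}]
            + history
            + [{"role": "user", "content": user_input}])

history = []
messages = build_messages(history, "Hi, I'd like a tutoring session.")
# `messages` is now ready to send with any chat-completions client, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

After each model reply, append both your message and the assistant's answer to `history` so the tutor remembers your earlier responses across the four-question interview and the quiz.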
HOW TO CRAFT QUALITY PROMPTS
The prompt above is an example of how you can extract knowledge from LLMs more effectively when exploring a topic. But you may want a prompt for other daily tasks like writing, coding, or cooking. Here are six best practices for prompting LLMs and how they were applied to the prompt above:
Tell the AI Who It'll Become: Context helps LLMs produce tailored answers in valuable ways, but you don't need to go overboard. For example, "You're a strategic, patient tutor."
Tell the AI Your Specific Objective: For example, "Your goal is to explain a topic to me in a clear, concise, and straightforward way and check my understanding of the topic."
Step-By-Step Instructions: LLMs work best when you give them explicit step-by-step instructions. For example, "First, introduce yourself and let me know that you'll ask me some questions. Then, ask me four questions to gain insights into my interests, learning level, and existing knowledge on the topic." Step-by-step instructions have become more effective with conversational chatbots thanks to the Google Research Brain Team's recent developments in CoT. We'll describe how OpenAI is using CoT in more detail shortly.
Personalization: Empower LLMs to ask you for contextual information. For example, "Ask me about my learning level (e.g., beginner, intermediate, advanced, or expert)." This level of personalization puts you in the driver's seat, ensuring that LLMs meet your specific needs.
Constraints: LLMs often produce outputs that you don't expect. Constraints ensure they avoid behaviors that impede your objective. For example, "Only accept answers that are detailed and at least a couple of sentences."
Fine-Tuning: Refining and adjusting your instructions to achieve more specific, accurate, and relevant outputs is essential when crafting effective prompts. Ask yourself: Is the output helpful? How can I make it more helpful? Does it need more context? Does it need further constraints? Creating effective prompts requires tweaking through trial and error. Remember, there's always room for improvement, so be patient and keep fine-tuning.
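As a rough illustration, the six practices above can be treated as building blocks and assembled programmatically. The `compose_prompt` helper and its section labels below are hypothetical, one way to organize a prompt rather than an official recipe:

```python
# Sketch: composing a prompt from the best practices above --
# persona, objective, steps, personalization questions, constraints.

def compose_prompt(role, objective, steps, personalization, constraints):
    """Assemble the pieces in order and return a single prompt string."""
    lines = [role, objective, "Follow these steps:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines.append("Before you begin, ask me:")
    lines += [f"- {q}" for q in personalization]
    lines.append("Constraints:")
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = compose_prompt(
    role="You're a strategic, patient tutor.",
    objective="Your goal is to explain a topic to me clearly and check my understanding.",
    steps=["Introduce yourself.",
           "Ask me four questions, one at a time.",
           "Explain the topic in three paragraphs with two examples and an analogy."],
    personalization=["What is your learning level?", "What topic should I explain?"],
    constraints=["Only accept answers that are detailed and at least a couple of sentences."],
)
print(prompt.splitlines()[0])  # -> You're a strategic, patient tutor.
```

Fine-tuning then becomes editing the lists (tightening a constraint, adding a step) and re-running, instead of rewriting a wall of text each time.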
OpenAI's "OpenAI o1" Is a Game Changer!
Prompting is not an exact science, but recent developments in CoT, including OpenAI's "OpenAI o1," reward more complex prompts. "OpenAI o1" is a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve more intricate problems in science, coding, and math than any of OpenAI's previous AI models.
To put this into perspective, "OpenAI o1" reportedly scored an IQ of 120 on the Norway Mensa IQ Test, making it the first AI model to surpass the average human IQ. In a qualifying exam for the International Mathematical Olympiad (IMO), it correctly solved 83% of the problems; in comparison, GPT-4o ("o" for "omni") correctly solved only 13%. It also ranked in the 89th percentile on Codeforces's competitive programming questions and placed among the top 500 students in the U.S. on the American Invitational Mathematics Examination (AIME).
So, how does "OpenAI o1" perform so well? It deploys a CoT framework, which enables it to break down complex problems into manageable steps. It then processes each step sequentially (i.e., "word-by-word"), building on previous steps to reach a logical conclusion. The CoT framework enables "OpenAI o1" to learn to recognize mistakes, try different strategies, and verify solutions. As a result, it's far more responsive to carefully designed prompts with 20 explicit instructions.
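You can borrow a piece of this idea in your own prompts today: the well-known "Let's think step by step" cue from CoT research nudges a model to show its intermediate reasoning. A minimal sketch, where the `with_cot` helper and its exact wording are illustrative assumptions:

```python
# Sketch: eliciting chain-of-thought by appending an explicit
# step-by-step instruction to any question.

def with_cot(question):
    """Wrap a question with a cue that asks for visible reasoning steps."""
    return (f"{question}\n"
            "Let's think step by step, showing each intermediate step, "
            "then state the final answer on its own line.")

print(with_cot("A train travels 120 km in 1.5 hours. What is its average speed?"))
```

Models like "OpenAI o1" bake this behavior in, but the same cue still helps when you're working with models that don't reason by default.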
Prompt Library > Custom LLMs?
As the reasoning ability of conversational chatbots improves, specialized prompts will become more powerful. Since more specific tasks require more specific prompts, companies should build prompt libraries to make their employees more productive. Yes, they can also develop their own LLMs for specific job functions (e.g., finance, marketing, or human resources). However, that's often time-consuming and resource-intensive, and it takes years to implement at scale. As we outlined last week, they must overcome several hurdles to ensure successful implementation, including accuracy levels, cost barriers, data privacy, and security concerns. In the meantime, teams within companies should crowdsource prompts from employees and fine-tune them for specific use cases.
Building prompt libraries also applies to individuals. You can use the prompt above to turn "OpenAI o1" into an effective tutor that considers your learning style. Creating prompt libraries presents a more accessible and practical approach to leveraging the current capabilities of existing LLMs.
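A personal prompt library can start as nothing more than a dictionary of named templates. This sketch is illustrative: the `PROMPT_LIBRARY` entries and the `get_prompt` helper are hypothetical, and it assumes Python's built-in `str.format` for filling in the blanks.

```python
# Sketch: a personal prompt library as a plain dictionary of templates.
# Placeholders like {topic} are filled in at lookup time.

PROMPT_LIBRARY = {
    "tutor": ("You're a strategic, patient tutor. Explain {topic} to a "
              "{level} learner, then quiz me with three questions."),
    "editor": ("You're a meticulous copy editor. Rewrite the text below "
               "for clarity without changing its meaning:\n{text}"),
}

def get_prompt(name, **fields):
    """Look up a template by name and fill in its placeholders."""
    return PROMPT_LIBRARY[name].format(**fields)

print(get_prompt("tutor", topic="chain-of-thought prompting", level="beginner"))
```

As you fine-tune a prompt through trial and error, update its entry in the library so every future session starts from your best version.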
KEY TAKEAWAY
Historically, building the best technology was reserved for a select few, such as engineers and designers at top companies. However, recent AI advancements have put us in the driver's seat. As conversational chatbots become more transparent about their "thought" process, building prompt libraries is a no-brainer.
FINAL NOTE
FEEDBACK
How would you rate today's email? It helps us improve the content for you!
❤️ TAIP Review of the Week
"I'm AI crazy now!"
REFER & EARN
Your Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving the Ultimate Prompt Engineering Guide.
Refer 3 friends to learn how to Build Custom Versions of OpenAI's ChatGPT.
Copy and paste this link to friends: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER