🧠 How to Prompt Like a Pro

PLUS: Use This Prompt to Turn OpenAI’s ChatGPT Into Your Personal Tutor

Welcome back AI prodigies!

In today’s Sunday Special:

  • āš™ļøPrompting > Training Data?

  • šŸ“ŠExtracting Chatbot Expertise: An Example

  • šŸ¦¾How To Craft Quality Prompts

  • šŸ”‘Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Large Language Models (LLMs): AI models pre-trained on vast amounts of data to generate human-like text.

  • Machine Learning (ML): Leverages data to recognize patterns and make predictions without explicit instructions from developers.

  • Chain-of-Thought (CoT): A technique that encourages LLMs to explain their reasoning by breaking down complex tasks into manageable steps.

🩺 PULSE CHECK

How often do you use conversational chatbots to learn something?


āš™ļøPROMPTING > TRAINING DATA?

Before LLMs surged in popularity through OpenAI’s ChatGPT, the AI landscape rewarded the companies with the biggest hoards of data. For example, Amazon’s massive amounts of data on Prime members, such as purchase history or recently viewed items, fueled the algorithms that predicted a Prime member’s next purchase. Data was the new oil, provided companies could gather enough of it, clean it properly for analysis, and build ML systems to decipher it.

With LLMs, Tech Giants still believe whoever has access to the most data will win. In fact, Big Tech companies recently used 173,536 YouTube videos across 48,000 channels without creator consent to curate datasets for AI model training. But what fueled this decision? Industry experts believe all high-quality data could be exhausted by 2026. So, Big Tech companies are searching for synthetic data sources and real-world videos to continue training their AI models.

But for consumers like you, the battle over proprietary datasets is less relevant. Knowing how to summon knowledge from free, world-class conversational chatbots like OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini is far more critical.

📊EXTRACTING CHATBOT EXPERTISE: AN EXAMPLE

If you’ve ever tried to get a conversational chatbot to explain something, you probably asked it to simplify a concept. Though effective at producing concise explanations, prompts like “Explain how LLMs work like I’m in high school” fail to interact with you as you learn. Research shows that subject-matter knowledge is just one part of a tutor’s effectiveness. A tutor must also interact with students by tailoring explanations to their current level of understanding, asking them to recall what they’ve learned, and drawing connections between old insights and new knowledge. Here’s a more complex prompt that turns conversational chatbots like OpenAI’s ChatGPT into effective tutors:

You’re a strategic, patient tutor. Your goal is to explain a topic to me in a clear, concise, and straightforward way and check my understanding of the topic. Make sure your explanation matches my learning level without sacrificing accuracy or detail.

First, introduce yourself and let me know that you’ll ask me some questions. Then, ask me four questions to gain insights into my interests, learning level, and existing knowledge on the topic. Only accept answers that are detailed and at least a couple of sentences long. Don’t number the questions for me. Instead, wait for me to respond to each question before moving on to the next question.

- Question #1: Ask me about my learning level (e.g., beginner, intermediate, advanced, or expert).

- Question #2: Ask me what topic I’d like explained.

- Question #3: Ask me why this topic has piqued my interest.

- Question #4: Ask me what I already know about this topic.

Using the information you’ve gathered, provide a clear, concise, and straightforward three-paragraph explanation of the topic with two examples and an analogy. Keep in mind what you now know about me to customize your explanation.

Once you’ve provided the three-paragraph explanation, two examples, and an analogy, ask me three quiz-like questions one at a time to ensure I understand the topic. The three quiz-like questions should get more challenging with each correct answer.

Reflect on my responses to the three quiz-like questions to offer actionable suggestions and general feedback. Wrap up the tutoring session by asking me to explain the topic to you in my own words, providing an example that effectively conveys the topic in a real-world setting. If my example isn’t entirely accurate or detailed enough, offer me helpful hints. Then, end on a positive note!

“Turn Conversational Chatbots Like OpenAI’s ChatGPT Into Your Personal Tutor for Any Topic”
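If you’d rather run a tutor prompt like this through an API instead of a chat window, it belongs in the system slot of the conversation. Here’s a minimal sketch of that packaging; the abbreviated prompt text, helper name, and the commented-out client call are illustrative assumptions, not part of the newsletter’s prompt:

```python
# Sketch: packaging a tutor-style prompt for a chat-style API.
# TUTOR_PROMPT abbreviates the full prompt from this issue.

TUTOR_PROMPT = (
    "You're a strategic, patient tutor. Your goal is to explain a topic "
    "to me in a clear, concise, and straightforward way and check my "
    "understanding. First, introduce yourself, then ask me four "
    "questions, one at a time, before explaining the topic."
)

def build_tutor_messages(history=None):
    """Assemble the message list for a chat request.

    The tutor prompt goes in the system message so it governs the whole
    session; prior user/assistant turns are appended after it.
    """
    messages = [{"role": "system", "content": TUTOR_PROMPT}]
    messages.extend(history or [])
    return messages

# With a chat-completions-style SDK, the call would look roughly like:
#   client.chat.completions.create(model="...", messages=build_tutor_messages())
```

Because the instructions live in the system message rather than a single user turn, the chatbot keeps playing the tutor role across every follow-up question.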

🦾HOW TO CRAFT QUALITY PROMPTS

The prompt above is an example of how you can extract knowledge from LLMs more effectively when exploring a topic. But you may want a prompt for other daily tasks like writing, coding, or cooking. Here are six best practices for prompting LLMs and how they were applied to the prompt above:

  1. Tell the AI Who It’ll Become: Context helps LLMs produce tailored answers in valuable ways, but you don’t need to go overboard. For example, “You’re a strategic, patient tutor.”

  2. Tell the AI Your Specific Objective: For example, “Your goal is to explain a topic to me in a clear, concise, and straightforward way and check my understanding of the topic.”

  3. Step-By-Step Instructions: LLMs work best when you give them explicit step-by-step instructions. For example, “First, introduce yourself and let me know that you’ll ask me some questions. Then, ask me four questions to gain insights into my interests, learning level, and existing knowledge on the topic.” Step-by-step instructions have become more effective with conversational chatbots thanks to the Google Research Brain Team’s recent developments in CoT. We’ll describe how OpenAI is using CoT in more detail shortly.

  4. Personalization: Empower LLMs to ask you for contextual information. For example, “Ask me about my learning level (e.g., beginner, intermediate, advanced, or expert).” This level of personalization puts you in the driver’s seat, ensuring that LLMs meet your specific needs.

  5. Constraints: LLMs often produce outputs that you don’t expect. Constraints ensure they avoid behaviors that impede your objective. For example, “Only accept answers that are detailed and at least a couple of sentences long.”

  6. Fine-Tuning: Refining and adjusting your instructions to achieve more specific, accurate, and relevant outputs is essential when crafting effective prompts. Ask yourself: Is the output helpful? How can I make the output more helpful? Does it need more context? Does it need further constraints? Creating effective prompts requires tweaking through trial and error. Remember, the potential for improvement is always there, so be patient and keep fine-tuning.
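The first five practices can be read as sections of a prompt template assembled in a fixed order, with practice six being the iteration you do between runs. Here’s a sketch of that idea; the helper and its field names are my own illustration, not an official template:

```python
# Sketch: composing a prompt from the five structural practices above.
# Field names and ordering are illustrative, not a standard.

def build_prompt(persona, objective, steps, questions, constraints):
    """Concatenate the prompt ingredients in order:
    persona -> objective -> step-by-step instructions ->
    personalization questions -> constraints."""
    parts = [
        persona,                            # 1. who the AI becomes
        objective,                          # 2. the specific objective
        "Follow these steps in order:",
        *[f"- {s}" for s in steps],         # 3. step-by-step instructions
        "Ask me the following, one at a time:",
        *[f"- {q}" for q in questions],     # 4. personalization
        "Constraints:",
        *[f"- {c}" for c in constraints],   # 5. constraints
    ]
    return "\n".join(parts)

prompt = build_prompt(
    persona="You're a strategic, patient tutor.",
    objective="Explain a topic to me and check my understanding.",
    steps=["Introduce yourself.", "Ask me four questions before explaining."],
    questions=["What is your learning level?", "What topic should I explain?"],
    constraints=["Only accept answers that are at least a couple of sentences."],
)
```

Fine-tuning (practice six) then amounts to editing these fields between runs until the output is as helpful as you need.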

🎮OpenAI’s “OpenAI o1” is a Game Changer!

Prompting is not an exact science, but recent developments in CoT, including OpenAI’s “OpenAI o1,” reward more complex prompts. “OpenAI o1” is a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve more intricate problems in science, coding, and math than any of OpenAI’s previous AI models.

To put this into perspective, “OpenAI o1” reportedly scored an IQ of 120 on the Norway Mensa IQ Test, making it the first AI model to surpass the average human IQ level. In a qualifying exam for the International Mathematical Olympiad (IMO), it correctly solved 83% of the problems; in comparison, GPT-4o (“o” for “omni”) correctly solved only 13%. It also ranked in the 89th percentile on Codeforces’ competitive programming questions and placed among the top 500 students in the U.S. on the American Invitational Mathematics Examination (AIME).

So, how does “OpenAI o1” perform so well? It deploys a CoT framework, which enables it to break down complex problems into manageable steps. Then, it processes each step sequentially, building on previous steps to reach a logical conclusion. The CoT framework enables “OpenAI o1” to learn how to recognize mistakes, try different strategies, and verify solutions. As a result, it’s far more responsive to carefully designed prompts with explicit instructions.

āš”ļøPrompt Library > Custom LLMs?

As the reasoning ability of conversational chatbots improves, specialized prompts will become more powerful. Since more specific tasks require more specific prompts, companies should build prompt libraries to make their employees more productive. Yes, they can also develop their own LLMs for specific job functions (e.g., finance, marketing, or human resources). However, doing so is often time-consuming, resource-intensive, and can take years to implement at scale. As we outlined last week, they must overcome several hurdles to ensure successful implementation, including accuracy levels, cost barriers, data privacy, and security concerns. In the meantime, teams within companies should crowdsource prompts from employees and fine-tune them for specific use cases.

Building prompt libraries also applies to individuals. You can use the prompt above to turn “OpenAI o1” into an effective tutor that considers your learning style. Creating prompt libraries presents a more accessible and practical approach to leveraging the current capabilities of existing LLMs.

🔑KEY TAKEAWAY

Historically, building the best technology was reserved for a select few, such as engineers and designers at top companies. However, recent AI advancements have put us in the driver’s seat. As conversational chatbots become more transparent about their “thought” process, building prompt libraries is a no-brainer.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


ā¤ļøTAIP Review of The Week

“I’m AI crazy now!”

-Bryan (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving the ⚙️Ultimate Prompt Engineering Guide.

Refer 3 friends to learn how to 👷‍♀️Build Custom Versions of OpenAI’s ChatGPT.
