🧠 The Creativity Paradox
PLUS: Testing Creativity Reveals Humans’ Edge
Welcome back, AI prodigies!
In today’s Sunday Special:
🎨Is Generative AI Actually Creative?
🤖GPT-4 vs. College Students
🧩Creativity Thrives On Complexity
Read Time: 5 minutes
🎓Key Terms
Large Language Models (LLMs): deep-learning models that understand and generate text in a human-like fashion. Deep learning finds patterns in data without being told which answers are right or wrong beforehand. For example, Google Photos automatically categorizes your photos into albums.
Machine Learning (ML): an application of artificial intelligence (AI) that provides systems with the ability to automatically learn and improve from experience without being explicitly programmed.
First Principles Thinking: the practice of questioning every assumption you think you know about a given problem, then creating new solutions from scratch.
🎨IS GENERATIVE AI ACTUALLY CREATIVE?
Some researchers think so. Three months ago, a University of Montana professor pitted GPT-4 against college students in the Torrance Tests of Creative Thinking (TTCT), the most widely used creativity assessment. Before we share the results and their potential implications, let’s define creativity:
Creativity requires both novelty and utility. It combines existing things in a new and valuable way or produces entirely new things that serve a purpose. But there’s something abstract, perhaps even magical, about how we create novel ideas. We’ve all experienced the “Aha!” moment, but discerning where it came from and how to replicate it is no easy feat. Being creative involves both convergent and divergent thinking. Convergent thinking synthesizes loose ideas into a specific framework or coherent idea. Divergent thinking works in the opposite direction: starting from a single stimulus and brainstorming outward, radiating into many related thoughts, events, and concepts. The TTCT focuses on divergent thinking.
🤖GPT-4 VS. COLLEGE STUDENTS
The test contains three sections, each with a myriad of challenges. We’ve included a few tasks from each section (most of which were originally conceived for children) for your interest, self-assessment, or amusement. Each task has a time limit based on age, test objective, and other factors.
1. Verbal Tasks Using Verbal Stimuli
Impossibilities: List as many impossibilities as possible.
Just Suppose: Confronted with an unlikely scenario, subjects must predict potential outcomes. New variables will be introduced throughout the exercise to influence their predictions.
2. Verbal Tasks Using Non-Verbal Stimuli
Ask and Guess: Ask non-obvious questions about a picture. Hypothesize the causes and effects of the scenario in the picture.
Unusual Uses: Consider the most clever, engaging, and unique uses of a toy (or any other object).
3. Non-Verbal Tasks (excluded from the ChatGPT-student duel for obvious reasons)
Circles and Squares: On a page with 42 circles of equal size, sketch objects or pictures that use circles. Repeat for squares.
Incomplete Figures: A page has 10 squares, each containing a different stimulus drawing. Sketch objects or designs by adding as many lines as possible to the 10 figures.
Results are scored across four categories: fluency, flexibility, originality, and elaboration. Fluency is the total number of interpretable, meaningful, and relevant ideas generated; flexibility is the number of distinct categories those responses span; originality measures how statistically rare a response is; and elaboration captures the amount of detail added to each idea.
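To make the first two dimensions concrete, here’s a minimal, hypothetical sketch in Python (the real TTCT is scored by trained human raters, and originality and elaboration depend on norm tables and judgment that a few lines of code can’t capture; the responses and categories below are invented):

```python
# Hypothetical sketch of two TTCT scoring dimensions.
# Each response to "unusual uses for a brick" is tagged with a category.
responses = [
    ("doorstop",          "holding things in place"),
    ("paperweight",       "holding things in place"),
    ("makeshift hammer",  "tool substitute"),
    ("garden border",     "decoration"),
    ("exfoliating stone", "personal care"),
]

fluency = len(responses)                                     # total relevant ideas
flexibility = len({category for _, category in responses})   # distinct categories

print(f"Fluency: {fluency}, Flexibility: {flexibility}")
# Fluency: 5, Flexibility: 4
```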
Compared to college students nationwide, guess which percentile GPT-4 placed in. The 99th for fluency, originality, and elaboration, and the 97th for flexibility. If you’ve tinkered with ChatGPT, this shouldn’t be too surprising. Put simply, it read the web, remembered what it read, and (more or less) generates the most likely word to come next, one after another, in response to human prompting.
🧩CREATIVITY THRIVES ON COMPLEXITY
Like GPT-4, all generative ML models, regardless of modality (text-to-image, text-to-video, or image-to-video), output content that most likely matches the prompt. These probabilities stem from the data the model trained on, all of which were originally created by humans. The AI-human relationship is similar to that of a farmer and chef. A farmer must raise crops and livestock for a chef to create tasty food. Like the farmer, only humans can produce content from scratch, and AI relies on human creations to generate “novel” outputs.
Models may combine one or more types of content (sentences, pictures, sounds, and videos) into something that never existed before. But their “reasoning” works only by analogy to what has already been done, digitized, and documented. Without significant prompting and back-and-forth with a human, no output-generating tool can reason from first principles to a revolutionary (novel, helpful, and feasible) idea. At least not yet.
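If “most likely words, one after another” feels abstract, here’s a toy sketch in Python. The probability table is invented purely for illustration; real models learn billions of such statistics over tokens rather than a handful over words:

```python
import random

# Toy next-word sampler (not GPT-4's actual implementation). A model's
# "knowledge" is reduced here to an invented table mapping the last two
# words to probabilities for the next word.
next_word_probs = {
    ("creativity", "requires"): {"novelty": 0.6, "effort": 0.3, "luck": 0.1},
    ("requires", "novelty"):    {"and": 0.8, "plus": 0.2},
    ("novelty", "and"):         {"utility": 0.7, "value": 0.3},
}

def generate(prompt: str, steps: int = 3) -> str:
    """Append up to `steps` words by sampling from the probability table."""
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])
        probs = next_word_probs.get(context)
        if probs is None:  # context never seen in "training" data
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("creativity requires"))
# Possible output: "creativity requires novelty and utility"
```

Scale that table up by billions of parameters and you get the gist: every output is anchored to patterns humans already wrote down.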
Given these findings, the creativity skills being tested become more pertinent. In narrow, structured assessments, LLMs are quick, high-volume brainstormers. But most problems worth solving require an unprogrammable mixture of ingredients that humans can’t fully define: making connections between disparate fields, seeing the big picture, intuition, and many others. And when humans are involved, relationship-building, power dynamics, cultural differences, competing personalities, murky motives, and a seemingly infinite list of other factors complicate the picture.
📒FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today’s newsletter?
❤️AI Pulse Review of The Week
“I read y’all every day on my way to work.”
🎁NOTION TEMPLATES
🚨Subscribe to our newsletter for free and receive these powerful Notion templates:
⚙️150 ChatGPT Prompts for Copywriting
⚙️325 ChatGPT Prompts for Email Marketing
📆Simple Project Management Board
⏱Time Tracker