- The AI Pulse
- Posts
- š§ Unveiling Big Techās New AI Playbook
š§ Unveiling Big Techās New AI Playbook
PLUS: What Must Developers Overcome to Captivate Consumers?
Welcome back AI prodigies!
In todayās Sunday Special:
šThe Beginning
šMake Something People Want
š·Four Obstacles For Developers
šKey Takeaway
Read Time: 7 minutes
šKey Terms
Hyperscaler: A large-scale cloud service provider that offers computing, storage, and network services for AI applications.
Large Language Models (LLMs): AI models pre-trained on vast amounts of data to generate human-like text.
Foundation Models: Versatile AI models that can be fine-tuned to build various AI applications.
Machine Learning (ML): Leverages data to recognize patterns and make predictions without explicit instructions from developers.
Prompt Injections: When malicious prompts are disguised as legitimate prompts to manipulate conversational chatbots into leaking sensitive content or taking harmful actions.
š©ŗ PULSE CHECK
Will Big Tech companies benefit from their billion-dollar AI infrastructure investments?Vote Below to View Live Results |
šTHE BEGINNING
Big Tech companies are spending trillions of dollars on AI hardware and data center infrastructure. Still, they donāt have a clear plan on how to generate revenue from AI-enabled products or services. This glaring short-term mismatch between AI investments and the technologyās revenue has led to growing concerns over the Generative AI (GenAI) bubble.
As we outlined a few weeks ago, the underlying technology is truly transformative, but many investors may be getting ahead of themselves. And despite record-high share prices, the most profitable, sophisticated Big Tech companies have had a bumpy ride.
What mistakes have Big Tech companiesāAI model developers (e.g., OpenAI) and hyperscalers (e.g., Google Cloud)āmade? How have they tried to course-correct? How can they make enough money to justify their initial investment in AI hardware and data center infrastructure?
šMAKE SOMETHING PEOPLE WANT
When OpenAIās ChatGPT first launched, people found countless unexpected use cases and were curious about its guardrails. Developers eagerly sought to build AI tools using ChatGPTās core processes and create impressive concepts. However, this eagerness led to a growing gap between impressive concepts and beneficial products that consumers actually want. This growing gap led to a flawed approach with making money using Large Language Models (LLMs).
OpenAI initially focused on building impressive AI models without worrying about products or services. For example, they were slow to capitalize on the potential of mobile devices. It took OpenAI six months to release a ChatGPT iOS app and eight months to release a ChatGPT Android app. Given that over half of all humans spend most of their screen time on mobile devices, OpenAI wasnāt focused on reaching a vast audience to accelerate adoption. They were solely focused on building impressive AI models.
Within five days of OpenAI launching ChatGPT, it reached over one million active users. Microsoft noticed the growing popularity of conversational chatbots and invested over $13 billion into OpenAI to have a 49% ownership stake in the company and access to its AI models. Then, Microsoft shoved AI into everything without considering which products or services would benefit from it. Most famously, Microsoft threw ChatGPT into Bing Search to reinvent it. In response, it accelerated Googleās efforts to integrate AI Overviews into Google Search. Before we knew it, AI dominated every product or service offered by Big Tech companies.
Amid this AI surge, Big Tech companies forgot legendary startup incubator Y Combinatorās slogan: āMake something people want.ā The general-purpose nature of consumer-facing LLMs like OpenAIās ChatGPT created a facade of Product-Market Fit (PMF), which refers to how well a product or service meets the needs of a target market.
OpenAIās ChatGPT can generate a recipe, write a poem, or summarize complex topics. This ability to answer various user queries created the illusion of a product that could meet the needs of a wide range of users. However, the conversational chatbotās generic, surface-level, error-prone responses often failed to address specific problems effectively.
OpenAIās ChatGPT is like a Swiss Army Knife. Itās versatile and can handle many jobs, but itās not always the best tool for each specific job. For example, imagine youāre trying to fix a leaky faucet. A Swiss Army Knife might be able to tighten a screw or cut a small rubber tube, but itās not designed for plumbing tasks.
To ensure products or services meet the specific needs of a target market, Big Tech companies must focus on Product-Problem Fit (PPF), which refers to testing whether a product or service solves a real problem for your customers and whether theyāre willing to pay for it. By prioritizing PPF, you can develop a product or service that truly resonates with your audience and delivers the value they seek.
Big Tech companies are quickly changing their ways to prioritize PPF. For instance, OpenAI has transitioned from a research lab into a For-Profit Benefit Corporation (āB. Corpā) that will no longer be controlled by a non-profit Board of Directors (BofD). At OpenAI DevDay 2024, the company made four major announcements to make AI models more accessible, efficient, and cost-effective for developers. OpenAI is planning to make more developer-centric products in 2025. Nvidia recently partnered with global consulting firm Accenture to drive the adoption of AI applications within businesses. Nvidiaās AI applications will perform in-depth, category-specific processes like customer service or supply chain management.
Big Tech companies are starting to focus on how AI investments can deliver tangible value by building AI applications that solve specific use cases for specific customers. OpenAI is prioritizing developers while Nvidia is prioritizing business operations.
š·FOUR OBSTACLES FOR DEVELOPERS
Big Tech companies that develop LLMs must address four challenges as they build AI-enabled products or services at scale: cost, accuracy, data privacy, and safety and security.
1. Cost
There are many AI-enabled applications where capability isnāt the primary barrier; cost is. For instance, cost concerns dictate how much chat history a conversational chatbot can track. Processing the entire chat history for every response quickly gets expensive as the conversations grow longer and more interconnected. For example, OpenAIās ChatGPT costs over $700,000 daily to operate.
There has been rapid progress regarding cost concerns. In the last 18 months, the cost-effectiveness of AI models has improved dramatically. For example, OpenAI CEO Sam Altman claims that LLMs will soon be too cheap to meter, meaning they wonāt even charge developers when their AI-enabled applications ask the underlying LLMs for information. However, costs will continue to be a concern because, in many AI-enabled applications, cost improvements donāt directly translate to accuracy improvements, requiring users to refine user queries multiple times to achieve desired outputs, which offsets the cost savings.
2. Accuracy
If an AI system performs a task correctly 90% of the time, itās unreliable. Perfect accuracy is intrinsically hard to achieve with statistical learning-based AI systems like LLMs. However, perfect accuracy isnāt the goal with advertisement targeting, fraud detection, or weather forecasting. The AI system just has to be much better than the status quo. Even in medical diagnosis, we tolerate a lot of error.
But when developers integrate AI into consumer products, people expect it to behave like software, meaning it should do exactly what they expect. If I press the āNextā button, I expect it to take me to the following webpage. But it wonāt be successful if your autonomous AI Agent books vacations to the correct destination only 90% of the time.
3. Data Privacy
Historically, Machine Learning (ML) has relied on sensitive data sources like browsing history for targeted advertising. Although Foundation Models primarily rely on public data through websites or concepts in research articles, personalized AI products reignite privacy concerns, and rightfully so. Your email writing AI copilot would be much better if trained on your personal emails.
Of course, such necessities arenāt easy to sell. For example, at Microsoft Build 2024, Microsoft introduced āCopilot+ Recall,ā which tracks everything you see and do on your device. It constantly takes screenshots of whatās on your screen. Then, it leverages an on-device GenAI model and Neural Processing Unit (NPU) to process all those screenshots, making them searchable. When you perform a āRecallā action, the feature presents a screenshot related to your question to jog your memory. āCopilot+ Recallā was designed to give your device a photographic memory. However, after public backlash, Microsoft scrapped the concept.
4. Safety and Security
Safety issues and security risks are a minefield for AI-enabled applications. For instance, Big Tech companies must prevent bad actors from manipulating AI-enabled applications to elicit harmful responses that spread misinformation in critical, high-risk Chemical, Biological, Radiological, and Nuclear (CBRN) domains.
For example, Anthropic launched a bug bounty program to identify critical vulnerabilities in AI systems. The program focused on identifying and mitigating āuniversal jailbreak attacksā: a method to circumnavigate an AI systemās built-in safety measures and ethical guidelines. Finding weaknesses in AI systems allows Anthropic to build more robust and secure AI models, which in turn helps foster public trust in AI technology.
šKEY TAKEAWAY
AI advocates and short-term AI investors often claim that we should see massive societal and economic benefits soon due to the rapid improvement of AI capabilities. However, thereās a growing tension between the hype surrounding AI and the practical challenges of monetizing it.
The massive investments by Big Tech companies in AI hardware and data center infrastructure are outpacing the technologyās short-term revenue potential. To succeed, they must focus on building practical AI solutions that address real user needs rather than simply showcasing impressive AI offerings no one asked for.
šFINAL NOTE
FEEDBACK
How would you rate todayās email?It helps us improve the content for you! |
ā¤ļøTAIP Review of The Week
āThis newsletter is a breath of fresh air! Greetings from Sweden.šøšŖā
REFER & EARN
šYour Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving āļøUltimate Prompt Engineering Guide.
Refer 3 friends to learn how to š·āāļøBuild Custom Versions of OpenAIās ChatGPT.
Copy and paste this link to friends: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER
Reply