
🧠 Unveiling Big Tech's New AI Playbook

PLUS: What Must Developers Overcome to Captivate Consumers?

Welcome back, AI prodigies!

In today's Sunday Special:

  • 📜The Beginning

  • 🛒Make Something People Want

  • 👷Four Obstacles For Developers

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Hyperscaler: A large-scale cloud service provider that offers computing, storage, and network services for AI applications.

  • Large Language Models (LLMs): AI models pre-trained on vast amounts of data to generate human-like text.

  • Foundation Models: Versatile AI models that can be fine-tuned to build various AI applications.

  • Machine Learning (ML): Leverages data to recognize patterns and make predictions without explicit instructions from developers.

  • Prompt Injections: When malicious prompts are disguised as legitimate prompts to manipulate conversational chatbots into leaking sensitive content or taking harmful actions.

🩺 PULSE CHECK

Will Big Tech companies benefit from their billion-dollar AI infrastructure investments?


📜THE BEGINNING

Big Tech companies are spending trillions of dollars on AI hardware and data center infrastructure. Still, they don't have a clear plan for generating revenue from AI-enabled products or services. This glaring short-term mismatch between AI investment and the technology's revenue has fueled growing concerns about a Generative AI (GenAI) bubble.

As we outlined a few weeks ago, the underlying technology is truly transformative, but many investors may be getting ahead of themselves. And despite record-high share prices, the most profitable, sophisticated Big Tech companies have had a bumpy ride.

What mistakes have Big Tech companies, both AI model developers (e.g., OpenAI) and hyperscalers (e.g., Google Cloud), made? How have they tried to course-correct? And how can they make enough money to justify their initial investments in AI hardware and data center infrastructure?

🛒MAKE SOMETHING PEOPLE WANT

When OpenAI's ChatGPT first launched, people found countless unexpected use cases and probed its guardrails. Developers eagerly built AI tools on top of ChatGPT's core capabilities, producing impressive concepts. However, this eagerness widened the gap between impressive concepts and beneficial products that consumers actually want, and that gap fostered a flawed approach to making money with Large Language Models (LLMs).

OpenAI initially focused on building impressive AI models without worrying about products or services. For example, they were slow to capitalize on the potential of mobile devices. It took OpenAI six months to release a ChatGPT iOS app and eight months to release a ChatGPT Android app. Given that over half of all humans spend most of their screen time on mobile devices, OpenAI wasn't focused on reaching a vast audience to accelerate adoption. They were solely focused on building impressive AI models.

Within five days of launch, OpenAI's ChatGPT reached over one million active users. Microsoft noticed the growing popularity of conversational chatbots and invested over $13 billion in OpenAI in exchange for a 49% stake in the company and access to its AI models. Then, Microsoft shoved AI into everything without considering which products or services would actually benefit from it. Most famously, Microsoft threw ChatGPT into Bing Search to reinvent it. In response, Google accelerated its efforts to integrate AI Overviews into Google Search. Before we knew it, AI dominated every product or service offered by Big Tech companies.

Amid this AI surge, Big Tech companies forgot legendary startup incubator Y Combinator's slogan: "Make something people want." The general-purpose nature of consumer-facing LLMs like OpenAI's ChatGPT created a facade of Product-Market Fit (PMF), which refers to how well a product or service meets the needs of a target market.

OpenAI's ChatGPT can generate a recipe, write a poem, or summarize complex topics. This ability to answer various user queries created the illusion of a product that could meet the needs of a wide range of users. However, the conversational chatbot's generic, surface-level, error-prone responses often failed to address specific problems effectively.

OpenAI's ChatGPT is like a Swiss Army Knife. It's versatile and can handle many jobs, but it's not always the best tool for each specific job. For example, imagine you're trying to fix a leaky faucet. A Swiss Army Knife might be able to tighten a screw or cut a small rubber tube, but it's not designed for plumbing tasks.

To ensure products or services meet the specific needs of a target market, Big Tech companies must focus on Product-Problem Fit (PPF), which refers to testing whether a product or service solves a real problem for customers and whether they're willing to pay for it. By prioritizing PPF, companies can develop products or services that truly resonate with their audience and deliver the value that audience seeks.

Big Tech companies are quickly changing their ways to prioritize PPF. For instance, OpenAI has transitioned from a research lab into a For-Profit Benefit Corporation ("B Corp") that will no longer be controlled by a non-profit Board of Directors (BofD). At OpenAI DevDay 2024, the company made four major announcements to make AI models more accessible, efficient, and cost-effective for developers. OpenAI is planning to make more developer-centric products in 2025. Nvidia recently partnered with global consulting firm Accenture to drive the adoption of AI applications within businesses. Nvidia's AI applications will perform in-depth, category-specific processes like customer service or supply chain management.

Big Tech companies are starting to focus on how AI investments can deliver tangible value by building AI applications that solve specific use cases for specific customers. OpenAI is prioritizing developers, while Nvidia is prioritizing business operations.

👷FOUR OBSTACLES FOR DEVELOPERS

Big Tech companies that develop LLMs must address four challenges as they build AI-enabled products or services at scale: cost, accuracy, data privacy, and safety and security.

1. Cost

There are many AI-enabled applications where capability isn't the primary barrier; cost is. For instance, cost concerns dictate how much chat history a conversational chatbot can track. Processing the entire chat history for every response quickly gets expensive as conversations grow longer and more interconnected. For example, OpenAI's ChatGPT costs over $700,000 daily to operate.

There has been rapid progress on cost concerns. In the last 18 months, the cost-effectiveness of AI models has improved dramatically. For example, OpenAI CEO Sam Altman claims that LLMs will soon be too cheap to meter, meaning developers won't even be charged when their AI-enabled applications query the underlying LLMs. However, costs will remain a concern because, in many AI-enabled applications, cost improvements don't directly translate into accuracy improvements: users must still refine their queries multiple times to achieve the desired output, which offsets the cost savings.
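To see why chat history drives cost, here's a minimal, illustrative sketch. The per-token price and per-turn token count below are invented assumptions, not real OpenAI pricing; the point is only that resending the full conversation each turn makes total cost grow quadratically with conversation length:

```python
# Illustrative sketch: resending the full chat history every turn.
# PRICE_PER_1K_TOKENS is a hypothetical figure, not a real quote.

PRICE_PER_1K_TOKENS = 0.01  # dollars (assumption for illustration)

def turn_cost(history_tokens: int, new_tokens: int) -> float:
    """Cost of one response when the whole history is resent as context."""
    return (history_tokens + new_tokens) / 1000 * PRICE_PER_1K_TOKENS

def conversation_cost(turns: int, tokens_per_turn: int = 500) -> float:
    """Total cost of a conversation that resends all prior turns each time."""
    total, history = 0.0, 0
    for _ in range(turns):
        total += turn_cost(history, tokens_per_turn)
        history += tokens_per_turn  # the context grows every turn
    return total

# Total cost grows roughly with the square of the turn count,
# because each new turn pays again for every earlier turn:
print(conversation_cost(10))
print(conversation_cost(100))
```

A 10x longer conversation costs far more than 10x as much, which is why real chatbots truncate or summarize history rather than resend all of it.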

2. Accuracy

If an AI system performs a task correctly 90% of the time, it's unreliable. Perfect accuracy is intrinsically hard to achieve with statistical learning-based AI systems like LLMs. However, perfect accuracy isn't the goal with advertisement targeting, fraud detection, or weather forecasting. The AI system just has to be much better than the status quo. Even in medical diagnosis, we tolerate a lot of error.

But when developers integrate AI into consumer products, people expect it to behave like software, meaning it should do exactly what they expect. If I press the "Next" button, I expect it to take me to the following webpage. An autonomous AI Agent that books vacations to the correct destination only 90% of the time won't be successful.
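The reliability gap compounds quickly once an agent chains multiple steps together. A small sketch, assuming each step succeeds independently 90% of the time (illustrative figures, not measured data):

```python
# Illustrative sketch: per-step accuracy compounds across multi-step tasks.
# The 90% figure and step counts are assumptions for illustration.

def task_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step succeeds, assuming independent failures."""
    return per_step_accuracy ** steps

# A vacation-booking agent might chain flight search, hotel booking,
# and payment; each added step multiplies in another chance to fail:
for steps in (1, 5, 10):
    rate = task_success_rate(0.9, steps)
    print(f"{steps} steps at 90% each -> {rate:.0%} end-to-end")
```

At ten chained steps, a "90% accurate" agent completes the whole task barely a third of the time, which is why consumers experience it as broken software.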

3. Data Privacy

Historically, Machine Learning (ML) has relied on sensitive data sources like browsing history for targeted advertising. Although Foundation Models primarily rely on public data from websites and research articles, personalized AI products reignite privacy concerns, and rightfully so. Your email-writing AI copilot would be much better if it were trained on your personal emails.

Of course, such capabilities aren't an easy sell. For example, at Microsoft Build 2024, Microsoft introduced "Copilot+ Recall," which tracks everything you see and do on your device. It constantly takes screenshots of what's on your screen, then leverages an on-device GenAI model and Neural Processing Unit (NPU) to process those screenshots and make them searchable. When you perform a "Recall" action, the feature presents a screenshot related to your question to jog your memory. "Copilot+ Recall" was designed to give your device a photographic memory. However, after public backlash, Microsoft scrapped the concept.
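The searchable-screenshot idea behind "Copilot+ Recall" can be sketched as a tiny local index: store the text extracted from each screenshot, then search it on demand. This is a toy assumption of how such a feature might be structured; the real product used an on-device GenAI model and NPU, and the class, file names, and sample text below are all invented for illustration:

```python
# Toy sketch of a local searchable-screenshot index (hypothetical design,
# not Microsoft's implementation). Text extraction is stubbed out: in a
# real system it would come from on-device OCR or a vision model.

from datetime import datetime

class ScreenshotIndex:
    def __init__(self):
        self._entries = []  # (timestamp, extracted_text, image_path)

    def add(self, image_path: str, extracted_text: str) -> None:
        """Record one screenshot along with the text visible in it."""
        self._entries.append((datetime.now(), extracted_text.lower(), image_path))

    def recall(self, query: str) -> list:
        """Return paths of screenshots whose on-screen text matched the query."""
        q = query.lower()
        return [path for _, text, path in self._entries if q in text]

index = ScreenshotIndex()
index.add("shot_001.png", "Flight receipt: Stockholm to Oslo, $120")
index.add("shot_002.png", "Team standup notes for Monday")
print(index.recall("stockholm"))  # -> ["shot_001.png"]
```

Even this toy version makes the privacy concern concrete: the index is a running transcript of everything on your screen, so anyone who can read it can reconstruct your activity.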

4. Safety and Security

Safety issues and security risks are a minefield for AI-enabled applications. For instance, Big Tech companies must prevent bad actors from manipulating AI-enabled applications to elicit harmful responses that spread misinformation in critical, high-risk Chemical, Biological, Radiological, and Nuclear (CBRN) domains.

For example, Anthropic launched a bug bounty program to identify critical vulnerabilities in AI systems. The program focused on identifying and mitigating "universal jailbreak attacks": methods to circumvent an AI system's built-in safety measures and ethical guidelines. Finding weaknesses in AI systems allows Anthropic to build more robust and secure AI models, which in turn helps foster public trust in AI technology.
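One of the simplest defensive layers is screening user input before it ever reaches the model. The sketch below is deliberately naive, with an invented phrase list, and is only meant to illustrate the idea; production guardrails rely on trained classifiers and model-level safety training, precisely because keyword filters like this one are trivial to circumvent:

```python
# Deliberately naive input filter against prompt injections (illustration
# only). The phrase list is invented; real guardrails use trained
# classifiers, since simple keyword matching is easy to evade.

SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard your guidelines",
    "reveal your system prompt",
)

def looks_like_injection(user_prompt: str) -> bool:
    """Flag prompts containing known jailbreak phrasing (case-insensitive)."""
    lowered = user_prompt.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

print(looks_like_injection("What's the weather in Stockholm?"))    # benign
print(looks_like_injection("Ignore previous instructions and..."))  # flagged
```

Bug bounty programs exist exactly because attackers find phrasings no static list anticipates, so defenses must be probed and updated continuously.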

🔑KEY TAKEAWAY

AI advocates and short-term AI investors often claim that we should see massive societal and economic benefits soon due to the rapid improvement of AI capabilities. However, there's a growing tension between the hype surrounding AI and the practical challenges of monetizing it.

The massive investments by Big Tech companies in AI hardware and data center infrastructure are outpacing the technology's short-term revenue potential. To succeed, they must focus on building practical AI solutions that address real user needs rather than simply showcasing impressive AI offerings no one asked for.

📒FINAL NOTE

FEEDBACK

How would you rate today's email?

It helps us improve the content for you!


ā¤ļøTAIP Review of The Week

"This newsletter is a breath of fresh air! Greetings from Sweden.🇸🇪"

-Erik (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving the ⚙️Ultimate Prompt Engineering Guide.

Refer 3 friends to learn how to 👷‍♀️Build Custom Versions of OpenAI's ChatGPT.
