🧠 Are Self-Driving Cars Safe?

PLUS: Why You Always Hear About AI But Don’t See Transformative Change

Welcome back AI prodigies!

In today’s Sunday Special:

  • đŸ€–Why Hasn’t AI Changed Our Lives Yet?

  • 🚖Are Waymo’s Self-Driving Cars Safer Than Humans?

  • đŸŠșAre Human Drivers Causing Waymo’s Accidents?

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Large Language Models (LLMs): AI models pre-trained on vast amounts of data to generate human-like text.

  • Enterprise AI: Integrating AI models into business processes to streamline operations and drive better decision-making.

  • Machine Learning (ML): Leverages data to recognize patterns and make predictions without explicit instructions from developers.

  • Supervised Machine Learning (SML): Uses labeled datasets to learn the relationships between inputs and outputs to predict outcomes.

  • Return on Investment (ROI): Measures how much money an investor has earned or lost on an investment relative to the amount initially invested.

đŸ©ș PULSE CHECK

Do you trust human drivers or self-driving cars more?

Vote Below to View Live Results

Login or Subscribe to participate in polls.

đŸ€–WHY HASN’T AI CHANGED OUR LIVES YET?

AI is one of the most transformative forces of our time, yet our day-to-day lives haven’t changed much. Why? Roughly 80% of AI startups over the past two years have focused on developing AI solutions for businesses rather than creating AI platforms for consumers. Why do AI startups favor businesses over consumers? By promising businesses cost savings and greater operational efficiency, AI startups can secure long-term contracts worth millions of dollars that guarantee ROI for their investors. Though corporate America’s customers (i.e., the general public) will ultimately benefit, Enterprise AI rollouts have been blocked by several obstacles. Here’s a snapshot of the most pressing obstacles across various industries:

  1. Data Quality and Accessibility: A retail company trying to implement ML for personalized recommendations faces inconsistent customer data across its various platforms (e.g., website, mobile app, or physical store). Inaccurate or missing customer information, like outdated preferences or incomplete purchase histories, impedes the accuracy of AI models.

  2. Integration With Legacy Systems: A financial institution attempts to introduce new AI-enabled predictive fraud detection frameworks but struggles to integrate them within its legacy banking infrastructure, which uses software implemented in the 1990s. This issue causes delays in processing and limits the AI system’s ability to access real-time transaction data for fraud detection.

  3. Skills and Talent Gaps: A healthcare provider wants to use SML to analyze patient records and predict disease outbreaks but can’t find enough qualified Data Scientists and ML Engineers to develop and deploy the AI system. Hiring a third-party contractor is too expensive, as the hospital competes with high-paying Big Tech companies for talent.

  4. Cost Constraints: A small manufacturing company wants to implement ML to predict when its machinery might break down, but it can’t afford the initial costs of setting up AI infrastructure, including purchasing vision sensors, accessing cloud computing resources, and hiring Data Scientists. The ongoing costs of AI model training and maintenance are also prohibitive.

  5. Bias and Ethics: A Big Tech company implements an ML tool to screen resumes. However, the AI model disproportionately favors male candidates over female candidates due to biases in historical hiring data used to train the AI model.

  6. Regulation, Security, and Compliance: A global e-commerce platform wants to deploy conversational chatbots for customer service but faces challenges in complying with privacy regulations like the General Data Protection Regulation (GDPR), a European Union (EU) law governing how organizations collect, store, process, and share individuals’ personal data. The platform’s in-house LLM, trained on past customer conversations, must comply with this law.

Self-Driving Cars?

While Enterprise AI works behind the scenes, other real-world AI use cases feel like science fiction and already impact millions of Americans. If you live in Austin, TX, Phoenix, AZ, Los Angeles, CA, or San Francisco, CA, you’ve probably seen white Jaguar I-PACE SUVs with mounted cameras roaming the streets. Waymo, an autonomous ride-hailing service owned by Alphabet, Google’s parent company, has offered fully autonomous rides without human drivers since 2017. Like any disruptive technology, self-driving cars have faced legal, ethical, technical, and environmental challenges. But no challenge is as pertinent as safety.

🚖ARE WAYMO’S SELF-DRIVING CARS SAFER THAN HUMANS?

For any great question, the answer depends on what data you use. Waymo published a Safety Impact Website that offers data-driven insights into how the Waymo Driver is “already making roads safer in the places we currently operate.” For instance, the Waymo Driver has over 22 million miles of real-world driving experience in Phoenix, AZ, and San Francisco, CA. Compared to a human driver with the same miles in the same locations, Waymo achieves:

  • 81% Fewer Airbag Deployment Crashes

  • 72% Fewer Injury-Causing Crashes

  • 57% Fewer Police-Reported Crashes

At first glance, these data-driven insights are encouraging. Crashes caused by human drivers may also be severely underreported, which would make Waymo rides look even safer by comparison. The National Highway Traffic Safety Administration (NHTSA) estimates that 32% of injury crashes and 60% of property damage crashes aren’t reported to police. In contrast, autonomous ride-hailing services report even the most minor crashes, prioritizing transparency to gain public trust: Waymo is legally required to report any physical contact that results, or allegedly results, in injury, property damage, or a fatality. Police, on the other hand, won’t file collision reports for minor crashes between human drivers, so those crashes are excluded from the national database.
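As a back-of-the-envelope illustration, the NHTSA underreporting estimates above can be used to scale up police-reported human crash counts before comparing them against Waymo’s fully reported figures. The crash counts below are hypothetical placeholders, not Waymo’s actual benchmark data; only the 32% and 60% rates come from the NHTSA estimate cited above.

```python
# Hypothetical police-reported crash counts for human drivers
# over a comparable number of miles (placeholder values).
reported_injury_crashes = 100
reported_property_crashes = 300

# NHTSA estimates cited above: 32% of injury crashes and 60% of
# property-damage crashes are never reported to police.
INJURY_UNDERREPORT = 0.32
PROPERTY_UNDERREPORT = 0.60

# If only (1 - rate) of crashes get reported, the true total is
# reported / (1 - rate).
true_injury = reported_injury_crashes / (1 - INJURY_UNDERREPORT)
true_property = reported_property_crashes / (1 - PROPERTY_UNDERREPORT)

print(round(true_injury))    # ≈ 147 true injury crashes
print(round(true_property))  # ≈ 750 true property-damage crashes
```

In other words, for every 100 police-reported injury crashes by human drivers, roughly 47 more likely went unreported, while Waymo’s side of the ledger already counts everything.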

Though this reporting gap falls in Waymo’s favor, other factors don’t. Some streets are more crash-prone than others, with hairpin turns, poor signage, or challenging intersections. Waymo’s self-driving cars, which operate significantly only in large cities, drive on city streets with crash profiles different from the average street. When benchmarking its data against human drivers around large cities, Waymo adjusts for the differences in crash risk between average streets and city streets. This statistical adjustment, coupled with the underreporting of human-driver crashes, adds credibility to Waymo’s impressive safety claims. However, to truly understand the safety profile of self-driving cars, we must examine what caused Waymo’s crashes. Who exactly was at fault: Waymo or human drivers?

đŸŠșARE HUMAN DRIVERS CAUSING WAYMO’S ACCIDENTS?

Out of nearly 200 reported Waymo accidents, only 20 resulted in injuries. This ratio makes sense, as 43% of crashes in Phoenix, AZ, and San Francisco, CA, result in only minor damage. Of the 20 injuries, only one was considered serious. Interestingly, this serious injury occurred when a human driver, fleeing from the police, collided with two other cars during a chase, one of which was a Waymo Jaguar I-PACE SUV. The injured pedestrians were struck by the fleeing car; no one in the self-driving car itself was harmed. When examining Waymo’s 23 most serious incidents, here’s what caused them:

  • 2 involved human drivers veering into lanes.

  • 2 were human drivers turning in front of Waymo’s self-driving car.

  • 3 were human drivers who ran a red light.

  • 16 were human drivers rear-ending Waymo’s self-driving car.
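The breakdown above is small enough to tally directly. A quick sketch, using only the figures from the list above, confirms the total and shows how dominant rear-end collisions are:

```python
# Causes of Waymo's 23 most serious incidents, per the list above.
causes = {
    "human driver veered into lane": 2,
    "human driver turned in front of Waymo": 2,
    "human driver ran a red light": 3,
    "human driver rear-ended Waymo": 16,
}

total = sum(causes.values())  # should equal 23
rear_end_share = causes["human driver rear-ended Waymo"] / total

print(total)                        # 23
print(round(rear_end_share * 100))  # 70 (% of serious incidents)
```

Roughly 70% of Waymo’s most serious incidents were human drivers rear-ending the self-driving car, a crash type the trailing driver is typically at fault for.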

Notably, no serious accidents were caused by Waymo’s self-driving car running a red light, hitting another car, or engaging in other reckless driving behavior. These reassuring data-driven insights come at a critical time for Waymo, which is rapidly scaling up its autonomous ride-hailing service.

In 2023, Waymo was providing 10,000 rides per week. This year, weekly ridership exceeded 150,000. Next year, Waymo plans to expand coverage to Atlanta, GA. All the evidence so far suggests that Waymo is making the streets safer. To some, this may sound too good to be true. To others, it may be unsurprising. Either way, the verdict is in: Waymo’s self-driving cars in Phoenix, AZ, and San Francisco, CA, are safer than human drivers.

What Makes Them Safer?

Waymo’s self-driving cars leverage a network of cameras, RADAR, LiDAR, and Computer Vision (CV) to constantly monitor the vehicle’s surroundings. CV enables computers to interpret, analyze, and extract information from visual data. Waymo also employs Neural Networks (NNs) trained on large volumes of driving data, covering maneuvers such as handling turns, navigating around obstacles, and making speed adjustments. NNs recognize patterns in driving data to make real-time decisions for safer navigation.

Waymo’s self-driving cars excel because they employ Sensor Fusion: combining visual data from multiple sensors (e.g., RADAR and LiDAR) to create a robust and reliable understanding of the vehicle’s surroundings, enabling safer autonomous navigation.
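A minimal sketch of the idea behind Sensor Fusion: here, two noisy range estimates of the same obstacle (a RADAR-like reading and a LiDAR-like reading) are combined with inverse-variance weighting, a textbook fusion technique. The sensor values and noise figures are illustrative toy numbers, not Waymo’s actual pipeline:

```python
def fuse(estimates):
    """Combine independent noisy measurements of the same quantity
    using inverse-variance weighting (a simple form of sensor fusion).

    estimates: list of (value, variance) pairs, one per sensor.
    Returns the fused value and its fused variance, which is always
    smaller than any single sensor's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# Toy example: a noisy RADAR-like sensor says the obstacle is 10.2 m
# away (variance 0.25); a precise LiDAR-like sensor says 9.9 m
# (variance 0.01). The fused estimate leans toward the better sensor.
value, var = fuse([(10.2, 0.25), (9.9, 0.01)])
```

The payoff is that the fused estimate is both closer to the more reliable sensor and more certain than either sensor alone, which is why combining RADAR and LiDAR beats relying on one modality.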

🔑KEY TAKEAWAY

Waymo’s self-driving cars have shown a significant safety advantage over human drivers, with 81% fewer airbag deployment crashes, 72% fewer injury-causing crashes, and 57% fewer police-reported crashes in its core markets of Phoenix, AZ, and San Francisco, CA. The NHTSA estimates that 32% of injury crashes and 60% of property damage crashes involving human drivers go unreported, which makes Waymo’s relative safety record look even stronger. Of nearly 200 Waymo accidents, only 20 resulted in injuries, and just one was serious; it was caused by a human driver fleeing the police.

Waymo’s safety record has held up alongside its rapid growth, with weekly ridership increasing from 10,000 to over 150,000 and plans to expand to Atlanta, GA, in 2025. However, the findings are limited to specific cities and don’t extend to other companies providing autonomous ride-hailing services. Nevertheless, Waymo’s self-driving cars may be the most jaw-dropping use of AI so far.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❀TAIP Review of The Week

“I LOVE the actionable content!”

-Logan (1ïžâƒŁ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving ⚙Ultimate Prompt Engineering Guide.

Refer 3 friends to learn how to đŸ‘·â€â™€ïžBuild Custom Versions of OpenAI’s ChatGPT.
