
🧠 Self-Driving: Hype or Imminent Reality?

PLUS: How Does Self-Driving Work? Sherlock Holmes Helps Explain.

Welcome back, AI prodigies!

In today's Sunday Special:

  • šŸThe Six Stages of Self-Driving

  • šŸš—Is Self-Driving Worthwhile?

  • šŸ•µļøā€ā™‚ļøVisualizing a Self-Driving Car as Sherlock Holmes

Read Time: 6 minutes

🎓Key Terms

  • Deep Learning: a technique that borrows principles and practices from the human brain to make predictions. It involves creating artificial neural networks that process data through multiple layers of computation. These networks learn from vast datasets by adjusting the connections (i.e., weights) between nodes to recognize complex patterns, enabling them to excel in tasks like image recognition.

  • Computer Vision: developing algorithms to enable machines to identify and analyze objects, shapes, colors, and motion in images and videos.

  • Actuators: components or devices that convert electrical, hydraulic, or pneumatic signals into mechanical movement. They're like the "muscles" of a self-driving car, responsible for executing specific actions, such as controlling the position of a vehicle's steering wheel.

  • Dead Reckoning: calculating the current position of a moving object by using a previously determined position or fix and incorporating estimates of speed, direction, and elapsed time.
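Dead reckoning is simple enough to sketch in a few lines of code. The snippet below is an illustrative toy, not real navigation software (the function name and flat 2-D setup are our own simplifications): it advances a previously known position using only speed, heading, and elapsed time.

```python
import math

def dead_reckon(x, y, speed, heading_deg, dt):
    """Estimate a new (x, y) position from a previous fix.

    speed in m/s, heading in degrees clockwise from north, dt in seconds.
    """
    heading = math.radians(heading_deg)
    # Heading 0 degrees points north (+y); 90 degrees points east (+x).
    return (x + speed * math.sin(heading) * dt,
            y + speed * math.cos(heading) * dt)

# From the origin, driving due east at 10 m/s for 5 s lands near (50, 0).
pos = dead_reckon(0.0, 0.0, speed=10.0, heading_deg=90.0, dt=5.0)
```

Each estimate feeds the next, so small errors compound over time, which is why real vehicles combine dead reckoning with GPS and other sensor data rather than relying on it alone.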

šŸTHE SIX STAGES OF SELF-DRIVING

Before we dive into the safety implications of autonomous driving, here are six specific stages of vehicle autonomy:

Level 0 (No Driving Automation)

Most drivers around the globe fully control their vehicles. While systems like emergency braking may assist the driver, Level 0 vehicles lack sustained automation features such as cruise control.

Level 1 (Driver Assistance)

Level 1 vehicles have one automated system for driver assistance, such as cruise control or lane assist. However, the human driver still manages braking and steering.

Level 2 (Partial Driving Automation)

At Level 2, vehicles have Advanced Driver Assistance Systems (ADAS) capable of controlling steering and acceleration. However, human drivers must remain ready to take control at any moment. Most new luxury vehicles include these features in a premium package. Although Tesla's "Full Self-Driving" (FSD) has Level 3 elements, it's technically Level 2. As impressive as driving from San Francisco to Los Angeles with only a few human interventions is, Tesla's system intentionally lacks redundancy: the human driver is the backup if the autonomous system fails, which shields Tesla from liability.

Level 3 (Conditional Driving Automation)

Level 3 is a significant technological leap: vehicles possess environmental detection capabilities and can make informed decisions, like overtaking slow vehicles. Nonetheless, the human driver must remain ready to take over. Mercedes debuted Level 3 in select models via its DrivePilot feature earlier this year, but it's only approved in Nevada and restricts vehicles to 40 miles per hour (mph). Though Nevadan Mercedes drivers can legally take their eyes off the road, their vehicles' environmental detection capabilities are inferior to Tesla's Autopilot, which is trained on a far more robust dataset, performs in urban settings, and activates at any speed.

Level 4 (High Driving Automation)

Level 4 vehicles can intervene in system failures or emergencies, operating without human interaction in most situations. However, humans can still manually override if necessary. These vehicles are often limited to specific geofenced areas, typically urban environments with lower speeds, and are primarily used in ridesharing services. For instance, Cruise has been testing fully driverless cars in San Francisco since October 2020. Other companies like NAVYA, Waymo, Magna, Volvo, and Baidu are also actively involved in Level 4 testing.

Level 5 (Full Driving Automation)

Level 5 vehicles are completely autonomous, requiring no human intervention. They lack steering wheels and pedals and can operate without geofencing restrictions. These fully autonomous cars are currently in preliminary testing but are unavailable to the public. Many experts doubt their feasibility in an unstandardized driving environment made for humans.
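For readers who think in code, the six stages boil down to one question: who is responsible for watching the road? A toy summary (the function and dictionary names are our own; the level names follow the SAE J3016 taxonomy the industry uses):

```python
# The six SAE J3016 driving-automation levels described above.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def human_must_supervise(level: int) -> bool:
    """Through Level 2, the human must watch the road at all times;
    from Level 3 up, the system itself monitors the environment."""
    return level <= 2

# Tesla's FSD is technically Level 2, so the driver must still supervise.
fsd_supervised = human_must_supervise(2)
```

The dividing line between Levels 2 and 3 is exactly the legal and technical leap described above: below it, automation merely assists; above it, the system takes responsibility for perceiving its surroundings.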

🚗IS SELF-DRIVING WORTHWHILE?

Let's give the skeptics their red meat: suppose Level 5 vehicles never become a reality. Even in this scenario, vehicles equipped with high-tech visual tools (e.g., cameras, radar, and lidar) and over-the-air software updates (in pursuit of self-driving) are far safer than their traditional counterparts. First, data scientists can analyze visual data to determine the most likely causes of accidents and engineer safety features accordingly. Second, over-the-air updates to safety-critical systems such as braking, pioneered by Tesla in 2012, give existing owners new features without buying the latest model. In fact, according to the National Highway Traffic Safety Administration (NHTSA), the safest mass-market vehicles ever tested are Tesla's Model S, Model 3, and Model X, all boasting "vision" tools and over-the-air safety updates.

In the alternate scenario, where self-driving does arrive, would we be safer? Like any great question, the answer is complicated. It's indisputable that Level 1 and Level 2 vehicles are safer than Level 0 vehicles. Further, it's highly likely that tools like Mercedes DrivePilot and Tesla's Autopilot, when used with human supervision, lower the probability of an accident even further. Beyond that, however, the evidence gets murky. The lack of clarity stems from the fact that comparing Level 3 tools, Level 2 tools with human assistance, Level 1 tools with human intervention, and Level 0 features requires controlling for dozens of confounding variables. Driving-environment factors such as road surface, weather, amount of light, type of sensors (e.g., cameras, radar, and lidar), and extraneous obstacles (e.g., debris, cones, and cyclists) complicate the picture. Though unverified data from autonomous leaders like Tesla indicates significant strides in the safety of FSD-enabled vehicles (i.e., advanced Level 2), the truth remains muddy [see slide 77 of this report for Tesla's statistics].

🕵️‍♂️VISUALIZING A SELF-DRIVING CAR AS SHERLOCK HOLMES

Regardless of one's opinion on self-driving, it's top of mind for most leading automakers and, therefore, worth understanding. At a high level, autonomous cars mimic every step of human driving. We view the driving landscape (e.g., positions of other cars, traffic signals, or pedestrians), analyze our location, speed, and direction relative to every relevant object, and implement the right combination of acceleration, braking, and steering to get closer to our destination. In other words, we take visual inputs, process them, and make decisions. Autonomous driving computerizes the human driving process through four distinct technologies: deep learning, computer vision, robotics, and dead reckoning. Each discipline is an intellectual heavyweight, so we find the following analogy helpful for deconstructing the pillars of self-driving.

  1. Deep Learning (i.e., Sherlock's Mind): Deep learning is like Sherlock's mind, a vast repository of knowledge and experiences. The self-driving car's artificial neural network replicates the fundamental processes of the human brain. It's a complex web of interconnected neurons that remembers millions of patterns and details, using layers of neurons to process data. In last week's Sunday Special featuring facial recognition, we explained how deep learning and neural networks function.

  2. Computer Vision (i.e., Sherlock's Sharp Senses): Computer vision is like Sherlock's heightened senses. The car's cameras, radar, and lidar (i.e., laser) sensors act as his superhuman eyes and ears, keenly observing the world. They feed this information to the neural network, which analyzes it meticulously, just as Sherlock processes sensory information to draw conclusions.

  3. Robotics (i.e., Watson, Sherlock's Loyal Assistant): Robotics within self-driving cars resembles Dr. John Watson, Sherlock's loyal partner. The car's actuators play Watson's role: supportive, dependable companions that execute precise control commands, translating the neural network's deductions into actions, just as Watson assists Sherlock in his investigative work.

  4. Dead Reckoning (i.e., Sherlock's Deductive Reasoning): Dead reckoning aligns with Sherlock's legendary deductive reasoning skills. It estimates the car's position from past data, direction, and distance traveled, much like Sherlock Holmes deduces a criminal's whereabouts from a trail of clues.

The car's operation is reminiscent of Sherlock's detective work, combining knowledge, keen observation, a trusted assistant, and deductive reasoning to navigate urban and highway driving. Autonomous systems will improve as equipped cars continue to collect inconceivably large volumes of data. However, edge cases, or previously unseen driving scenarios, are often tricky for autonomous systems to handle. To realize their utopian dream, self-driving proponents must overcome technical limitations, public resistance, and regulatory hurdles.
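The four pillars slot into a single sense → process → act loop that repeats many times per second. Here's a deliberately toy sketch of one tick (every function below is a hypothetical stand-in of our own invention, not real autonomous-vehicle software):

```python
from dataclasses import dataclass

@dataclass
class State:
    position: float  # metres along the route (1-D for simplicity)
    speed: float     # m/s

def perceive(lidar_range):
    # Computer vision (Sherlock's senses): interpret raw sensor data.
    return {"obstacle_ahead": lidar_range < 20.0}

def plan(scene):
    # Deep learning (Sherlock's mind): a trivial rule stands in here
    # for the neural network's decision-making.
    return "brake" if scene["obstacle_ahead"] else "cruise"

def drive_step(state, lidar_range, dt=0.1):
    command = plan(perceive(lidar_range))
    # Robotics (Watson): the "actuator" applies the command to the car.
    if command == "brake":
        new_speed = max(0.0, state.speed - 5.0 * dt)
    else:
        new_speed = state.speed
    # Dead reckoning (deduction): advance the position estimate
    # from speed and elapsed time.
    return State(state.position + new_speed * dt, new_speed), command

# Clear road ahead: the car cruises, and in 0.1 s at 10 m/s the
# position estimate advances by 1 m.
state, command = drive_step(State(position=0.0, speed=10.0), lidar_range=50.0)
```

Real systems split each stage across specialized hardware and far richer models, but the loop structure is the same: sensors in, decisions in the middle, actuators out, with dead reckoning keeping the position estimate current between GPS fixes.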

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today's newsletter?

ā¤ļøAI Pulse Review of The Week

"Nice balance between news, tips, and use cases."

-Joey (⭐️⭐️⭐️⭐️⭐️Nailed it!)

šŸŽNOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • āš™ļø150 ChatGPT prompts for Copywriting

  • āš™ļø325 ChatGPT prompts for Email Marketing

  • šŸ“†Simple Project Management Board

  • ā±Time Tracker
