🚧 Self-Driving: Hype or Imminent Reality?
PLUS: How Does Self-Driving Work? Sherlock Holmes Helps Explain.
Welcome back, AI prodigies!
In today's Sunday Special:
The Six Stages of Self-Driving
Is Self-Driving Worthwhile?
🕵️‍♂️ Visualizing a Self-Driving Car as Sherlock Holmes
Read Time: 6 minutes
Key Terms
Deep Learning: a technique that borrows principles and practices from the human brain to make predictions. It involves creating artificial neural networks that process data through multiple layers of computation. These networks learn from vast datasets by adjusting the connections (i.e., weights) between nodes to recognize complex patterns, enabling them to excel in tasks like image recognition.
Computer Vision: developing algorithms to enable machines to identify and analyze objects, shapes, colors, and motion in images and videos.
Actuators: components or devices that convert electrical, hydraulic, or pneumatic signals into mechanical movement. They're like the "muscles" of a self-driving car, responsible for executing specific actions, such as controlling the position of a vehicle's steering wheel.
Dead Reckoning: calculating the current position of a moving object by using a previously determined position or fix and incorporating estimates of speed, direction, and elapsed time.
THE SIX STAGES OF SELF-DRIVING
Before we dive into the safety implications of autonomous driving, here are six specific stages of vehicle autonomy:
Level 0 (No Driving Automation)
Most drivers around the globe fully control their vehicles. While systems like emergency braking may assist the driver, they don't actually drive the car, so Level 0 vehicles lack sustained automation, even basic cruise control.
Level 1 (Driver Assistance)
Level 1 vehicles have one automated system for driver assistance, such as cruise control or lane assist. However, the human driver still manages braking and steering.
Level 2 (Partial Driving Automation)
At Level 2, vehicles have Advanced Driver Assistance Systems (ADAS) capable of controlling steering and acceleration. However, human drivers must remain ready to take control at any moment. Most new luxury vehicles include these features in a premium package. Although Tesla's "Full Self-Driving" (FSD) has Level 3 elements, it's technically Level 2. As impressive as driving from San Francisco to Los Angeles with just a few human interventions is, Tesla's system intentionally lacks redundancy: the human driver is the backup if the autonomous system fails, a design that shields Tesla from liability.
Level 3 (Conditional Driving Automation)
Level 3 is a significant technological leap where vehicles possess environmental detection capabilities to make informed decisions, like overtaking slow vehicles. Nonetheless, a human must remain available to override the system. Mercedes debuted Level 3 in select models via its Drive Pilot feature earlier this year, but it's only approved in Nevada and restricts vehicles to 40 miles per hour (mph). Though Nevadan Mercedes drivers can legally take their eyes off the road, their vehicles' environmental detection capabilities are inferior to Tesla's Autopilot, which is trained on a far more robust dataset, performs in urban settings, and activates at any speed.
Level 4 (High Driving Automation)
Level 4 vehicles can handle system failures or emergencies on their own, operating without human interaction in most situations. However, humans can still manually override if necessary. These vehicles are often limited to specific geofenced areas, typically urban environments with lower speeds, and are primarily used in ridesharing services. For instance, Cruise has been testing fully driverless cars in San Francisco since October 2020. Other companies like NAVYA, Waymo, Magna, Volvo, and Baidu are also actively involved in Level 4 testing.
Level 5 (Full Driving Automation)
Level 5 vehicles are completely autonomous, requiring no human intervention. They lack steering wheels and pedals and can operate without geofencing restrictions. These fully autonomous cars are currently in preliminary testing but are unavailable to the public. Many experts doubt their feasibility in an unstandardized driving environment made for humans.
IS SELF-DRIVING WORTHWHILE?
Let's give the skeptics their red meat: suppose Level 5 vehicles never become a reality. Even in this scenario, vehicles equipped with high-tech visual tools (e.g., cameras, radar, and lidar) and over-the-air software updates (in pursuit of self-driving) are far safer than their traditional counterparts. First, data scientists can analyze visual data to determine the most likely causes of accidents and engineer safety features accordingly. Second, over-the-air updates on safety-critical systems such as steering and braking, pioneered by Tesla in 2012, equip existing car owners with new features without requiring them to buy the latest model. In fact, according to the National Highway Traffic Safety Administration (NHTSA), the safest mass-market vehicles ever tested are Tesla's Model S, Model 3, and Model X, all boasting "vision" tools and over-the-air safety updates.
In the alternate self-driving scenario, would we be safer? Like any great question, the answer is complicated. It's indisputable that Level 1 and Level 2 vehicles are safer than Level 0 vehicles. Further, it's highly likely that tools like Mercedes' Drive Pilot and Tesla's Autopilot, when used with human intervention, lower the probability of an accident even further. Beyond that, clarity fades: comparing Level 3 tools, Level 2 tools with human assistance, Level 1 tools with human intervention, and Level 0 features requires controlling for dozens of confounding variables. Driving environment factors such as road surface, weather, amount of light, type of sensors (e.g., cameras, radar, and lidar), and extraneous obstacles (e.g., debris, cones, and cyclists) complicate the picture. Though unverified data from autonomous leaders like Tesla indicates significant strides in the safety of FSD-enabled vehicles (i.e., advanced Level 2), the truth remains muddy [see slide 77 of this report for Tesla's statistics].
🕵️‍♂️ VISUALIZING A SELF-DRIVING CAR AS SHERLOCK HOLMES
Regardless of one's opinion on self-driving, it's top of mind for most leading automakers and, therefore, worth understanding. At a high level, autonomous cars mimic every step of human driving. We view the driving landscape (e.g., positions of other cars, traffic signals, or pedestrians), analyze our location, speed, and direction relative to every relevant object, and implement the right combination of acceleration, braking, and steering to get closer to our destination. In other words, we take visual inputs, process them, and make decisions. Autonomous driving computerizes the human driving process through four distinct technologies: deep learning, computer vision, robotics, and dead reckoning. Each discipline is an intellectual heavyweight, so we find the following analogy helpful for deconstructing the pillars of self-driving.
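Before meeting each pillar one by one, it helps to see how they fit together. Below is a minimal Python sketch of the perceive-plan-act loop described above. Every name in it (ToyPlanner, Decision, drive_step) is a hypothetical placeholder for illustration, not any automaker's actual software.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    steering: float  # radians; positive = steer left (hypothetical convention)
    throttle: float  # 0.0 (coast) to 1.0 (full acceleration)
    brake: float     # 0.0 (released) to 1.0 (full braking)

class ToyPlanner:
    """Toy stand-in for the perception + decision stack described below."""

    def perceive(self, frame):
        # A real stack would run neural networks here (see the next sketch).
        return {"lane_offset": frame["lane_offset"], "obstacle": frame["obstacle"]}

    def decide(self, scene):
        if scene["obstacle"]:
            return Decision(steering=0.0, throttle=0.0, brake=1.0)
        # Steer proportionally back toward the lane center.
        return Decision(steering=-0.5 * scene["lane_offset"], throttle=0.3, brake=0.0)

def drive_step(frame, planner, apply_controls):
    """One pass of the simplified perceive -> plan -> act cycle."""
    scene = planner.perceive(frame)    # computer vision + deep learning
    decision = planner.decide(scene)   # pick acceleration, braking, steering
    apply_controls(decision)           # robotics: actuators execute the decision

# Usage: one simulated cycle with a fake camera frame.
drive_step({"lane_offset": 0.4, "obstacle": False}, ToyPlanner(), print)
```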
Deep Learning (i.e., Sherlock's Mind): Deep learning is like Sherlock's mind, a vast repository of knowledge and experiences. The self-driving car's artificial neural network replicates the fundamental processes of the human brain. It's a complex web of interconnected neurons that remembers millions of patterns and details, using layers of neurons to process data. In last week's Sunday Special featuring facial recognition, we explained how deep learning and neural networks function.
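For a rough feel of what "layers of neurons" and "weights" mean in practice, here's a tiny NumPy sketch of one forward pass. The sizes and random weights are invented for illustration; a real driving network is vastly larger, and its weights are learned from data, not hand-set.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image" flattened into 12 numbers (real inputs are millions of pixels).
pixels = rng.random(12)

# Randomly initialized weights; training would adjust these to recognize patterns.
w_hidden = rng.standard_normal((12, 8))  # input layer -> 8 hidden neurons
w_out = rng.standard_normal((8, 3))      # hidden layer -> 3 object classes

def relu(x):
    return np.maximum(0.0, x)  # a common nonlinearity between layers

hidden = relu(pixels @ w_hidden)  # each hidden neuron weighs every input
scores = hidden @ w_out           # e.g., "car", "pedestrian", "sign"
probs = np.exp(scores) / np.exp(scores).sum()  # softmax: scores -> probabilities
print(probs)
```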
Computer Vision (i.e., Sherlock's Sharp Senses): Computer vision is like Sherlock's heightened senses. The car's cameras, radar, and lidar (i.e., laser) sensors act as his superhuman eyes and ears, keenly observing the world. They feed this information to the neural network, which analyzes it meticulously, just as Sherlock processes sensory information to draw conclusions.
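To give the "sharp senses" step some texture, here's a small OpenCV sketch that picks out line segments (a crude stand-in for lane markings) from a synthetic image. Real perception stacks fuse camera, radar, and lidar through learned models; classic edge detection appears here only because it fits in a few lines.

```python
import cv2
import numpy as np

# Synthetic road image: a dark frame with two bright "lane lines".
frame = np.zeros((200, 300), dtype=np.uint8)
cv2.line(frame, (60, 200), (130, 0), 255, 3)   # left lane marking
cv2.line(frame, (240, 200), (170, 0), 255, 3)  # right lane marking

edges = cv2.Canny(frame, 50, 150)  # find intensity edges
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=50, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"segment from ({x1}, {y1}) to ({x2}, {y2})")
```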
Robotics (i.e., Watson, Sherlock's Loyal Assistant): Robotics within self-driving cars resembles Dr. John Watson, Sherlock's loyal partner. The car's robotic actuators act as supportive and dependable companions, executing precise control commands and translating the neural network's deductions into actions, just as Watson assists Sherlock in his investigative work.
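Much of actuation is careful bookkeeping: take the planner's requested steering angle, respect the hardware's physical limits, and emit a low-level command. Here's a hypothetical sketch; the 30-degree limit and the command format are invented for illustration.

```python
from dataclasses import dataclass

MAX_STEER_RAD = 0.52  # ~30 degrees; an assumed physical steering limit

@dataclass
class SteeringCommand:
    angle_rad: float  # target road-wheel angle the actuator should reach

def to_steering_command(requested_angle_rad: float) -> SteeringCommand:
    """Clamp the planner's request to what the hardware can physically do."""
    clamped = max(-MAX_STEER_RAD, min(MAX_STEER_RAD, requested_angle_rad))
    return SteeringCommand(angle_rad=clamped)

# Usage: an overly aggressive request gets clamped to the limit.
print(to_steering_command(0.9))  # SteeringCommand(angle_rad=0.52)
```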
Dead Reckoning (i.e., Sherlock's Deductive Reasoning): Dead reckoning aligns with Sherlock's legendary deductive reasoning skills. It estimates the car's position from past data, direction, and distance traveled, much like Sherlock deduces a criminal's whereabouts from a trail of clues.
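Dead reckoning boils down to simple trigonometry: start from the last known position and add the distance traveled in the direction of travel. A minimal sketch, assuming a flat two-dimensional world with positions in meters and heading in radians:

```python
import math

def dead_reckon(x, y, heading_rad, speed_mps, dt_s):
    """Estimate the new (x, y) from the last fix plus speed, direction, and time."""
    distance = speed_mps * dt_s
    return (x + distance * math.cos(heading_rad),
            y + distance * math.sin(heading_rad))

# Usage: 2 seconds at 10 m/s heading due "east" (heading 0) from the origin.
print(dead_reckon(0.0, 0.0, 0.0, 10.0, 2.0))  # (20.0, 0.0)
```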
The car's operation is reminiscent of Sherlock's detective work, combining knowledge, keen observation, a trusted assistant, and deductive reasoning to navigate urban and highway driving. Autonomous systems will improve as equipped cars continue to collect inconceivably large volumes of data. However, edge cases, or previously unseen driving scenarios, are often tricky for autonomous systems to handle. To realize their utopian dream, self-driving proponents must overcome technical limitations, public resistance, and regulatory hurdles.
FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today's newsletter?
❤️ AI Pulse Review of the Week
"Nice balance between news, tips, and use cases."
NOTION TEMPLATES
🚨 Subscribe to our newsletter for free and receive these powerful Notion templates:
150 ChatGPT prompts for Copywriting
325 ChatGPT prompts for Email Marketing
Simple Project Management Board
⏱ Time Tracker