The case against ‘smart cars’: Why we’re better off without them

  • Understanding of the technology across industry and government is shockingly thin, and knowledge of AI-equipped cars needs to become far more widespread.
  • AI failure modes are difficult to predict, and no one can guarantee that such a machine will never make mistakes.
  • Hackers have many ways to cause a crash: malware, man-in-the-middle attacks, denial-of-service attacks, and ransomware.
  • Regulation of automotive artificial intelligence has not yet begun, and AI-equipped cars still have a long way to go.

AI technology is now widely used in self-driving cars, which combine it with sensors, cameras, radar, and more to travel between destinations without human operation. Companies developing and/or testing autonomous vehicles include Audi, BMW, Ford, Google, General Motors, Tesla, Volkswagen, and Volvo. Great! Drivers can take their hands off the wheel and just go along for the ride! But is that really the case?

The idea that AI can be a risk as well as a benefit has long been debated, and I think this is especially true of cars and other vehicles.

AI does provide significant advantages, removing human error from the driving equation. But what about when it goes wrong?

Here are four reasons why I think AI in cars is not worth the trouble:

  • A false sense of security that distracts drivers
  • Poor AI judgment
  • Hacking attacks
  • A lack of regulation

Please fasten your seatbelt: the car without AI is about to leave!


Concern #1: A false sense of security

Mary (Missy) L. Cummings, IEEE Senior Fellow and professor at the Duke Institute for Brain Sciences (DIBS), urged the U.S. Senate Committee on Commerce, Science, and Transportation in 2016 to regulate the use of artificial intelligence in automobiles. But neither her pleas nor the death of Joshua Brown, the Tesla driver killed that year while his car was on Autopilot, could prompt the government to act. The lack of understanding of the technology across industry and government is shocking.

The question we have to consider is this: if self-driving cars become widespread, human error behind the wheel is simply replaced by human error in the code. Is that really an escape from the road killer?

Proponents of self-driving cars often assert that the sooner we get rid of drivers, the safer we’ll be on the road. Citing statistics from the National Highway Traffic Safety Administration, they claim that 94 percent of accidents are caused by human drivers. But this statistic has been taken out of context and is inaccurate. Moreover, claims that self-driving cars will be safer than human-driven cars ignore what anyone who has ever worked in software development knows all too well: software code is very error-prone, and the problem will only get worse as systems become more complex.

Consider the October 2021 crash of a Pony.ai driverless car into a sign, the April 2022 crash of a TuSimple tractor-trailer into a concrete barrier, the June 2022 crash of a Cruise robotaxi that suddenly stopped while making a left turn, and the March 2023 crash of another Cruise car that rear-ended a bus.


Concern #2: The AI system activates when it shouldn’t

One scenario we have to consider is the AI system activating when it shouldn’t (braking abruptly because it detects the “shadow” of an obstacle ahead), or failing to respond when it should. What situation would that leave you in?

Admittedly, AI failure modes are hard to predict, and no one can guarantee that such a machine will never make mistakes.

An LLM guesses which words and phrases will appear next by referring to an archive of patterns gathered from existing data during training. An autonomous driving module interprets the scene based on a database of labeled images (here’s a car, this is a pedestrian, this is a tree), also provided during training, and makes similar guesses to decide how to navigate around obstacles. But not every possibility can be modeled, so many failure modes are very difficult to predict.
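To make that “guessing” concrete, here is a minimal, hypothetical Python sketch of next-word prediction from counted training data. Real LLMs use neural networks with billions of parameters rather than a lookup table, but the principle is the same: predict the likeliest continuation, with no guarantee it is right, and no answer at all for situations never seen in training.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for a real corpus (made up for illustration).
corpus = "the car stops the car turns the car stops the pedestrian crosses".split()

# Count which word follows each word: a crude stand-in for learned weights.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("car"))   # 'stops' -- the model bets on the common case
print(guess_next("deer"))  # None -- never seen in training, so no reliable guess
```

The second call is the point: a deer on the road is exactly the kind of input the training archive may not cover, and the model has no principled way to handle it.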

One failure mode that was not previously anticipated is phantom braking: for no apparent reason, an autonomous vehicle brakes suddenly, potentially causing a rear-end collision with the vehicle behind it and other vehicles further back. In May 2022, NHTSA sent a letter to Tesla noting that the agency had received 758 complaints about phantom braking in Model 3 and Model Y vehicles. The German newspaper Handelsblatt has separately reported 1,500 complaints about braking problems in Tesla cars and 2,400 about sudden acceleration. It now appears that the rear-end collision rate for self-driving cars is about twice that of human-driven cars.

IBM’s AI-based Watson, a precursor to today’s LLMs, was good at guessing but had no real knowledge, especially when it came to making judgments under uncertainty and deciding on actions based on incomplete information. Today’s LLMs are no exception: the underlying models simply cannot cope with missing information, nor can they assess whether their estimates are good enough in a given context.

These problems are common in the field of autonomous driving. The June 2022 crash involved a Cruise robotaxi that decided to make a left turn between two oncoming vehicles. As detailed in an accident report by auto safety expert Michael Woon, the car initially took a viable path, but in the middle of the turn it slammed on its brakes and came to a stop in the middle of the intersection. It had guessed that an oncoming car in the right lane would turn, even though turning was physically impossible at the speed that car was traveling. The uncertainty confused the Cruise vehicle into making the worst possible decision: the oncoming Prius did not turn and ran straight into the Cruise, injuring occupants of both vehicles.



Pop quiz

What is the optimal operating temperature for a lithium battery?

A. 30 degrees Celsius

B. 50 degrees Celsius

C. 60 degrees Celsius

D. 70 degrees Celsius

The answer is at the bottom of this article.


Concern #3: Hacking attacks

In the era of AI-driven innovation, the advent of self-driving cars not only brings promises of enhanced mobility but also raises critical ethical concerns.

Mr. Ratan Bajaj, founder and CEO of MindWell AI

As early as 2019, a new Tesla Model 3 was hacked in just a few minutes. Hackers Amat Cama and Richard Zhu exploited weaknesses in the infotainment system to gain access to one of the car’s computers.

Once in, Cama and Zhu were able to run their own lines of code. You can see them demonstrating the attack on video.

Here are a few earlier examples to show that I’m not talking nonsense.

Malware attacks

In 2011, a Chevrolet Malibu became the first vehicle that attackers were able to control through remote intrusion. Hackers “exploited a weakness in the Bluetooth stack to manipulate the vehicle’s radio and insert malware code by syncing the phone with the radio” (Attacks on Self-Driving Cars and Their Countermeasures: A Survey). Once inserted, the code could send messages to the car’s ECU and lock the brakes.
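To see why injected code is so dangerous once it can talk to the bus, here is a minimal, hypothetical sketch using the python-can library. Classic CAN has no sender authentication, so any node can emit a frame with any identifier; the arbitration ID and payload below are made up purely for illustration, and the example targets a Linux virtual CAN interface rather than a real car.

```python
import can

# Requires a virtual CAN device first, e.g.:  ip link add dev vcan0 type vcan
bus = can.interface.Bus(channel="vcan0", interface="socketcan")

# Hypothetical frame: pretend ID 0x1A0 belongs to the brake controller.
spoofed = can.Message(
    arbitration_id=0x1A0,           # made-up identifier
    data=[0xFF, 0x00, 0x00, 0x00],  # made-up payload
    is_extended_id=False,
)

# Receivers on a classic CAN bus cannot tell this frame from a legitimate one.
bus.send(spoofed)
```

This is why a foothold in the infotainment system matters so much: the bus itself trusts whatever reaches it.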

Man-in-the-middle (MiTM) attacks

In a man-in-the-middle attack, a hacker can manipulate communications between two entities and gain control of an ECU or infrastructure roadside unit (RSU) by eavesdropping on, replaying, and modifying the messages sent between them.
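The standard countermeasure is to authenticate every message with a keyed hash so that modifications are detected. The sketch below is a minimal illustration, assuming a pre-shared key between the vehicle and the RSU (the key and message format are invented for the example); real deployments would also need a counter or timestamp to stop replayed messages.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key-known-to-car-and-rsu"  # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(message), tag)

original = b"speed_limit=50"
tag = sign(original)

tampered = b"speed_limit=90"  # a man in the middle rewrites the message
print(verify(original, tag))  # True  -- authentic message accepted
print(verify(tampered, tag))  # False -- modification is detected
```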

Denial-of-service (DoS) attacks

A DoS attack is one of the most dangerous attacks that can happen to a self-driving car; it can result in serious accidents or death. Attackers can use DoS attacks to “prevent cameras, LiDAR, and radar from detecting objects, roads, and safety signs” (Attacks on Self-Driving Cars and Their Countermeasures: A Survey). A DoS attack can also knock out the braking system, causing the car to stop suddenly or not stop at all.

Ransomware attacks

This type of attack can be very dangerous for commercial vehicles. Back in 2017, Honda Motor Company was hit by the massive WannaCry ransomware attack, in which the attackers “demanded large amounts of cryptocurrency to provide decryption keys” (Attacks on Self-Driving Cars and Their Countermeasures: A Survey).

Although the attack was not aimed at self-driving cars, it still prevented many of Honda’s self-driving cars from getting software updates while it was under way. This type of attack is probably more common than the others because hackers have been very successful at executing ransomware attacks.

What makes self-driving cars vulnerable?

Self-driving cars are tempting targets for cybercriminals who may “attempt to steal a driver’s financial data or launch advanced terrorist attacks by turning the vehicle into a weapon.”
In addition to unintentional threats such as the sudden failure of an AI system, there are intentional attacks designed specifically to compromise its safety-critical functions: for example, painting road markings to mislead navigation systems, or putting stickers on stop signs to prevent them from being recognized.
Light Detection and Ranging (LiDAR), a laser-pulse ranging system, works alongside cameras as the “eyes” of an autonomous vehicle, feeding information about the driving scene and environment into a convolutional neural network (CNN) that makes decisions such as speed adjustments and steering corrections. Unfortunately, CNNs can be fooled by “adding tiny pixel-level changes to input images that are not visible to the naked eye,” a vulnerability that could allow bad actors to attack self-driving cars (The Lighthouse).
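As an illustration of how small those pixel-level changes can be, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft such perturbations. The model below is an untrained stand-in rather than a real perception network, and the “stop sign” class index is invented, but the mechanics are the same: nudge every pixel slightly in the direction that increases the model’s error.

```python
import torch
import torch.nn as nn

# Stand-in for a trained perception CNN (untrained, for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # toy "camera frame"
label = torch.tensor([3])                             # pretend class 3 = "stop sign"

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# FGSM step: epsilon = 2/255 is far too small to see with the naked eye.
epsilon = 2 / 255
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    before = nn.functional.softmax(model(image), dim=1)[0, 3].item()
    after = nn.functional.softmax(model(adversarial), dim=1)[0, 3].item()
print(f"'stop sign' confidence: {before:.3f} -> {after:.3f}")  # typically drops
```

Against a trained network, iterating this step can flip the predicted class entirely, which is what makes the sticker-on-a-stop-sign attacks possible.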

The on-board diagnostics (OBD) port is one of the most vulnerable parts of an autonomous vehicle: malware can be inserted into an electronic control unit (ECU) through it, and the inserted malware can tweak and reprogram the ECU. Infected ECUs may no longer be able to communicate with other on-board unit (OBU) components, such as LiDAR, cameras, and radar, which can compromise the safety of the vehicle.

In short, there are several ways to gain control of a self-driving car:

  • Remote access via the internet
  • Remote access via Bluetooth
  • A backdoor inserted through the vehicle manufacturer (a supply-chain attack)
  • A special device implanted in the vehicle
  • Interference with the vehicle’s sensors

Think about the following:

Hacking scenarios and unintended consequences: What if self-driving cars fall victim to a hack that causes them to lose control or, worse, crash?

Moral dilemmas: Who bears responsibility? The issue of liability looms large when self-driving cars are compromised.

Securing the future of autonomous driving: What measures are needed to protect autonomous vehicles from hacking threats, and what role should regulations play in enforcing stringent cybersecurity standards?

Human life, moral hazard: When we turn the steering wheel over to AI, the stakes are high, especially when it comes to life and safety issues. How can society ensure that ethical considerations guide the development of self-driving car technology to prevent hacking-related accidents?

Concern #4: Lack of regulation

AI has system-level implications that cannot be ignored.
Self-driving cars rely on wireless connectivity to stay aware of the road, but what happens when that connection drops? One driver found his car trapped among 20 Cruise vehicles that had lost contact with the remote operations center, causing a major traffic jam.

The Cruise vehicles that caused the major traffic jam


Attitudes toward self-driving cars in tech-friendly San Francisco, once optimistic, have turned negative as the city experiences a plethora of issues. If a stalled or hijacked self-driving car ever results in the death of someone who can’t get to the hospital in time, that sentiment could lead the public to reject the technology altogether.

So what does the self-driving car experience mean for regulating AI more broadly? Companies not only need to understand the broader system-level implications of AI; they also need oversight, and they shouldn’t be left to police themselves.

AI still has a long way to go in cars and trucks. I’m not calling for a ban on self-driving cars; there are clear advantages to using AI, and it would be irresponsible to call for a ban or even a moratorium on its use. But we need more government oversight to prevent unnecessary risk-taking.

However, regulation of AI in cars has yet to begin. This can be attributed partly to intense pressure from the industry and partly to a lack of capacity on the part of regulators. The European Union has been more proactive in regulating artificial intelligence, especially in self-driving cars.


The correct answer to the pop quiz is A. 30 degrees Celsius.


Fei Wang

Fei Wang is a journalist with BTW Media, specialising in Internet governance and IT infrastructure, with a focus on interviewing leaders in the technology industry. Fei holds a Master of Science degree from the University of Edinburgh. Have a tip? Reach out at f.wang@btw.media.
