Are self driving cars a good or bad idea?

  • Self-driving cars can coordinate with each other to maintain safe following distances and optimise merging and lane-changing maneuvers. This coordination can help prevent unnecessary braking and acceleration, reducing fuel consumption and emissions.
  • As algorithms grow more complex and autonomy levels rise, safety has become a major obstacle to the real-world deployment of autonomous vehicles.
  • Current discussions about the ethical dilemmas of autonomous vehicles are primarily related to the well-known trolley problem in moral philosophy.

Autonomous vehicles have great potential to become a primary mode of transportation in the near future. However, science and technology are often double-edged swords, and autonomous vehicles are no exception: alongside their benefits, the technology raises several unresolved issues.

Advantages

Self-driving cars can coordinate with each other to maintain safe following distances and optimise merging and lane-changing maneuvers. This coordination can help prevent unnecessary braking and acceleration, reducing fuel consumption and emissions.

Additionally, autonomous vehicles can leverage vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication systems to gather information about traffic conditions, road construction, and other relevant factors. By utilising this information, self-driving cars can adjust their routes and speeds accordingly, further enhancing efficiency. Autonomous vehicles are equipped with advanced sensors such as cameras, LiDAR, and radar, allowing them to perceive their surroundings with precision and react to potential hazards in real-time, further enhancing safety. 
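To make the coordination idea concrete, here is a minimal, purely illustrative sketch of how a car might ease off the accelerator using gap and speed information shared over a V2V link. All function names, thresholds, and the time-headway rule are hypothetical simplifications, not a real automotive API or control law.

```python
# Illustrative sketch only: a toy model of how a self-driving car might
# adjust its speed using data shared over V2V/V2I links.
# All names and thresholds here are hypothetical assumptions.

def safe_following_gap(speed_mps: float, reaction_time_s: float = 1.5) -> float:
    """Minimum gap (metres) at a given speed, using a simple time-headway rule."""
    return speed_mps * reaction_time_s

def adjust_speed(current_mps: float, gap_m: float, lead_speed_mps: float) -> float:
    """Ease off when the gap to the lead vehicle is below the safe threshold,
    otherwise hold the current speed."""
    if gap_m < safe_following_gap(current_mps):
        # Gradual slowdown (capped at the lead vehicle's speed) avoids the
        # hard braking that wastes fuel and ripples back through traffic.
        return min(current_mps * 0.9, lead_speed_mps)
    return current_mps

# Example: travelling at 30 m/s with a 40 m gap to a vehicle doing 25 m/s.
# The safe gap at 30 m/s is 45 m, so the car eases off to 25 m/s.
print(adjust_speed(30.0, 40.0, 25.0))
```

The point of the sketch is only that shared state (gap and lead-vehicle speed) lets the following car respond smoothly instead of reacting late and braking hard.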


Adapting to safety standards

As algorithms grow more complex and autonomy levels rise, safety has become a major obstacle to the real-world deployment of autonomous vehicles. In 2011, ISO 26262 emerged as a dedicated functional safety standard for the automotive industry, providing automobile manufacturers with a systematic design methodology for identifying hazards and improving vehicle safety. However, the latest version of ISO 26262 is not fully applicable to the functional safety of autonomous-driving controllers, and research on how to extend the standard to autonomous driving is only gradually getting under way; no clear standard or draft has yet been issued.


Standardising ethical issues

Current discussions about the ethical dilemmas of autonomous vehicles centre on the well-known trolley problem in moral philosophy. This thought experiment was first proposed by the philosopher Philippa Foot, who asked whether you should switch a runaway trolley onto a track where it will hit one person, or let it continue on its path and hit five.

Over the past few decades, variants of this dilemma have been developed for many situations. If an autonomous vehicle faces an unavoidable traffic accident, which party should the AI system choose to collide with?

According to a report published in the journal Nature in 2015 by Mitchell Waldrop, most people are unwilling to relinquish decision-making power to machines in the event of an inevitable accident. Without clear ethical guidelines for how autonomous vehicles should decide in accidents, it will be difficult to overcome users' current lack of trust, and people may even refuse to purchase autonomous vehicles. Defining ethical standards for autonomous vehicles is a pressing issue, but theoretical research is still in its early stages.

Legal liability

In May 2016, a Tesla electric car in Florida, United States, operating in Autopilot mode collided with a truck that suddenly crossed the road, killing the car's owner.

This accident became the first publicly reported traffic fatality worldwide involving autonomous driving functions. After a thorough and prolonged investigation, the National Highway Traffic Safety Administration (NHTSA) released its final report in January 2017, concluding that the Autopilot system was not directly responsible for the accident and that Tesla therefore bore no legal liability. The incident highlighted how complex it is to assign legal liability in traffic accidents involving autonomous vehicles.

Once fully autonomous vehicles enter commercial operation, any traffic accidents that occur will pose severe challenges to current laws and regulations, which must untangle multiple legal relationships among designers, manufacturers, and users.

Miurio Huang

Miurio Huang is an intern news reporter at Blue Tech Wave media specialised in AI. She graduated from Jiangxi Science and Technology Normal University. Send tips to m.huang@btw.media.
