Whether we like it or not, artificial intelligence is already a part of our day-to-day reality. From Siri on our smartphones to robots in the operating room, automated assistance is being employed everywhere to make our lives more convenient and efficient.
One of this decade’s most exciting prospects is the driverless car. Car manufacturers have already integrated automation into steering, acceleration, parking and lane adherence, and, with the race on to create an affordable driverless car for the masses, it will not be long before fully autonomous cars roam the streets.
The development of these autonomous vehicles has been broken down into five stages:
- Some automation provided for steering and acceleration;
- Incorporation of more automation for cruise control and lane adherence;
- Full automation provided in controlled environments;
- Automation provided in most situations save for a select few circumstances;
- Automation employed completely with its performance matching or exceeding the decision-making ability of human drivers.
Currently, we find ourselves in the third stage, where the vast majority of our cars are driven by drivers who may, at their discretion, choose to use automated assistance in cars made by companies including Tesla, BMW, Infiniti and Mercedes-Benz. Our advancement beyond this stage, however, may be inhibited by the debate over the ability of robots to effectively replace humans in situations requiring ethical judgment. A dilemma might be presented, hypothetically, if the software must choose between running into one newborn baby or swerving to hit two people with one day left to live. Although this is an extreme case, it would be imperative that the software be designed to accommodate such circumstances in the unlikely event they should ever arise. Such ethical dilemmas filter down into decisions that may occur more commonly, such as swerving to avoid people or animals on the road at the risk of endangering the lives of passengers. People react differently in these situations, as there is no clear-cut formula that yields a straightforward answer. How would you choose what to do? Is it even feasible to create software designed to accommodate all ethical factors in every situation?
To address the ethics of driverless cars, governments around the world will soon be required to develop legal frameworks to regulate the ethical application of artificial intelligence, in cars or otherwise, in anticipation of further developments. In 2017, the Ethics Commission at the German Ministry of Transport and Digital Infrastructure created the world’s first set of ethical rules to guide driverless car software designers in their creation of fully automated cars. The rules are based upon the following principles:
- Human safety must come first
- All humans are considered equal
- The fewest people possible must be harmed
- The manufacturing companies are liable
- Companies must be transparent
In Australia, the National Transport Commission is currently developing Australia’s own set of rules in preparation for the arrival of driverless cars. Its most recent statement, released in May 2018, aims to ‘support the safe, commercial deployment and operation of automated vehicles at all levels of automation’ and ensures that there will always be a legal entity responsible for driving when an automated driving system is engaged. Although the government is still coming to terms with its role in assuring and enforcing the safety of driverless car technology, this statement is an important step towards advancing robotic assistance and heralds the advent of automated vehicles for the average Australian citizen.