Five philosophical perspectives presented by U of G students
This semester, students in Ethics (PHIL 2120) have examined ethical issues concerning the introduction of driverless cars to our roadways. Philosophical discussion of these issues is still nascent, with the earliest scholarly publications dating from around 2015. For this reason, we are in the exciting position of beginning to think with others about a current issue that hasn’t been thoroughly discussed.
In second-year ethics, students learn about some fundamental philosophical perspectives on what we ought to do. These perspectives shape ethical thinking both within and outside philosophy. Some people think that our actions should be evaluated based on their consequences: utilitarians, for instance, emphasize the pleasure and pain that result from our actions. They hold that we ought always to maximize pleasure and minimize pain for all sentient creatures. Others emphasize the intrinsic, non-fungible value of persons. Kantian ethics is predicated on the principle that we cannot treat persons as mere means to the maximization of overall welfare. According to such a perspective, every person has a right against being harmed, even if harming them would maximize overall welfare.
Students were invited to bring these clashing philosophical perspectives to questions concerning whether we should employ driverless cars and, if so, how they should be programmed to respond in emergency situations where someone will inevitably be harmed or killed.
The short essays found here encapsulate some of the insights yielded by our discussions.
—John Hacker-Wright, Ph.D.
Associate Professor of Philosophy
The ethics of accident algorithms for driverless cars
By Noah Solomon, Ryan Hunter, Madison Carey & Christopher Kim
In 2018, an Arizona woman was struck and killed by a vehicle whose safety driver had been on a cellphone at the time of the collision. While this may seem like a standard incident of vehicular manslaughter, one key detail raised substantial questions: the vehicle that struck the woman was a self-driving Uber.
Scientists and engineers are consistently creating innovative solutions to today’s biggest problems. With automation on everybody’s mind, it should come as no shock that automated cars are the way of the future. Discussions surrounding autonomous vehicles are becoming more common, and it is necessary that those discussions raise major concerns and ask difficult questions. One burning question is: How should driverless cars operate in emergency situations? Specifically, should automated cars be programmed to sacrifice the driver in order to minimize the number of deaths that would result from an unavoidable accident?
The answer to this question can be approached using the same reasoning employed in the classic trolley dilemma. The trolley dilemma is a scenario wherein a runaway trolley is moving on an unavoidable trajectory towards five people tied up on the tracks. There is, however, a separate connected track where only a single person is tied up. You have the opportunity to pull a lever and divert the trolley onto the track with the single person; this opportunity represents the dilemma. A driverless car faces an analogous choice: it can swerve off the road, risking only the driver, or it can stay on course, protecting the driver but risking the lives of pedestrians in the process.
Utilitarianism dictates that the ethical decision is the one that generates the most happiness and the least suffering. From the utilitarian perspective, one might argue that the driver should die to save the lives of the many. By sacrificing one person to save several, you minimize suffering and maximize happiness and wellbeing overall.
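For readers who want to see the logic spelled out, here is a minimal sketch of what a purely utilitarian accident algorithm might look like. The names, candidate paths, and casualty estimates are hypothetical illustrations invented for this article, not anything a manufacturer has published:

```python
# A minimal sketch of a utilitarian accident algorithm: among the
# manoeuvres still physically available, choose the one expected to
# cause the fewest deaths. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_deaths: float  # estimated casualties if this path is taken

def utilitarian_choice(options: list[Manoeuvre]) -> Manoeuvre:
    """Return the manoeuvre that minimizes expected loss of life."""
    return min(options, key=lambda m: m.expected_deaths)

options = [
    Manoeuvre("stay on course, striking two pedestrians", expected_deaths=2.0),
    Manoeuvre("swerve off the road, sacrificing the driver", expected_deaths=1.0),
]
print(utilitarian_choice(options).name)  # -> swerve off the road, ...
```

Notice that this rule counts all lives symmetrically in a single number; the objections raised below, about whose life is worth more and how many passengers are aboard, are precisely about what that single number leaves out.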
Should we opt for this utilitarian position? By the same logic, one could justify murder or theft or lying to save large groups. Furthermore, what if the vehicle will either strike a single pedestrian or harm the driver? Whose life is more valuable in that scenario? What if the vehicle contains more passengers than there are pedestrians? These questions are beyond the grasp of utilitarianism — we must look elsewhere for answers.
Immanuel Kant, an 18th-century philosopher, argued that we should never use people as “a mere means,” which is to say, we should not use others to advance our own interests without the consent of the other party. In the hypothetical scenario above, the pedestrian has not consented to be used as a means to save the driver’s life. In contrast, the driver has consented to drive despite the inherent danger and lack of control in an autonomous vehicle. Considering this, hitting a pedestrian to save a driver is unethical.
Recent progress in the development of driverless cars has brought forth several ethical questions, particularly life-and-death decisions regarding the behaviour of these autonomous vehicles. Regardless of the method of operation, motor vehicles can be extremely dangerous. However, unlike with human-operated vehicles, the outcomes of collisions are determined not by human error but by pre-programmed settings. The fact is, driverless cars are the way of the future; there is no denying that. While these questions of mortality may be frightening, they are necessary as we take the next step in the evolution of transportation.
Will machines be the moral thinkers of tomorrow?
By Ann-Kelly Appadoo, Madeleine Arndt, Hannah Brock, Spencer Harris, & Julianne Kalocsai
The widespread adoption of self-driving cars seems imminent, but it remains unclear whether they will be an improvement on human cognition or a mere reflection of our biases.
A recent study by the Georgia Institute of Technology suggests the latter, finding that self-driving cars display clear racial bias: automated vehicles were five per cent less accurate in recognizing individuals with darker skin tones.
This finding, consistent across eight different object-detection systems, indicates that people of colour are currently more likely to be struck by self-driving vehicles than lighter-skinned people. It is a stark reminder that even the most sophisticated algorithms, when not supplied with proper forethought by their programmers, are liable to be grossly unjust.
Consider the following example: a child darts into the path of a self-driving car. The car has no time to brake. The only options are to continue driving and strike the child or to swerve and strike an elderly person crossing further down the road.
The decision to save the child’s life by swerving into the elderly person may seem intuitive. However, self-driving vehicles present a novel problem, because they provide us with the luxury of forethought. Given this time to rationally weigh the options, it becomes decidedly unacceptable to save one human life over another based on physical characteristics. To program a machine to prioritize a child’s safety over that of an elderly person is a form of prejudicial discrimination. It implies that a difference in age, a mere physical characteristic, is a valid determinant of the worth of a human life.
For future policymakers to sanction the incorporation of this discriminatory logic into algorithms is fundamentally immoral. It must be an absolute priority for any form of artificial intelligence to view all human beings equally. Certain characteristics should not mean the difference between life and death.
When considering how to create an algorithm that is morally just, it is useful to apply the moral reasoning of philosopher John Rawls. He proposed that ethical standards of behaviour are best derived through democratic deliberation, where all those involved adopt a ‘veil of ignorance.’
The veil of ignorance is a mental construct whereby rules are made for society in complete ignorance of our own statuses. As we gaze through the veil, we evaluate our proposed actions with no knowledge of our individual characteristics. This philosophy is intended to encourage policy that treats everyone as equals. It appeals to the human desire for self-preservation by compelling us to consider how any given ethical choice would affect us were we among those most harshly impacted. By striving to act in a way that produces the best outcome for every person, we ensure policy that is just and fair to all.
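One way to make this concrete is to read the veil of ignorance as a decision rule: judge each candidate policy by how it treats its worst-off group, since behind the veil we could turn out to be anyone. Here is a minimal sketch, with hypothetical policies and outcome scores chosen purely for illustration:

```python
# A minimal sketch of a Rawlsian "maximin" rule: prefer the policy
# whose worst-off group fares best, because behind the veil of
# ignorance we might turn out to belong to that group. The policies
# and outcome scores below are hypothetical illustrations.

# Each policy maps affected groups to an outcome score
# (higher = better for that group).
policies = {
    "prioritize the child": {"children": 9, "elderly": 1},
    "treat all ages equally": {"children": 5, "elderly": 5},
}

def maximin_choice(policies: dict[str, dict[str, int]]) -> str:
    """Pick the policy under which the worst-off group scores highest."""
    return max(policies, key=lambda name: min(policies[name].values()))

print(maximin_choice(policies))  # -> treat all ages equally
```

A straightforward utilitarian sum would score these two policies identically (10 in each case); it is the maximin rule, reasoning from behind the veil, that distinguishes them.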
Should driverless cars deliver on all the safety and convenience benefits they promise, they may very well become a primary form of transportation. If this occurs, the ethical programming on which they operate will affect millions.
Today, debating the morality of avoiding a young child at the expense of an elderly person seems like nothing more than a thought experiment. The increasing popularity of algorithms that profile and discriminate between individuals in this way will very quickly take the question out of the abstract and into the real world.
Ethical implications of autonomous cars
By Kelsey Coome, Amelia Edmonds, Jacob Kelly, Drew C. McKinnon & Ramez Wadie
The decision of whose life will be prioritized by driverless cars in accidents will not land on the shoulders of ethicists, but on those of programmers paid by corporations interested in making a profit. Most ethicists would not reject driverless cars, because these vehicles have the potential to save lives by reducing human error. However, ethicists still struggle to address questions about how these cars should be programmed to respond in emergency scenarios.
It is our position that driverless cars would be beneficial to the public, as long as two conditions are met. The first is that pedestrians ought to be prioritized over passengers; the second is that the corporations designing these cars do not prioritize the interests of their customers over public opinion and safety.
If driverless cars are safer than human-operated cars, then everyone would benefit from phasing out human-operated vehicles. At the same time, discerning consumers would only buy a product programmed to prioritize their lives over the lives of others. If the end goal of corporations is to replace human drivers with artificial intelligence (AI), they must create a product that prioritizes their customers’ interests. This would result in a mutual contract between the passengers of driverless cars: my car will protect me and your car will protect you.
The issue becomes more complex when pedestrians are involved, as this brings up the issue of consent. Non-drivers have not agreed to the use of driverless cars and have no control over the choices of others. Pedestrians are not entering this situation from a position of equality, making the killing of a pedestrian to protect the safety of a passenger unjust.
In A Theory of Justice, John Rawls’s contractualism uses social contracts made between equals as a basis for morality. Rawls suggests that decision-making should occur behind a hypothetical “veil of ignorance,” where one does not know what one’s status or situation will be during negotiations.
Prioritizing the safety of those who can afford AI-powered cars over those who cannot carries classist implications that would not be agreed upon from this original position. Therefore, contractualism does not allow the safety of pedestrians to be placed below that of passengers. Because AI is imperfect and programmers cannot account for freak accidents, collisions will still occur with automated cars.
There will be intense scrutiny from the public regarding accidents involving driverless cars, so companies must ensure their programming is adequate. Development ought to include reviews by ethics boards, and enough transparency to make public discussion and debate possible. Companies will design these cars with the driver’s personal safety in mind, but consumerism cannot replace an acceptable ethical framework.
A conflict of interest is likely to arise if corporations are left to their own devices. Corporations focused solely on customer interests and profit will put non-consenting pedestrians at risk. Accountability for malfunctions and errors must rest with the manufacturers, as passengers will have no control over their vehicle. The onus must be on those supervising the system’s development. Human error and extraneous circumstances cause many lethal accidents, and it seems logical that we should strive to reduce the probability of human error now that we have the means to do so.
If the solution to human error is to remove the driver from the driving process, the time and place of critical judgements shift from an instant on the road to months or even years earlier, in a lab. Regardless, these are still human judgements. As a result, we ought to hold them to the same moral standards as we do with human-operated cars. Responsibility lies with the software designers, who make the critical decisions about what will be done in the event of an inevitable collision.
Will the future of autonomous vehicles lead to robot cars making life or death decisions?
By Grayson DesRoches, Logan Garrick, Christopher Giampaolo, Avery Martin & Aden Wegelin
A concept that once existed only in our imaginations is now becoming a reality. In the near future, driverless cars are expected to dominate our roadways, vastly altering our idea of modern transportation. Tesla is already rolling out self-driving features in its Model S and Model X cars and plans to sell the new Model Y with fully autonomous capabilities, though critics have questioned how autonomous these features truly are.
At first glance, the greatest challenge appears to be the technological innovation required to program and engineer these cars, but upon closer inspection the true challenge may be answering important ethical questions surrounding autonomous vehicles. In particular, what ethical principles should these vehicles operate by? One of the key issues here is how to program a car’s priorities in the event of an emergency: that is, whether the vehicle should choose a path that may lead to the driver’s death if doing so will minimize harm to others.
As outrageous as that may sound, it is helpful to understand this argument by grounding it in a possible real-world scenario. Imagine there are two pedestrians crossing the street when a tree falls in the road, forcing the car to make a decision. An accident is unavoidable, and there will be a fatality. Does the car choose to save the driver at the cost of the two pedestrians or does it sacrifice the driver and minimize the harm that could be caused by the accident?
According to utilitarian and virtue-ethics perspectives, the driver should likely be sacrificed so that the pedestrians may be safe. These philosophical theories are concerned mainly with the maximization of happiness and the cultivation of personal virtues, such as benevolence and generosity.
Both perspectives would suggest that the most ethical course of action is to let the driver die, and the reasoning turns on consent. By deciding to enter this car, the driver is informed enough to have consented to the risk of danger that comes with a possible collision, much as we consent to risk when driving ourselves. The pedestrians, however, have made no such decision. By choosing to walk instead of drive, they have not consented to the possibility of dying in a car accident. This consent-based argument is a contractualist point of view.
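Framed as a decision rule, this contractualist argument adds a constraint on top of simple harm-minimization: prefer outcomes in which only parties who consented to the risk are harmed. The sketch below is a hypothetical illustration of that logic, not a description of any real system:

```python
# A minimal sketch of a consent-constrained rule: if any manoeuvre
# harms only parties who consented to the risk (e.g. the driver),
# prefer it over manoeuvres that endanger non-consenting pedestrians,
# regardless of raw casualty counts. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    harms_nonconsenting: bool  # does this path endanger pedestrians?
    expected_deaths: float

def contractualist_choice(options: list[Manoeuvre]) -> Manoeuvre:
    """Prefer consent-safe paths; within the preferred pool, minimize deaths."""
    consent_safe = [m for m in options if not m.harms_nonconsenting]
    pool = consent_safe if consent_safe else options
    return min(pool, key=lambda m: m.expected_deaths)

options = [
    Manoeuvre("stay on course, striking two pedestrians", True, 2.0),
    Manoeuvre("swerve, sacrificing the driver", False, 1.0),
]
print(contractualist_choice(options).name)  # -> swerve, sacrificing the driver
```

Unlike the utilitarian rule, this one would still sacrifice the driver even if the casualty counts were reversed, because consent acts as a strict constraint rather than one value to be weighed.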
To further illustrate this concept, consider an everyday cigarette smoker. Is it fair that those exposed to second-hand smoke should bear the risk of lung cancer rather than the person smoking the cigarettes? Many would argue that since bystanders have not consented to put themselves in harm’s way in the same manner as the driver, they should not be forced to bear the consequences of a resulting accident.
It may seem as though the failure of driverless cars is inevitable: why would anyone choose to get into a car that they know will sacrifice them in an emergency? We must keep in mind, however, that these vehicles will overall be much safer than the vehicles on our roads today. Perhaps this simple fact will be enough to encourage consumers to accept the chance of harm, in the same way that we accept it today when we get into our cars.
Why driverless cars are not an ethical choice for the future of transportation
By Alexander Ng, Jessica Badler, Remo Boscarino-Gaetano, Kiri Goodall & Kurdell Reason
What if you were faced with a life or death decision, and death was a certain outcome either for you or for another person? This could be a reality in the near future if driverless cars begin to rule the streets. One of the main concerns regarding the ethics of driverless cars is whether the car should kill the driver in situations where a death is unavoidable.
On what moral grounds would we decide how to program driverless cars? In any scenario where the car must choose one life over another, it would be targeting certain people, whether based on age, race, size, or some other characteristic. The choice could rest on moral judgments about whose life is worth more, or it could be random. If the choice is random, it seems unfair that innocent civilians should be killed by chance; they did not necessarily give consent for the driverless car to be on the road. One of philosopher Immanuel Kant’s moral principles states that an act is unjust if it uses others merely as a means and not also as an end. If the other drivers or pedestrians did not give consent, then they are being used as just a means to avoid harming the driver.
In Turning the Trolley (The Trolley Problem), Judith Jarvis Thomson suggested a third option in the classic trolley problem: instead of choosing which person or people to kill, the person pulling the lever should opt to kill themselves. If the person is unwilling to kill themselves to save the others, it is clearly unacceptable to condemn an innocent bystander to the same fate that the lever-puller refused for themselves.

This ties back into Kant’s idea of treating people as both a means and an end. Stealing from someone, for instance, treats them merely as a means, because they did not give consent; asking to borrow a possession of theirs does not. By asking to borrow something, you are still using the person as a means to get what you need, but they have given you permission to do so. The main factor in treating people as an end, and not just as a means, is ensuring that they consent to the situation at hand. Killing the non-driver would treat them as a means to save the driver. They would not also be treated as an end, since the act is non-consensual and benefits only the driver. If one is unwilling to sacrifice oneself rather than others, one is valuing one’s own life above theirs, which is morally impermissible. Choosing to kill the driver would cause the least harm, as the driver consented to the driverless car whereas the other people on the road did not, making this option the most moral approach.
Similarly, since it is ultimately the driver’s decision to purchase and operate the driverless car, the driver is responsible for the safety of themselves and others. Given this choice and responsibility, it is immoral for the driver to kill another person who had no say in the matter when the driver can opt to be killed instead. If there is an option to kill the driver, that option should be taken, as the driver put themselves and everyone else on the road in that situation by choosing a driverless car.
