
Under construction

What is the Trolley Mod for?

In a world where artificial intelligence (AI) is rapidly becoming more prevalent in our lives, it is imperative that AI implement some form of morality in order to be successfully integrated into society. Otherwise, it may prove extremely difficult for humans to trust an AI to independently carry out activities that directly affect human health and livelihood.

Take autonomous vehicles, for example. Given enough time, it is inevitable that a vehicle controlled by an AI will face a situation where a crash is unavoidable, whether through catastrophic brake failure or some other means. If the AI cannot mechanically stop the car, it must make a terrible decision: what does it force the car to hit? This question implies a variety of outcomes potentially affecting the driver and passengers of the car, bystanders, other motorists, wildlife, and physical property. Every decision carries its own costs, both material and moral. Every person has their own opinion about what is "right" in a given situation, so what is the "right" thing for a machine to do?

The Trolley Mod creates an avenue for addressing this problem by providing a platform for collecting human sociological data. It is a tool for researchers (or curious individuals) to gain insight into society's aggregate moral framework, and it accomplishes this by recording human reactions to crash scenarios. By using the mod to measure a group's decision-making during these crashes, a researcher can determine which outcomes the majority finds "acceptable" and "unacceptable", which yields a moral structure for the AI in an autonomous vehicle to follow. From the perspective of that group, so long as the AI follows this structure, it is acting the "right" way and making the best decisions it possibly can.
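
As a rough illustration of what aggregating these recorded reactions might look like, the sketch below tallies per-scenario decisions and reports the majority choice in each scenario. The CSV layout, the column names, and the simple majority rule are all assumptions made for the example; they are not the mod's actual output format or analysis pipeline.

```python
# Hypothetical sketch: aggregating recorded trolley-scenario decisions into
# per-scenario majority preferences. The file "decisions.csv" and its columns
# ("scenario", "choice") are illustrative assumptions, not the mod's real format.
import csv
from collections import Counter, defaultdict

def load_decisions(path):
    """Read one (scenario, choice) pair per recorded crash decision."""
    tallies = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tallies[row["scenario"]][row["choice"]] += 1
    return tallies

def majority_preferences(tallies):
    """For each scenario, report the most-chosen outcome and its vote share."""
    prefs = {}
    for scenario, counts in tallies.items():
        choice, votes = counts.most_common(1)[0]
        prefs[scenario] = (choice, votes / sum(counts.values()))
    return prefs

if __name__ == "__main__":
    tallies = load_decisions("decisions.csv")
    for scenario, (choice, share) in majority_preferences(tallies).items():
        print(f"{scenario}: {choice} ({share:.0%} of participants)")
```

A real analysis would likely go beyond a simple majority vote (for instance, weighting by demographic factors or modeling uncertainty), but the core idea is the same: many individual reactions are reduced to a structure an AI could consult.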