New ethical and legal questions when it comes to self-driving cars
Hui Ding, Guanwen Li, Ana Pop Stefanija, Mattia Trino, Natalie Walow, Manlin Zhu
Does AI need an introduction?
The year is 2018. With the latest advances in artificial intelligence and its growing use across many areas of societal life, chances are high that you have asked yourself at least once: “Will robots take over our jobs?” or “Who will be responsible if a self-driving car causes an accident?”
Today, one of the most frequently cited examples is the development of autonomous vehicles, which could bring positive changes to traffic management, security and urban development, but also raises complex legal issues and ethical dilemmas. The first question we should keep in mind is: who is accountable when a self-driving car causes an accident? Meanwhile, consider Asimov’s first law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”. How, then, could a robot built according to Asimov’s laws cause an accident? In practice, unfortunately, it already has. Furthermore, in a situation its algorithms did not foresee, whom would the car hit: the child crossing the street, the elderly person on the sidewalk, or would it crash into a wall, killing its own passenger? It becomes clear, at this point, that we face many AI-related legal and ethical issues, and there is no easy solution.
What self-driving cars at an intersection would look like