Machine Morality: Where Robotics Meets Ethical Behavior Standards

Deborah Kakolobango, Viktorija Ulickaite, Isabell Schwanke, Sergejs Mikaeljans, Itsaso Goikoetxea Mallea, Xu Huiqin

The breakthrough of digital technology and the Fourth Industrial Revolution have already affected almost every industry and economy in the world. Global interest in artificial intelligence (AI) technologies shows its potential to transform the way people live. Self-driving cars, intelligent home assistants, smartphones and precise medical predictions are only the first steps of AI use in industry, and its potential in many more industries is growing rapidly. In the past decade, tech companies including Facebook, Google and Amazon not only invested large amounts of money in artificial intelligence, but also opened research labs to deal with its future development and growing threats.


The omnipresence of AI technologies is already being felt in almost every economic sector. According to Gartner, 85% of digital customer service interactions will no longer require human intervention by 2020. AI's capability to compete with human intelligence spurs differing attitudes towards the latest technological advances. The late physicist Stephen Hawking warned that AI could replace humans in the future, and Tesla's CEO Elon Musk warned that AI might potentially turn out far more dangerous than nukes, whereas futurist Ray Kurzweil emphasized that new technologies will enhance humans far beyond our imagination.

Ethics in the age of Artificial Intelligence-Driven Society

It is now clear that the robotics revolution raises a great deal of ethical questions. Many researchers agree that robots must be designed in a way that complies with ethical values such as dignity, autonomy, freedom, individual empowerment and non-discrimination. But is it possible to transpose humanity to a machine?


Modern philosophers consider the questions of AI and robots' social responsibility extremely challenging. Who is to blame when the famous humanoid robot Sophia, which holds citizenship of Saudi Arabia, promises to destroy all humans? What if an AI-controlled missile malfunctions and hits a big city in some country? Recent cases show that autonomous machines' capability to make decisions without human control sometimes results in errors that put ethical values in danger. Since the introduction of self-driving cars, we have continuously heard about accidents involving AI-driven vehicles. The question is: who is the responsible party? The driver, or the companies that designed the autonomous driving technology? There are no well-defined answers to these questions so far.

Thus, modern ethics should tackle artificial intelligence development and robotics as priority issues. Morality ought to be a crucial part of AI legislation and a major concern for policy-makers. Technology has always been a major driver of positive societal change, but nowadays the pace of tech development seems too fast for society to handle. Should more regulations be imposed on AI technology? Or will it have more chances to flourish unregulated? How can we protect privacy from the abuse of AI systems? The complexity of AI issues leaves many unanswered questions for governments, researchers, tech experts and other involved parties. However, one thing is clear: the time to find the best possible solution is almost up.
