I, Robot and Artificial Intelligence

This article first appeared in DNA India on 17th August, 2017.

Recently, two of the most famous technologists of this generation had a spat on social media about the impact of artificial intelligence (AI) on our lives. I am referring to the Twitter war of words that broke out between Tesla and SpaceX’s Elon Musk and Facebook’s Mark Zuckerberg. While Zuckerberg was sanguine about the impact of AI, Elon Musk sounded a sharply cautionary note, suggesting that unless we regulate the use of AI, it could come back to bite us in the not-so-distant future.

The episode took me back to the days of devouring Isaac Asimov’s novels, both the Foundation and the Robot series. The latter is brilliant not just because Asimov was able to anticipate the development of robotics in our lives but more because the books delved into the possible relationships between robots and their human masters. In doing so, he came up with the three laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by humans, except where they would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the first two laws.
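The three laws amount to a priority-ordered veto list: a lower-numbered law always overrides a higher-numbered one. A minimal sketch of that ordering, with every field name an illustrative assumption rather than anything from Asimov:

```python
def permitted(action):
    """Check a proposed action against the three laws, in priority order.

    `action` is a dict of boolean flags; all flag names here are
    illustrative assumptions, not a real robotics API.
    """
    # First Law: never injure a human, or allow one to come to harm
    # through inaction.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: obey human orders, unless obeying would violate the
    # First Law (in which case disobedience is allowed).
    if action["disobeys_order"] and not action["order_conflicts_with_first_law"]:
        return False
    # Third Law: by this point no higher law is at stake, so needless
    # self-destruction is ruled out.
    if action["destroys_self"]:
        return False
    return True
```

The ordering of the `if` checks is what encodes the hierarchy: an action is only judged by a later law once every earlier law is satisfied.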

Simple enough, one would think? But perhaps Asimov meant these laws more for humanoid robots, those almost human-like machines that we expect to start taking over a lot of menial jobs in the decades to come. However, robots now span the gamut from the microscopic to the larger machines. We are entering the era of nano-bots that can be injected into the body to engage with bacteria or viruses. In doing so, they might well have to destroy living, healthy cells as collateral damage while eliminating the harmful agents. How would such robots carry out their task while remaining true to the three laws?

We have already seen accidents caused by self-driving cars. How would these cars adhere to the three laws? What happens when the only choice the algorithms powering these cars have is to either save the person sitting in the car or the person on the road? Would the person sitting in the car be deemed more worthy of saving than the one on the road? Such questions have been used extensively in ethics courses around the world. How will they work for robots? Would algorithms evolve to learn their own set of ethics based on the repercussions their earlier decisions have had? If a driverless vehicle is scrapped for injuring its passenger but merely receives a software upgrade for causing a road fatality, would the algorithm assign differential weightages to these incidents, thereby arriving at a new set of ethics for these machines?
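The worry in the question above can be caricatured in a few lines: if the penalty a learning system attaches to each outcome mirrors the consequence to the machine (scrapping versus a software patch), minimizing expected penalty implicitly values one human life over another. A purely illustrative toy, where every weight, probability, and name is an assumption:

```python
# Assumed penalties, mirroring consequences to the machine rather than
# any moral principle: a scrapped vehicle "costs" far more than a patch.
PENALTY = {
    "passenger_injured": 100.0,   # vehicle scrapped
    "pedestrian_injured": 10.0,   # software upgrade issued
}

def expected_penalty(outcome_probs):
    """Expected penalty of one choice, given incident probabilities."""
    return sum(PENALTY[incident] * p for incident, p in outcome_probs.items())

def choose(options):
    """Pick the option with the lowest expected penalty."""
    return min(options, key=lambda name: expected_penalty(options[name]))
```

With these assumed weights, an option that risks the pedestrian scores lower than one that risks the passenger, so the system systematically protects its occupant: an "ethic" nobody explicitly programmed, exactly the emergent behaviour the paragraph above asks about.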

As the Chinese saying goes, “We live in interesting times!”
