Friday, September 30, 2016

Tech Must Look To Past To Protect The Future From An Artificial Intelligence Apocalypse

Industry needs to devise simple rules like Isaac Asimov's.

By Therese Poletti
September 30, 2016

Through a joint alliance on artificial intelligence, five tech giants need to take a better look at the guidelines established by one of the most influential writers of modern science fiction to protect us from an apocalyptic future.

Fears of what autonomous technologies are capable of are entirely understandable, even though artificial intelligence is still in its early stages. So Amazon.com Inc. AMZN, Alphabet Inc.'s DeepMind/Google GOOG, GOOGL, Facebook Inc. FB, IBM Corp. IBM and Microsoft Corp. MSFT have created an ethics-focused nonprofit with goals such as educating the public, protecting the privacy and security of individuals, and creating a forum for discussion of the complex issues of a future in which machines may be in decision-making roles.

The Partnership on AI proposed a set of eight tenets to ensure that the use of AI is “beneficial to people and society.” But the corporate-sounding tenets seem to be a trumped-up, yet still watered-down, version of the three fundamental “Rules of Robotics” written by legendary science fiction author Isaac Asimov. Asimov laid out his rules in a 1942 short story called “Runaround,” which takes place in approximately 2015 and involves two engineers working with a robot to get a mining station working again on the planet Mercury.

The three rules state:

1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The last two rules come into conflict in the story, when the robot gets stuck in an infinite loop: it obeys the command to gather some much-needed selenium from a pool 17 miles from the mining station, but hesitates to approach the pool because it may present a danger. One of the engineers then decides to venture out into Mercury’s life-threatening heat, and the robot breaks out of its loop to try to rescue him, illustrating that the first rule, protecting human beings, is the most important of all.
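One way to see why the first rule dominates is to encode the three rules as a strict priority ordering, where a higher-priority rule always outranks the ones below it. The sketch below is purely illustrative and not from the article or Asimov's story: the scenario labels, scoring values, and function names are assumptions loosely modeled on the "Runaround" plot.

```python
# Toy sketch (illustrative assumptions, not Asimov's actual mechanism):
# encode the Three Rules as lexicographically ordered scores, so an
# earlier rule always dominates the later ones.

def choose_action(candidates, rules):
    """Pick the action whose tuple of rule scores is lexicographically best.

    `rules` is ordered by priority: the First Rule's score is compared
    first, and only ties fall through to the Second and Third Rules.
    """
    return max(candidates, key=lambda a: tuple(rule(a) for rule in rules))

# Hypothetical scores for the mining-station scenario; higher is better.
def first_rule(action):   # protect human beings above all else
    return 1 if action == "rescue_engineer" else 0

def second_rule(action):  # obey the order to fetch selenium
    return 1 if action == "approach_pool" else 0

def third_rule(action):   # self-preservation: the pool is dangerous
    return 1 if action == "retreat" else 0

rules = [first_rule, second_rule, third_rule]

# With no human in danger, obedience (Second Rule) outranks retreat.
print(choose_action(["approach_pool", "retreat"], rules))
# approach_pool

# Once an engineer is in danger, the First Rule overrides everything.
print(choose_action(["approach_pool", "retreat", "rescue_engineer"], rules))
# rescue_engineer
```

In the story, the robot loops because the obedience and self-preservation drives are weighted and happen to balance; a strict lexicographic ordering like this one cannot balance that way, but it still shows the key property the column praises: the First Rule wins every conflict.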

Partnership on AI has a seemingly corporate take on Asimov’s most important rule. Rule No. 6 of the group’s tenets has several parts, including that it will “work to maximize the benefits and address the potential challenges of AI technologies,” by “ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints;” and “opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.”

The group’s statement of “do no harm,” though, is more nebulous than Asimov’s rules, which are clear, plain and simple. Attempting to understand those convoluted tenets could produce an infinite loop in a human being, much less an artificially intelligent machine.

It is laudable that a group of tech giants is starting to think about ethics, responsibility and public education, but the Partnership on AI lacks enforcement capabilities and could be more about public relations than real action. If the industry is going to continue to pursue AI development as futurists have predicted, it also needs to come up with a set of tenets that a machine can understand, and to set a precedent for protecting human life, just as Asimov suggested.

Article Link To MarketWatch: