Monday, March 27, 2017

Will Humans Regulate Artificial Intelligence?

While the artificial intelligence revolution is coming quickly, economic adjustments take time, so dislocation, disruption, and suffering are inevitable. How can we ensure that the revolution empowers people—or “informates” them, to use the term coined by Shoshana Zuboff—rather than degrades us by leaving us jobless and stripped of control over our lives? Renowned physicist Stephen Hawking has warned that AI may become an existential threat to our species. “The development of full artificial intelligence could spell the end of the human race... It would take off on its own, and re-design itself at an ever increasing rate,” he told the BBC. “Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”

Other experts assure us that people can always gain control, at least in the foreseeable future. They harken back to science fiction writer Isaac Asimov’s three laws of robotics:
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law; and
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
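Read as code, the three laws amount to a strict priority ordering, where a higher law vetoes everything below it. Here is a minimal Python sketch of that ordering; the `Action` fields and the `permitted` function are invented purely for illustration, not drawn from any real robotics system:

```python
# A purely illustrative sketch: Asimov's three laws as a strict
# priority ordering over candidate actions. Every field name here
# is hypothetical; no real robot reduces ethics to three booleans.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool                 # First Law: harm by action or inaction
    disobeys_order: bool              # Second Law: defiance of a human order
    endangers_self: bool              # Third Law: risk to the robot itself
    required_by_higher_law: bool = False

def permitted(a: Action) -> bool:
    """Check the three laws in priority order; any violation vetoes."""
    if a.harms_human:
        return False                  # First Law outranks everything
    if a.disobeys_order and not a.required_by_higher_law:
        return False                  # Second Law yields only to the First
    if a.endangers_self and not a.required_by_higher_law:
        return False                  # Third Law yields to both above
    return True

# Example: self-sacrifice to save a human is permitted.
print(permitted(Action(False, False, True, required_by_higher_law=True)))  # True
```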
Of course, while we are in the domain of science fiction, we remember the computer HAL in Stanley Kubrick’s classic 1968 film 2001: A Space Odyssey. HAL came to believe that the humans were a threat to the mission, and in the man-versus-machine battle that followed, HAL was eventually deactivated. (Only for a while, as those who watched the sequels know!)

Without even getting into the existential threats, the AI revolution raises fundamental questions about who will win and who will lose. How will robotics impact the distribution of political, economic and social power across the globe? Will the current organization of nation states still make sense? Will transnational corporations control the means and effects of production and employment, with their systems beyond government access and understanding?

What about those whose work is no longer needed, whether executives, managers, production workers, service workers, agricultural workers, or anybody else? And in particular, what will happen to those who are not suited to the new jobs that may emerge? Do we need to fundamentally restructure access to income by adopting, for example, guaranteed minimum income or negative income tax schemes? Or will we be content to let the unemployed fall through overwhelmed safety nets?

Next, we can look at the ethical decision-making built into AI systems themselves. We are all familiar with the self-driving car dilemma: the car must choose between slamming into a wall, killing its occupants, and running over a group of pedestrians. Networking among vehicles should make such events rare, but they will come up. What should the car be programmed to do? Should the car owner have a say?
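Whatever we decide, the answer ultimately has to be expressed as code. The sketch below is entirely hypothetical, intended only to show where such a policy would live: a cost-minimizing choice among maneuvers, with an invented `owner_weight` parameter standing in for the question of whether owners should get a say:

```python
# A hedged sketch of where the contested choice would live in code.
# The scenario, risk numbers, and owner_weight are all hypothetical;
# no real vehicle exposes ethics as a single tunable number.

def choose_maneuver(maneuvers, owner_weight=1.0):
    """Pick the maneuver with the lowest expected harm.

    owner_weight > 1.0 biases the choice toward protecting occupants;
    this is exactly the knob the question above asks whether owners
    should be allowed to turn.
    """
    def expected_harm(m):
        return owner_weight * m["occupant_risk"] + m["pedestrian_risk"]

    return min(maneuvers, key=expected_harm)

# Example: swerving into the wall risks the occupants; braking
# straight ahead risks the pedestrians.
options = [
    {"name": "swerve_into_wall", "occupant_risk": 0.9, "pedestrian_risk": 0.0},
    {"name": "brake_straight",   "occupant_risk": 0.1, "pedestrian_risk": 0.7},
]
print(choose_maneuver(options)["name"])  # "brake_straight" at equal weighting
```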

The U.S. military already faces a parallel issue in drone attacks on targets: it requires a human action to order a kill. On the future battlefield, if the AI system determines that waiting for human action will result in significant loss of friendly life, what should it do? Our forces are now testing swarms of smaller drones that use “colony” behavior modeled on ants and bees to identify and eliminate risks. As one officer explained, when you eliminate the human pilot, you can buy a lot more of them. So how much autonomy should we give the swarms?
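Today’s rule can be pictured as an authorization gate: nothing fires without a human decision, and the hard question is what the code should do when that decision does not arrive in time. The following sketch is hypothetical; the function, queue, and timeout are invented simply to make the trade-off concrete:

```python
# A hypothetical sketch of the "human in the loop" gate described
# above. The conservative policy here is to abort on timeout -- the
# alternative (acting autonomously to avoid friendly losses) is
# precisely the choice the question leaves open.
import queue

def engage_target(target, approvals: queue.Queue, timeout_s: float = 30.0):
    """Fire only on explicit human authorization within timeout_s."""
    try:
        decision = approvals.get(timeout=timeout_s)  # block for a human
    except queue.Empty:
        return "abort"   # no human decision in time: hold fire
    return "engage" if decision == "authorize" else "abort"

# Example: a human operator signs off before the timeout expires.
approvals = queue.Queue()
approvals.put("authorize")
print(engage_target("t-1", approvals))  # "engage"
```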

Of course, AI also gives us the capacity to understand more than we can imagine on our own. Companies are already using AI to analyze supply chains to eliminate human rights abuses, minimize environmental risks, and reduce carbon footprints. The very modeling of our planet’s atmosphere, oceans, and surfaces gives us knowledge that we may use to address the existential risks of climate change. In the end, AI will do what we tell it to—unless Stephen Hawking is right.

So, what are the rules going forward? How will multinational organizations, governments, companies, and citizens have a say?

Last September, the New York Times reported that tech companies’ main concern is having regulators jump in and create unworkable rules around their AI work. Peter Stone, one of the authors of a Stanford University report titled Artificial Intelligence and Life in 2030, remarked, “We’re not saying that there should be no regulation. We’re saying that there is a right way and a wrong way.”

The Stanford report itself states that “attempts to regulate AI in general would be misguided, since there is no clear definition of AI, it is not any one thing, and the risks and considerations are very different at all levels of government.” David Kenny, general manager for IBM’s Watson AI division, is quoted as saying, “There is a role for government, and we respect that.” The challenge, he said, is that “a lot of times policies lag the technologies.”

Five tech giants—Alphabet, Amazon, Facebook, IBM, and Microsoft—recently agreed that industry self-regulation, in the context of appropriate government regulation, is the way forward. The Times reported that the new tech group is modeled on a similar human rights effort, the Global Network Initiative, in which corporations and nongovernmental organizations focus on freedom of expression and privacy rights. Specifics of the effort, including its name, are still being hashed out.

I am encouraged by self-regulation of AI, particularly if the self-regulation process, standards, and underlying values are fully transparent and open to broad input, debate, review, and modification. Computer scientists will need to interact with social scientists and philosophers, as proposed by Joi Ito, director of the MIT Media Lab and a member of the New York Times board. In this schema, AI and robotic systems have what he terms “society in the loop.” This means that we humans still need to be an integral part of any system.

While the workings of AI systems will be well beyond our common understanding, their impact on our lives will be quite obvious. Like generations of our ancestors, we will be on a transformational journey with winners and losers, this time at blinding speed. It’s going to be quite a ride.

In these situations, we used to say, “Fasten your seat belts.” But soon, the robots will do that for us. Should we trust them?

—Barton Alexander, Principal, Alexander & Associates LLC
