Wednesday, August 10, 2016

Why Fear AI?

At Gigaom Change Leader’s Summit 2016 in September, we will be exploring seven key business technologies that are real today. One of these is AI. In anticipation of that event, we are doing a series of short pieces on the featured technologies.

In 1950, Alan Turing asked whether a machine can think. In 1955, John McCarthy coined the term “Artificial Intelligence” in a proposal he wrote with three other scientists for a summer workshop, held at Dartmouth the following year, to work out the essentials of the new field.

Optimism ran high that a thinking machine could be built relatively quickly, even with the technology of the day. That optimism proved unfounded.

AI has endured a number of so-called “winters,” when funding dried up because of a disconnect between funders’ expectations and the realities of the science. But all the while, computing power and programming languages advanced, and now, for the first time, expectations aren’t just being met but exceeded.

Now, some wonder if we have inadvertently begun to act out the drama in Mary Shelley’s Frankenstein, creating something that either through intention or accident wreaks havoc on our world. Some call it an existential threat.

But should we worry? Let’s dive in.

First, those suggesting caution:

Physicist Stephen Hawking believes development of a true AI would be no less than “the end of the human race.” According to Hawking, the fundamental problem is that AI would iterate and advance rapidly, whereas we “[h]umans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Tesla CEO Elon Musk is equally bearish on AI, calling it the “biggest existential threat” facing mankind, “potentially more dangerous than nukes.”

Microsoft co-founder Bill Gates is more restrained, saying he’s “in the camp that is concerned about super intelligence,” because decades of accelerating progress may result in an intelligence so advanced that it’s hard to control.

Now, let’s hear from the other side:

Linguist and cognitive scientist Noam Chomsky, far from panicking about malevolent AI, thinks the entire pursuit of statistically driven AI is barking up the wrong tree. He calls it a shallow approach that may have practical applications but is unlikely to yield real insight into the nature of intelligence or cognition.

AI author Ray Kurzweil isn’t afraid of artificial intelligence, either. He analogizes to other scientific endeavors with doomsday potential, like biotechnology, observing that safety rules and ethical guidelines have kept those technologies relatively safe.

And finally, deep learning pioneer Andrew Ng actually mocks the idea of an ‘evil killer AI,’ downplaying concerns as a “phantom” and an unnecessary policy distraction. To Ng, “[w]orrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!”

So what is the net of all of this? Well, when smart people’s estimates of when we will achieve artificial general intelligence (AGI) vary from five to 500 years, there is obviously neither clarity nor consensus about the future. One thing seems certain: There is no turning back. Whatever is possible will be done.

Join us at Gigaom Change Leader’s Summit 2016 for a more in-depth look into Artificial Intelligence.
