Monday, July 27, 2015

Interview with Stephen Wolfram on AI and the future

Few people in the tech world can truly be said to “need no introduction.” Stephen Wolfram is certainly one of them. But while he may not need one, the breadth and magnitude of his accomplishments over the past four decades invite a brief review:

Stephen Wolfram is a distinguished scientist, technologist and entrepreneur. He has devoted his career to the development and application of computational thinking.

His Mathematica software system, launched in 1988, has been central to technical research and education for more than a generation. His work on basic science—summarized in his bestselling book A New Kind of Science—has defined a major new intellectual direction, with applications across the sciences, technology, and the arts. In 2009 Wolfram built on his earlier work to launch Wolfram|Alpha to make as much of the world’s knowledge as possible computable—and accessible on the web and in intelligent assistants like Apple’s Siri.

In 2014, as a culmination of more than 30 years of work, Wolfram began to roll out the Wolfram Language, which dramatically raises the level of automation and built-in knowledge available in a programming language, and makes possible a new generation of readily deployed computational applications.

Stephen Wolfram has been the CEO of Wolfram Research since its founding in 1987. He was educated at Eton, Oxford, and Caltech, receiving his PhD in theoretical physics at the age of 20.

 

Publisher’s Note: The following interview was conducted on June 27, 2015.  Although it is lengthy, weighing in at over 10,000 words, it is published here in its entirety with only very minor edits for clarity.

Byron Reese: So when do you first remember hearing the term “artificial intelligence”?

Stephen Wolfram: That is a good question. I don’t have any idea. When I was a kid, in the 1960s in England, I think there was a prevailing assumption that it wouldn’t be long before there were automatic brains of some kind, and I certainly had books about the future at that time, and I’m sure that they contained things about them, how there would be some electronic brains, and so on. Whether they used the term “artificial intelligence,” I’m not quite sure. Good question. I don’t know.

Would you agree that AI, up there with space travel, has kind of always been the thing of tomorrow and hasn’t advanced at the rate we thought it would?

Oh, yes. But there’s a very definite history. People assumed, when computers were first coming around, that pretty soon, we’d automate what brains do just like we’ve automated what arms and legs do, and so on. Nobody had any real intuition for how hard that might be. It turned out, for reasons that people simply didn’t understand in the ’40s, and ’50s, and ’60s, that lots of aspects of it were quite hard, and also, the specific problem of reproducing what human brains choose to do may not be the right problem. Just like if you want to build a transportation system, having it based on legs is not the best engineering solution. There was an assumption that we can automate brains just like you can automate mechanical kinds of things, and it’s only a matter of time, and in the early ’60s, it seemed like it would be a short time, but that turned out not to be true, at least for some things.

What is the state of the technology? Have we built something as smart as a bird, for instance?

Well, what does it mean to make something that is as smart as X? In the history of artificial intelligence, there’s been a continuing set of tests that people have come up with. If you can do X, then we’ll know you’re as smart as humans, or something like that. Almost every X that’s been defined so far, machines have ended up being able to do, though the methods that they use to do it are usually utterly different from the ones that seem to be involved with humans. So the types of things that machines find easy are very different from those kinds of things that people find easy. I think it’s also the case that a lot of things people say, “Gosh, we should automate this,” the mode of automation ends up being different from just sort of the way that you would—sort of if you had a brain in a box, the way that you would use that. Probably a core question about AI is, “How do you get all of intelligence?” For that to be a meaningful question, one has to define what one means by “intelligence.” This, I think, gets us into some bigger kinds of questions.

Let’s dive into those questions. But first, one last “groundwork” question: Do you think we’re at a point with AI where we know what to do, and it’s just that we’re waiting on the hardware again? Or do we have plenty of hardware, and are we still kind of just figuring out how to do it?

Well, it depends what “it” is. Let’s talk a little bit more systematically about this notion of artificial intelligence, and what we have, what we could have, and so on. I suppose artificial intelligence is kind of a—it’s just words, but what do we think those words mean? It’s about automating the intellectual activities that humans do. The story of technology has been a long one of automating things that humans do; technology tends to be about picking a task where we understand what the objective is because humans are already doing it, and then we make it possible to do that in an automatic way using technology.

So there’s a whole class of tasks that seem to be associated with what brains and intelligence and so on deal with, which we can also think of automating in that way. Now, if we say, “Well, what would it take? How would I know if this box that’s sitting on my desk was intelligent?” I think this is a slightly poorly defined question because we don’t really have an abstract definition of intelligence, because we actually only have one example of intelligence that we definitively think of as such, which is humans and human intelligence. It’s an analogous situation to defining life, for example. Where we have only one example of that, which is life on Earth, and all the life on Earth is connected in a very historical way—it all has the same RNA and cell membranes, and who knows what else—and if we ask ourselves this sort of abstract question, “How would we recognize abstract life that doesn’t happen to share the same history as all the particular kinds of life on Earth?” That’s a hard question. I remember, when I was a kid, the first spacecraft landed on Mars, and they were kind of like, “How do we tell if there’s life here?” And they would do things like scoop the soil up, and feed it sugar, and see whether it produced carbon dioxide, which is something that is unquestionably much more specific than asking the general question, “Is there life there?”

And I think what one realizes in the end is that these abstract definitions of life—it self-reproduces, it does weird thermodynamic things—none of them really define a convincing boundary around this concept of life, and I think the same is true of intelligence. There isn’t really a bright-line boundary around things which are the general category of intelligence, as opposed to specific human-like intelligence. And I guess, in my own science adventures, I gradually came to understand that, in a sense, sort of, it’s all just computation. That you can have a brain that we identify, okay, that’s an example of intelligence. You have a system that we don’t think of as being intelligent as such; it just does complicated computation. One of the questions is, “Is there a way to distinguish just doing complicated computation from being genuinely intelligent?” It’s kind of the old saying, “The weather has a mind of its own.” That’s sort of a question of, “Is that just pure, primitive animism, or is there, in fact, at some level some science to that?” Because the computations that are going on in the fluid dynamics of the weather are really not that different from the kinds of computations that are going on in brains.

And I think one of the big conclusions that came out of lots of basic science that I did is that, really, there isn’t a distinction between the intelligent and the merely computational, so to speak. In fact, that observation is what got me launched on doing practical things like building Wolfram|Alpha, because I had thought for decades, “Wouldn’t it be great to have some general system that would take knowledge, make it computational, make it so that if there was a question that could in principle be answered on the basis of knowledge that our civilization has accumulated, we could, in practice, do it automatically.”

But I kind of thought the only way to get to that end result would be to build a sort of brain-like thing and have it work kind of the same—I didn’t know how—as human brains work. And what I realized from the science that I did was that that just doesn’t make sense. It’s sort of a fool’s errand to try to do, because actually, it’s all just computation in the end, and you don’t have to go through this sort of intermediate route of building a human-like, brain-like thing in order to achieve computational knowledge, so to speak.

Then the thing that I found interesting is there are tasks that… So, if we look at the history of AI, there were all these places where people said, “Well, when computers can do calculus, we’ll know they’re intelligent, or when computers can do some kind of planning task, we’ll know they’re intelligent.” This, that, and the other. There’s a series of these kinds of tests for intelligence. And as we all know, in practice, the whole sequence of these things has been passed by computers, but typically, the computers solve those problems in ways that are really different from brains. One way I like to think about it is when Wolfram|Alpha is trying to solve a physics problem, for example. You might say, “Well, maybe it can solve it in a brain-like way, just like people did in the Middle Ages, when it was natural philosophy, where you would reason about how things should work in the world, and what would happen if you pushed this lever and did that, and [see] things had a propensity to do this and that.” And it would be all a matter of human-like reasoning.

But in fact, the way we would solve a problem like that is to just turn it into something that uses the last 300 years of science development, turn it into a bunch of mathematical equations, and then just industrially solve those equations and get the answer, kind of doing an end run around all of that human-like, thinking-like, intelligence-like stuff. But still, one of the things that’s happened recently is there are these tasks that have been kind of holdouts, things where they’re really easy for humans, but they’ve seemed to be really hard for computers. A typical example of that is visual object recognition. Is this thing an elephant or a bus? That’s been a type of question that’s been hard for computers to answer. The thing that’s interesting about that is, we can now do that. We have this website, imageidentify.com, that does a quite respectable, not-obviously-horribly-below-human job of saying, “What is this picture of?” And what to me is interesting, and an interesting episode in the history of science, is the methods that it’s using are fundamentally 50 years old. Back in the early 1940s, people were talking about, “Oh, brains are kind of electrical, and they’ve got [things] like wires, and they’ve got like computer-like things,” and McCulloch and Pitts came up with the whole neural network idea, and there was kind of the notion that the brain is an electrical machine, and we should be able to train it by showing it examples of things, and so on.
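Before continuing the neural-network thread, the “turn it into equations and industrially solve them” point above is concrete enough to sketch in code. The following is a toy illustration in Python with sympy, not Wolfram|Alpha’s actual machinery: a schoolbook projectile question is encoded as an equation of motion and solved mechanically, with no human-style reasoning about levers and propensities.

```python
# Minimal sketch (Python/sympy, illustrative only; not Wolfram|Alpha's pipeline):
# encode a projectile question as an equation of motion and solve it mechanically.
import sympy as sp

t = sp.Symbol("t")
g, v0, theta = sp.symbols("g v0 theta", positive=True)

# Height of a projectile launched from the ground at speed v0 and angle theta.
y = v0 * sp.sin(theta) * t - sp.Rational(1, 2) * g * t**2

# Time of flight: the nonzero root of y(t) = 0.
t_flight = [s for s in sp.solve(sp.Eq(y, 0), t) if s != 0][0]

# Horizontal distance covered in that time.
x_range = sp.simplify(v0 * sp.cos(theta) * t_flight)

print(t_flight)  # 2*v0*sin(theta)/g
print(x_range)   # equivalent to v0**2*sin(2*theta)/g
print(x_range.subs({v0: 10, theta: sp.pi / 4, g: 9.81}))  # ~10.2 m for a 10 m/s throw at 45 degrees
```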

I worked on this stuff around 1980, and I played around with all kinds of neural networks and tried to see what kinds of behaviors they could produce and tried to see how you would have neural networks be sort of trained, or create attractors that would be appropriate for recognizing different kinds of things. And really, I couldn’t get them to do anything terribly interesting. There was a fair amount of interest around that time in neural networks, but basically, the field—well, it had a few successes, like optical character recognition stuff, where you’re distinguishing 26 characters, and so on. It had a few successes there, but it didn’t succeed in doing some of the more impressive human-like kinds of things, until very recently. Recently, computers, and GPUs, and all that kind of thing became fast enough that, really—there are a bunch of engineering tricks that have been invented, and they’re very clever, and very nice, and very impressive, but fundamentally, the approach is 50 years old, of being able to just take one of these neural network–like systems, and just show it a whole bunch of examples and have it gradually learn distinctions between examples, and get to the point where it can, for example, recognize different kinds of objects and images. And by the way, when you say “neural networks,” you say, “Well, isn’t that an example of why biology has been wonderful, and we’re merely following on the coattails of biology?” Well, biology certainly gave us a big clue, but the fact is that the actual things we use in practice aren’t particularly neural-like. They’re basically just compositions of functions. You can think of them as just compositions of functions that have certain properties, and the one thing that they do have is an ability to incrementally adjust, that allows one to do some kind of incremental learning process. The fact that they get called neural networks is because it historically was inspired by how brains work, but there’s nothing really neurological about it. It’s just some kind of, essentially, composition of simple programs that just happens to have certain features that allow it to be taught by example, so to speak.
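The “compositions of functions” description is worth making concrete. Below is a minimal sketch in plain Python/numpy, a generic toy example rather than code from any Wolfram system: two affine maps with a nonlinearity in between, whose parameters are adjusted incrementally from labelled examples until the composition reproduces them.

```python
# Minimal sketch (plain numpy, purely illustrative): a "neural network" here is
# just a composition of simple functions, x -> tanh(x W1 + b1) -> sigmoid(h W2 + b2),
# whose parameters are nudged incrementally so the composition fits the examples.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: XOR, a classic toy problem that needs the hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: evaluate the composition of functions on all examples.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, i.e. the "incremental
    # adjustment" that lets the composition be taught by example.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid)
    b1 -= lr * d_hid.sum(axis=0)

# After training, the learned composition typically reproduces XOR closely.
print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))
```

Everything in this sketch is just arithmetic on arrays of numbers; as Wolfram notes, there is nothing particularly neurological about it beyond the historical inspiration.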

Anyway, this has been a recent thing that for me is one of the last major things where it’s looked like, “Oh, gosh! The brain has some magic thing that computers don’t have.” We can go through all kinds of different things about creativity, about language, about this and that and the other, and I think we can put a checkmark against essentially all of them at this point as, yes, that component is automatable. Now, there’s an interesting thing that I’ve been slowly realizing recently: there’s kind of a hierarchy of different kinds of what one might call “intelligent activity.” The zeroth level of the hierarchy, if we take the human example, is reflexive-type stuff, stuff that every human is physiologically wired to do, and it’s just part of the hardware, so to speak.

The first level is stuff where we have a plain brain, so to speak, and upon being actually exposed to the world, that plain brain learns certain kinds of things, like physiologic recognition. But that has to be done separately for every generation of the species. It’s not something where the parent can pass to the child the knowledge of how to do physiologic recognition, at least not in the way that it’s directly wired into the brain. Then the second level, the level that we as a species have achieved, and doesn’t look like any other species has achieved, is being able to use language and so on to pass knowledge down from generation to generation, which allows us to build up this thing that goes beyond pure one-brain intelligence, so to speak, and make something which is a collective, progressively growing achievement, which is that corpus of human knowledge.

And the thing that I’ve been interested in is this idea that there is language and knowledge, and that we can create it as a long-term artifact, so what’s the next step beyond that? What I’ve realized, and it’s been slowly coming into focus for me, is that a bunch of things I’ve been interested in for many decades now are actually the thing that one should view as the next step in this progression. So we have computer languages, but computer languages tend not to be set up to codify knowledge in the kind of way that our civilization has codified knowledge. They tend to be set up to say, “Okay, you’re going to do these operations. Let’s start from the very basic primitives of the computer language, and just do what we’re going to do.”

What I’ve been interested in is building up what I call “knowledge-based language,” and this Wolfram Language thing that I’ve basically been working on for 30 years now is kind of the culmination of that effort. The point of such a language is that one’s starting from this whole corpus of knowledge that’s been built up by our civilization, and then one’s providing something which allows one to systematically build from that. One of the problems with the existing corpus of knowledge that our civilization has accumulated is that we don’t get to do knowledge transplants from brain to brain. The only way we get to communicate knowledge from brain to brain is turn it into something like language, and then reabsorb it in another brain and have that next brain go through and understand it afresh, so to speak.

The great thing about computer language is that you can just pick up that piece of language and run it again and build on top of it. Knowledge usually is not immediately runnable in brains. The next brain down the line, so to speak, or of the next generation or something, has to independently absorb the knowledge before it can make use of it. And so I think one of the things that’s pretty interesting is that we are to the point where when we build up knowledge in our civilization, if it’s encoded in this kind of computable form, this sort of standardized encoding of knowledge, we can just take it and expect to run it, and expect to build on it, without having to go through this rather biological process of reabsorbing the knowledge in the next generation and so on.

I’ve been slowly trying to understand the consequences of that. It’s a little bit beyond what people usually think of as just AI, because AI is about replicating what individual human brains do rather than this thing that is more like replicating, in some more automated way, the knowledge of our civilization. So in a sense, AI is about reproducing level one, which is what individual brains can learn and do, rather than reproducing and automating level two, which is what the whole civilization knows about.


Interview with Stephen Wolfram on AI and the future originally published by Gigaom, © copyright 2015.
