Is Artificial Intelligence Possible?
Originally posted by chroot
Does it matter what you think?
I just ask that people avoid entering discussions for which they are unprepared.
- Warren
I for one respect his opinion, therefore it does matter what he thinks.
As for asking that people avoid entering discussions for which they are unprepared:
Your request has been denied
I think this post shows just how little understanding you have of a subject you are prepared to flame others about.
The ability to solve problems in a manner that the programmer "probably never imagined" does not change the fact that it is solving the problem in the way that it was programmed to solve it. This would therefore NOT be "reasoning" in any sense.
That YOU consider reasoning "far too high level a concept to be useful" does not change the fact that reasoning is what would be required for true AI, which was what Magician was asking about.
Let's take an example: someone sets up an untrained neural network, provides no information to it besides exemplars, and discovers a few hours later that it's able to differentiate human voices. The researcher then begins "killing off" neurons, one after another, to see if he can isolate where the "knowledge" is. Strangely enough, the network is able to function correctly with only a tiny fraction of its original neurons. The handful that are left alive are still not enough for the researcher to come to a conclusive "mechanism" through which the network was able to differentiate.
Then the researcher takes the same trained weight matrix, and starts over. Except this time, he kills different neurons. Guess what? The effect is the same -- the network is still able to solve the problem with only a fraction of its neurons intact -- and this time it's a different group. And the researcher is still no closer to being able to pin down what mechanism the network found to perform its work.
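A toy version of this lesion experiment can be sketched in a few lines. Everything here is illustrative: the task, the network size, and the 25% survival rate are made-up numbers, not the actual study being described. The point is only that after training, two *different* random subsets of hidden units can each still solve the problem, so no single group of neurons is "where the knowledge lives":

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (stand-in for voice discrimination): classify points by the
# sign of x0 + x1, keeping a margin so the classes are well separated.
X = rng.uniform(-1, 1, size=(400, 2))
X = X[np.abs(X.sum(axis=1)) > 0.3][:200]
y = (X.sum(axis=1) > 0).astype(float)

H = 64  # deliberately over-sized hidden layer -> redundant "neurons"
W1 = rng.normal(0, 1.0, (2, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, H)
b2 = 0.0

def forward(X, mask=None):
    h = np.tanh(X @ W1 + b1)
    if mask is not None:
        h = h * mask  # "kill" neurons by zeroing their outputs
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

# Plain gradient descent on cross-entropy loss.
lr = 0.2
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    g = (p - y) / len(X)                  # dL/dlogit
    gh = np.outer(g, W2) * (1 - h**2)     # backprop through tanh
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

def accuracy(mask=None):
    return ((forward(X, mask) > 0.5) == (y > 0.5)).mean()

full_acc = accuracy()

# Lesion study: two different random subsets, each keeping only ~25%
# of the hidden units alive.
mask_a = (rng.random(H) < 0.25).astype(float)
mask_b = (rng.random(H) < 0.25).astype(float)
acc_a, acc_b = accuracy(mask_a), accuracy(mask_b)
print(round(full_acc, 2), round(acc_a, 2), round(acc_b, 2))
```

With the over-sized hidden layer, the trained solution is spread redundantly across many units, which is why two disjoint-looking lesions both leave the network functioning, and why inspecting the survivors tells you little about "the" mechanism.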
Did the programmer who made the abstract neural network engine have anything to do with the problem of differentiating human voices? Did the programmer "tell the computer how to solve it?" No. Absolutely not. Did the computer solve it anyway? Yes. Absolutely so.
The term "reason" is too high level, because examples of reasoning are far too concrete. Reasoning is actually the end result of a large number of smaller (and equally important) computations. There's no way to describe the problem of differentiating human voices in terms of your "reasoning." On the other hand, a basic chess computer simply considers an enormous number of different moves, evaluating the goodness of each. It eventually comes to the same conclusion as a good human player, and makes the best move.

Is it reasoning? Well, it comes to the same answer, but we wouldn't call it "reasoning" because it's too mechanical. Well, guess what? "Reasoning" is too high-level a concept. "Reasoning" implies some mystical, fanciful computation engine that doesn't do things mechanically. Guess what else? There's no evidence people work any differently than complex machines. Whether you like it or not, the term "reasoning" has absolutely no place in any discussion of artificial intelligence, BECAUSE IT'S TOO HIGH LEVEL A CONCEPT.
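The "consider an enormous number of moves, scoring each" procedure described above is the classic minimax search. A minimal sketch, using tic-tac-toe instead of chess so the whole game tree fits in a toy example (the board encoding and scoring convention here are my own choices, not anything from the post):

```python
# Exhaustive negamax (minimax from the mover's point of view): no
# "reasoning", just mechanical enumeration of every legal move, yet it
# plays the game perfectly.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    opp = 'O' if player == 'X' else 'X'
    if winner(board) == opp:
        return -1, None                      # previous move lost us the game
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None                       # board full: draw
    best = (-2, None)
    for m in moves:                          # enumerate every legal move...
        child = board[:m] + player + board[m + 1:]
        score = -negamax(child, opp)[0]      # ...and score each outcome
        if score > best[0]:
            best = (score, m)
    return best

# X to move with two in a row on top: the search finds the winning
# square (index 2) purely by brute enumeration.
score, move = negamax('XX OO    ', 'X')
print(score, move)  # -> 1 2
```

The chess version differs only in scale (a heuristic evaluation replaces exhaustive search to the end of the game), which is exactly why "it comes to the same answer as a good human" while remaining entirely mechanical.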
- Warren
magician,
How does one define a "true" AI?
According to the folks here, a "true" AI is one that has no structure or preconditions from "birth" -- all it knows, it learns on its own, as has been said, through "reasoning." According to that logic (as tenuous as it is), neural networks are "truer" AI than highly structured systems like expert systems.
The operative term is "domain knowledge," and it means "knowledge about a specific problem that is included in the structure of an AI system." It seems the folks here equate "lack of domain knowledge" with "smarter."
Personally, I don't choose to believe that the presence or absence of domain knowledge has any bearing on the "trueness" of an intelligence. I happen to believe that the human brain is a highly structured machine, not a puddle of self-organizing muck.
- Warren