Interesting Debate
The problem with the question is that "smart" is too vague of a term. Will there ever be a computer that can store more information than a single human? Absolutely (if it doesn't already exist). Will there ever be a computer that can perform calculations faster than a human? Absolutely (if it doesn't already exist). Will there ever be a computer that can generate new ideas or completely unique solutions to problems that have not previously been encountered? I don't know. Is it possible? I'd say absolutely. Will we figure out how to build it? I'm not sure.
The brain is a (relatively) simple device. It's based on electrical impulses generated by chemical reactions through a network of connections. If we can disassemble it and truly understand how it works, then I have no doubt that we can simulate it with synthetic parts and even improve upon its original design.
Great topic. I think Neo said it best: "We need machines to survive and they need us to survive," meaning computers and humans each have characteristics that the other needs. However, there will come a day when AI and robotics are a normal part of life on this planet... who knows what will happen then.
We have not fully unlocked AI.
If and when that happens, and you cannot distinguish between a human thought process and a computer process, I believe a computer will be able to out-function a human if we allow it to.
Without starting a theological debate: if a computer ever becomes fully sentient and self-aware, and it happens to have risk-management and self-preservation attributes... we may be in big trouble.
Here are some counter-intuitive thoughts on this:
1. Computers can't goof. They only do two things: 1 and 0. It takes a long time and a LOT of AI before they start considering boundary values and out-of-margin input.
2. Humans mess up almost every second. It's our mistakes and failures that actually lead us to innovation... computers are completely incapable of that. The only failure they "know" comes through a methodical, iterative process (albeit on a grander and faster scale than a single human brain).
Originally Posted by 8D_In_Trunk,Mar 19 2009, 01:28 PM
Here's some counter-intuitive thoughts on this:
1. Computers can't goof. They only do two things: 1 and 0. It takes a long time and a LOT of AI before they start considering boundary values and input that is out of margin.
2. Humans F up almost every second. It's our mistakes and failures that actually lead us to innovation. . . computers are completely incapable of that. The only failure they "know" is through a methodical and iterative process (albeit on a grander and faster scale than a single human brain).
2. Computers can be (and already have been) programmed to "learn" from their mistakes, so I don't see how you can say they are completely incapable of it. I admit it isn't anywhere near the level of human learning, but it is a step in that direction, which is all I'm saying. It's a building block for future development.
To the original point: the basic ideas behind AI are there, and some prototypes can do very impressive things, but overall I don't think we'll see true AI for another few hundred years. If then.
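On the "learn from mistakes" point, for anyone curious, here's a minimal sketch of the idea: a classic perceptron that only updates itself when its prediction is wrong. (The AND example and the learning rate are my own choices, purely for illustration.)

```python
# Illustrative sketch: a tiny perceptron that updates its weights
# ONLY when it makes a mistake -- a machine version of
# "failing toward a solution".

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a binary function by correcting errors."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred      # nonzero only on a mistake
            w[0] += lr * err * x1    # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND: it guesses wrong early on, then self-corrects.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

Of course it's "mistakes" in a purely mechanical, iterative sense, which is basically the point both sides are circling.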
Originally Posted by INTJ,Mar 19 2009, 10:38 AM
Binary is soooo 1962. There are fuzzy-logic washing machines, so I'd assume that with good programming we can approximate "situational learning," even if it is brute force.
Computers will produce a Vulcan, and Humans will remain like Shatner.
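Since fuzzy logic came up: the core idea is easy to show. Instead of a hard 1-or-0, membership in a set is a degree between 0 and 1. A minimal sketch (the temperature ranges are made up for illustration, not taken from any real washing machine):

```python
# Fuzzy logic in a nutshell: membership is a degree in [0, 1],
# not a hard 1-or-0. Example sets: "cold" and "hot" water,
# with made-up temperature ranges.

def cold(temp_c):
    """Degree to which temp_c is 'cold': 1 at <= 10C, 0 at >= 30C."""
    if temp_c <= 10:
        return 1.0
    if temp_c >= 30:
        return 0.0
    return (30 - temp_c) / 20  # linear ramp between the two

def hot(temp_c):
    """Degree to which temp_c is 'hot': 0 at <= 30C, 1 at >= 60C."""
    if temp_c <= 30:
        return 0.0
    if temp_c >= 60:
        return 1.0
    return (temp_c - 30) / 30  # linear ramp between the two

# A fuzzy controller can act on partial truth: 20C is 0.5 "cold",
# neither fully cold nor fully not-cold -- no single 1-or-0 answer.
```

So even a machine built on 1s and 0s can reason over shades of gray at the software level.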
Hmm, I guess "smart" is really throwing a wrench into the works. The main point we are all arguing is that computers currently lack the power of reason. This brings us to things like I, Robot and Gears of War: AI keeps getting better and better, as evidenced by many current video games (although that AI is still loosely based on a given series of events and their outcomes). Could something like I, Robot be technically possible?
The essence of this argument lies within one simple question: can a human create a computer that can think on its own?