I'm not "intelligent", thanks
How did the AI movement come to the radical assumption that the "intelligence" in the term "Artificial Intelligence" (I'm referring to strong AI here, which is meant to replicate human "intelligence") actually describes the entirety of the human being? Wikipedia offers this definition of (strong) AI: "The field was founded on the claim that a central property of humans, intelligence - the sapience of Homo sapiens - can be so precisely described that it can be simulated by a machine."
"Central property of humans"? I don't feel "intelligence" comes even vaguely close to being an appropriate definition of the experience of being human, of this experience of being the me I've been all my life, or any other "central property". It's an incredibly glib term for the totality of one's humanity that conveniently reduces us into something that could conceptually be recreated as a robot. I think with any such discussions we initially should take the step of using a more apt description to get us in the right ballpark, for example "Human Consciousness". Then it seems we are approaching our humanity more correctly for what it is. Subsequently we can talk about replicating it with terms such as "Artificial Human Consciousness" or the better sounding "Synthetic (Human) Consciousness".
As soon as we do that, things suddenly become a whole lot more complicated. Why? Simply because the scientific world has a huge problem dealing with the whole phenomenon of consciousness. For a number of years now, the Australian philosopher David Chalmers has been highlighting this "hard problem of consciousness", which is essentially the still-unanswered question "How is the experience of subjectivity generated?", and "hard" because it can never be approached by empirical methods. He suggests that consciousness is fundamentally a different kind of property of the universe, one that isn't based on matter. (He calls this panprotopsychism. I'm still amazed he's got away with it with so little trouble!)*
The prevailing paradigm that mainstream science is sticking to is that consciousness must be generated by the matter in the brain. There's significant data to suggest so, though many are not convinced it's as simple as that. But if this experience of individuated subjective consciousness that you and I are having right now, and have had our entire lives, really is generated by a bunch of electrons, atoms and molecules in the form of neurons, then it's entirely possible that we don't understand Matter at all - which at this point would be terribly embarrassing!
Either way, as it stands the scientific world doesn't really have a handle on consciousness, and things don't seem to be going well. Personally, I don't think science as we know it will survive this particular adventure intact!
Here's a video of the leader of the Singularity movement himself, Ray Kurzweil, acknowledging this "hard problem" in a stunningly glib aside (at the 25:00 mark): http://www.thoughtware.tv/videos/watch/4536-The-Ubiquity-And-Predictability-Of-The-Exponential-Growth-Of-Information-Technology
[QUOTED FROM THE VIDEO]
"Chalmer's hard problem is really an impossible problem from a scientific [perspective]...because of this conceptual gap between objective reality and subjective experience which is very much a first person experience, which only concludes that there's still a role for philosophy..it very quickly gets very mysterious"
- Ray Kurzweil, Singularity Summit, 2009