More links
Peter Morville Discusses User Experience Strategy
Peter presents a "T-shaped" consulting process, intersecting Information Architecture with User Experience Strategy. He also discusses the concept of user experience, how to avoid prejudicing your study with the framework of your expectations, why a strategy can have impact, and how to think about the strategy paradox and strategizing as part of making disruptive technologies -- see his column here.
Robin Marantz Henig Discusses Sociable Robots
The Real Transformers in the NY Times. Among other things, the article asks whether robots will ever have self-consciousness. What do you think? And how would you go about proving to someone that you yourself are conscious?
Do you think animals are conscious?
When I was younger, I theorized that one difference between conscious people and other animals is that, with consciousness, people are aware of being aware of themselves. Other animals may be self-aware, in that they know they exist and can reason about their own health, comfort, environment, and so on, but the question for me is whether they can examine and reason about that quality of being self-aware: to be aware of being aware of being aware of themselves. That is abstract thought of a special nature that I find integral to the concept of consciousness.
We have robots that can reason about the situation they are in and match environmental inputs to reasoning about how they should change their own situation. Even if such a robot refers to itself in its planning and calculations, I would argue that does not make it conscious. Among other things, it ought to have a sense of "embodied intelligence" -- knowing what in the world constitutes its self.
What do you think constitutes consciousness? And how would you tell in a robot or AI?
no subject
But then, the idea of "intelligence" as this mass noun, like "water," this stuff that you can have more or less of, is probably just bad folk psychology. It tends to be an average of many specific abilities, some of which can be wildly disparate; compare, for example, the intelligence of someone with Williams syndrome with the intelligence of an autistic child. Is this really a single-dimensional attribute? Probably not.
Similarly with consciousness -- the functional definitions all seem to be grasping at, "Is this thing 'one of us,' or not?" And the answer with artificial systems for a long time will probably be, "Not in any way that really matters," followed by a period of, "In some ways yes, and in some ways, no." And everyone's definitions of consciousness will hinge on what they want to accept as conscious, rather than starting from some kind of first principles and figuring out what's conscious from there.
Personally, I'm sympathetic to philosophers David Chalmers and Ned Block when they talk about "phenomenal consciousness" -- that the most interesting question is, "Is there something that it's like to be that thing?" Or in other words, when I imagine being that thing, am I imagining a "sensory world" that actually exists, or am I making a mistake and imagining experiences that don't exist? (For an example of the latter: there's probably nothing that it's like to be a thermometer, even though one could argue that it embodies a kind of simple representation of the external world.) But I see no way with current science to tell whether something is phenomenally conscious or not; it might even be unknowable to anybody but the thing itself, though I wouldn't be hasty in jumping to that conclusion.
no subject
An underlying assumption is that machine intelligence will in some way be similar to human awareness. I personally think that biological computers will be the coming wave, so the question of intelligence will be moot: the computers will have humanlike brains.
no subject
(Anonymous) 2007-07-31 12:07 am (UTC)
Anyway, your conscious-of-consciousness description sounds a lot like the same kind of processing that goes into hypothetical scenarios, and your definition of consciousness might exclude a goodly number of adult humans.
no subject
http://www.theafternow.com/ -- in the episode called "Rachael's Mutt"