Thursday, January 21, 2016

Some links and an extended repost on awareness

Something that still baffles me: the idea of different learning styles is simple rubbish, yet we keep feeding it to teachers. Also, see my earlier complaints.

A good, long article on CRISPR and the future of gene editing.

And Jef Yakst at The Scientist gives a good overview of neurogenesis in adults.

At Neuropatch, Janet Kwasniak has a piece on letting go of the magic of consciousness.


Conscious Awareness


We are “aware” that we are here, that we are humans, that we live on earth, and that the man over there wishes us to pay him money. We are part of a complex social, national, and family web. We form representations and models of these different relationships of our world and our selves. When we need one of those representations, or some facet of information, we can turn our attention to it, and it will readily be there. “This is who I am. I live in the United States.”

None of this awareness is going to transcend that of a robot that represents its self as being at “this location on the factory floor, in this position on the assembly line.” If the robot senses/represents, “I am out of screws, roll to far wall and get more,” however such robots come to do that, it will have awareness in a similar way that any of us human-robots do at this moment. When we form the representations/thoughts that “I am out of food; need to go to store,” the representations and sensations we are aware of provide a great deal of information, ranging from the location of the car, to the hunger in our body, to the location of the store.
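
To make the comparison concrete, here is a minimal sketch in Python of the kind of representational structure such a factory robot might carry. Every name and detail below is invented for illustration, not drawn from any real robot. The point is only that “being aware” of running out of screws need be nothing over and above holding and updating this sort of information:

    from dataclasses import dataclass

    @dataclass
    class RobotSelfModel:
        """A toy self-representation: location, task position, and supplies."""
        floor_location: tuple    # (x, y) position on the factory floor
        assembly_station: int    # position on the assembly line
        screws_remaining: int
        supply_wall: tuple       # where more screws are stored

        def next_action(self) -> str:
            # "Being aware" of running out of screws is just a check on
            # stored state, followed by an inference about what to do next.
            if self.screws_remaining == 0:
                return f"roll to supply wall at {self.supply_wall} and restock"
            return f"continue assembly at station {self.assembly_station}"

    robot = RobotSelfModel(floor_location=(12, 4), assembly_station=3,
                           screws_remaining=0, supply_wall=(0, 40))
    print(robot.next_action())  # "roll to supply wall at (0, 40) and restock"
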
The kind of awareness (vast representational stores) that adult humans have surpasses what higher animals, babies, and our new robots have. As in the previous post, we have representational stores in abundance about our selves and our relationship to external events. We have them in ways that lesser "conscious" entities, such as a chimpanzee or IBM's Watson, do not. There is good reason to believe that language acquisition is what allows adult humans to have this kind of widespread and consistent information about our selves and the world.
But this high level of awareness, the vast informational stores about self and world, is not consciousness as it is often defined. It certainly is not qualia as the term is usually invoked. As explained in previous posts, your representational structures for seeing red are unique. No one else has quite the same representational or informational repertoire for particular shades of red, and no one else has the bodily processes, the feelings and emotions, that hang onto any particular shade. The person who played Big Bird for a long time will have representations and bodily attitudes (feelings of pleasure from remembrance) toward that particular shade of yellow in a way far different from anyone else's. This should not be a bewildering fact. The only way to get a creature to represent and respond to a certain shade of color in the exact same way as another individual is to create that exact individual.

That there is a great deal of overlap in our representations and bodily responses to a shade of color simply reflects the fact that we share the same general sensory mechanisms and a great deal of environmental and developmental structure. So, many of our representations and emotional responses will be similar to those of other humans, even if each of us has certain unique differences. It is also not surprising that we are walled off from imagining the basic representational repertoires of other beings, such as the representational structures of a dolphin.
Though language and other brain capacities (attention span, for example) give humans information in quantities greater than a roach, a capuchin, or a robot, there is not some level of awareness at which we have now woken up and “know that we exist.” Language does allow for self-representations and self/world representations that mean we represent our selves in widespread, globally countable ways.
The moral of this story is that humans do not have some property of awareness or consciousness that stands outside information, generally speaking. Humans no more “access awareness” than a Google car does when it represents another "car over there." We do, of course, have a somewhat different kind of “awareness,” but only because we represent a vastly greater store of information. Yes, like self-driving cars, we “see that car over there” and will avoid it, while at the same time (or in quick succession) representing that “I really need to get home” or that “This ice cream I bought from the store is going to taste great.” In this sense, we are informationally far richer than a dog, a baby, or, likely, a Google car.

For clarity: even if we program all of Wikipedia into the possible representational structures of the Google car, so that it surpasses us in total information and in access to and use of that information, there may still be something to be said about the continuous ego-space relations that humans come to inhabit. That is, there may still be something to be said about the kind of narrative, bodily, homeostatic, and centralized sensory content that humans constantly represent about their selves. That information is both vast and centrally focused, so to speak. Perhaps we enjoy it in ways that we will not find useful to build into Google cars. But however centralized that self-representation is, it is still just an information structure, not some kind of awareness property or consciousness property that we have now tapped into.
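
As a rough illustration of the claim that even this centralized self-representation is “just an information structure,” one could caricature it as a nested record that attention merely reads from. Again, this is a hypothetical Python sketch, not a model of any actual cognitive system:

    # A deliberately crude caricature of the "centralized" self-model described
    # above: narrative, bodily/homeostatic signals, current sensory content,
    # and plans, bundled into one continuously updated information structure.
    self_model = {
        "narrative": ["I live in the United States", "I really need to get home"],
        "homeostatic": {"hunger": 0.8, "fatigue": 0.3},
        "sensory": {"visual": "car over there",
                    "prediction": "this ice cream is going to taste great"},
        "plans": ["go to store", "buy ice cream", "drive home"],
    }

    def attend(facet: str):
        """Turning attention to some facet is just retrieving stored information."""
        return self_model.get(facet)

    print(attend("homeostatic"))  # {'hunger': 0.8, 'fatigue': 0.3}
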

Just as we eliminate consciousness as some property that emerged in animals and humans, we will stop seeing awareness as some property of the world. We understand, generally speaking, what baseline sensory content is, and we understand how nervous systems represent, routinize, and draw future inferences about external environments. As those nervous systems ramp up, even to the level of human knowledge, there are no grounds for seeing consciousness or awareness as arising. We represent the world out there in vast ways, both in our brains and in our theories, but such representations and knowledge do not transcend information or relational structure to reach some different kind of representation. We do not reach a state of awareness that surpasses that of any other external representing structure. In this sense, humans do not become aware of their external world in a way different from how IBM's Watson was aware of a Jeopardy! clue. We just happen to be programmed with a wider and more encompassing representational structure.
