Thursday, January 21, 2016

Some links and an extended repost on awareness

Something that still baffles me: the idea of different learning styles is simple rubbish, yet we keep feeding it to teachers. Also, see my earlier complaints.

A good, long article on CRISPR and the future of gene editing.

And Jef Yakst at The Scientist gives a good overview of adult neurogenesis.

At Neuropatch, Janet Kwasniak has a piece on letting go of the magic of consciousness.


Conscious Awareness


We are “aware” that we are here, that we are humans, that we live on earth, and that the man over there wishes us to pay him money. We are part of a complex social, national, and family web. We form representations and models of these different relationships of our world and our selves. When we need one of those representations or some facet of information, we can turn our attention to it, and it will readily be there. “This is who I am. I live in the United States.”

None of this awareness is going to transcend a robot that represents its self as being at “this location on the factory floor, in this position on the assembly line.” If the robot senses/represents, “I am out of screws, roll to far wall and get more,” however such robots will do that, it will have awareness in a similar way that any of us human robots do at this moment. When we form the representations/thoughts that “I am out of food; need to go to store,” the representations and sensations we are aware of provide a great deal of information, ranging from the location of the car, to the hunger in our body, to the place of the store.
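To make that concrete, here is a minimal sketch in Python, with invented names (`Robot`, `self_model`, `restock`), of the kind of self-representation described above: a data structure encoding location and inventory, plus a routine that acts on it. On the view argued here, nothing over and above the data structure and the routine is doing the "being aware."

```python
# A minimal, illustrative sketch (not any real robotics API): the robot's
# "awareness" here is just a self-model it can query plus routines that act on it.

class Robot:
    def __init__(self):
        # Self-model: where I am, what station I occupy, what supplies I hold.
        self.self_model = {
            "location": "factory floor, bay 3",
            "assembly_position": "station 7",
            "screws_remaining": 0,
        }

    def out_of_screws(self):
        # "I am out of screws" is just a check against the self-model.
        return self.self_model["screws_remaining"] == 0

    def restock(self):
        # "Roll to far wall and get more" is a routine triggered by that check.
        if self.out_of_screws():
            self.self_model["location"] = "far wall, supply bin"
            self.self_model["screws_remaining"] = 500


robot = Robot()
robot.restock()
print(robot.self_model)  # updated location and inventory; no further "awareness" involved
```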
The kind of awareness (vast representational stores) that adult humans have surpasses what higher animals, babies, and our new robots have. As in the previous post, we have representational stores in abundance about our selves and our relationship to external events. We have these in ways that lesser "conscious" entities, such as a chimpanzee or IBM's Watson, do not. There is good reason to believe that language acquisition allows adult humans to have this kind of widespread and consistent information about our selves and the world.
But this high level of awareness, the vast informational stores about self and world, is not consciousness as often defined. It certainly is not qualia as often invoked. As explained in previous posts, your representational structures for seeing red are unique. No one else has quite the exact representational or informational repertoire regarding particular shades of red, nor the bodily processes, the feelings/emotions, that hang onto any particular shade. The person who played Big Bird for a long time will have representations and bodily attitudes (feelings of pleasure from remembrance) towards that particular shade of yellow in a way far different from anyone else's. This should not be a bewildering fact. The only way to get a creature to represent and respond to a certain shade of color in the exact same way as another individual is to create that exact individual.

That there is a great deal of overlap in our representations and bodily responses to a shade of color just follows from the fact that we share the same general sensory mechanisms and a great deal of environmental and developmental structure. So, many of our representations and emotional responses will be similar to those of other humans, even if each of us has certain unique differences. Also, it is not surprising that we are walled off from imagining the basic representational repertoires of other beings, such as the representational structures of a dolphin.
Though language and other brain aspects (attention span, e.g.) give humans information in quantities greater than a roach, a capuchin, or a robot, there is not some level of awareness at which we have now woken up and “know that we exist.” Language does allow for self-representations and self/world representations that mean we represent our selves in widespread, globally accessible ways.
The moral of this story is that humans do not have some property of awareness or consciousness that is outside information, generally speaking. Humans do not tap into awareness any more than a Google car does when it represents another "car over there." We of course do have a slightly different kind of “awareness,” but that is only because we represent a vastly greater store of information. Yes, like self-driving cars, we “see that car over there” and will avoid it, while at the same time (or in quick succession) representing that “I really need to get home” or that “This ice cream I bought from the store is going to taste great.” In this sense, we are informationally far richer than a dog, a baby, or likely a Google car.

For clarity, even if we program all of Wikipedia into possible representational structures of the Google car, so that it surpasses us in total information and in access to and use of that information, there may still be something to be said about the continuous ego-space relations that humans come to inhabit. That is, there may still be something to be said about the kind of narrative, bodily, homeostatic, and centralized sensory content that humans constantly represent about their selves. That information is both vast and centrally focused, so to speak. Perhaps we enjoy this in ways that we will not find useful to put into Google cars. But despite that kind of centralized self-representation that we create, it is still just an information structure, and not some kind of awareness property or consciousness property that we have now tapped into.
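As a purely hypothetical sketch (the dictionaries and values below are invented, not a description of any actual system), the difference between the car's representations and ours can be pictured as a difference in the size and self-reference of an information structure, not a difference in kind:

```python
# Illustrative sketch: both "models" below are just structured information.
# The human-like one is larger and more self-referential, but no new
# property of awareness appears anywhere between the two.

car_model = {
    "car_ahead": {"distance_m": 12.0, "closing": True},
    "self": {"speed_mps": 8.3, "lane": 2},
}

human_model = {
    "car_ahead": {"distance_m": 12.0, "closing": True},
    "self": {
        "narrative": "I really need to get home",
        "anticipation": "this ice cream is going to taste great",
        "body": {"hunger": "mild", "fatigue": "moderate"},
        # ...and a vastly larger store of entries like these.
    },
}

print(len(str(car_model)), len(str(human_model)))  # more information, same kind of thing
```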

Just as we eliminate consciousness as some property that emerged in animals and humans, we will stop seeing awareness as some property of the world. We understand, generally speaking, what baseline sensory content is, and we understand how nervous systems represent, routinize, and draw future inferences about external environments. As those nervous systems ramp up, even into human knowledge, there is not going to be reason to see consciousness or awareness as arising. We are representing the world out there in vast ways, both in our brains and in our theories, but such representations and knowledge do not transcend information or relational aspects to reach some different kind of representation. We do not reach a state of awareness that surpasses that of any other type of external representing structure. In this sense, humans do not become aware of their external world in a way different from the way IBM's Watson was aware of a Jeopardy! clue. We just happen to be programmed with a wider and more encompassing representational structure.

Thursday, January 14, 2016

Graziano in the Atlantic

Here is a short piece in the Atlantic by Michael Graziano, whom I have talked about before. Also see Michael Smith's take on the piece at Self Aware Patterns. At the Atlantic there are 600 comments on the piece, so Graziano's brand of eliminativism ruffles a few feathers. A few thoughts on the article:


In general I follow Graziano's thinking. He claims that we misconstrued the qualities of white light because of the information we took in about it. It seemed white, so to speak. Likewise, he claims, we have developed poor concepts about consciousness because of the limited information ("consciousness") that presents itself to us, to our other self-reflecting, self-perceiving aspects. And thus we endlessly get really bad conceptions about that entity we have labeled consciousness, including that it is something outside of information or representation.

I do wonder at the idea that “this is the way consciousness seems to us.” By the time of, say, Locke, Kant, Descartes, or the later 20th-century batty-qualia-philes, it becomes difficult to claim that we are forming our ideas about what our consciousness is just by searching our inner world. Obviously, the Greeks, given that they were the first formalizers, traversed some ground rather naively. Perhaps their analyzing of their own consciousness was rather unadulterated, so to speak. But certainly any later thinker has already been thoroughly fed societal and philosophical conceptions about their “consciousness” as they go in search of knowledge about it. Or in Graziano's terms, their reflecting on the information of their consciousness already includes information about what consciousness is supposed to be, along with endless other ideas about the general workings of the world.

Now, maybe most of that historical lead-up to any particular theory will be swamped by any individual's conscious experience, and this leads to the continued poor interpretation of that information, given the general structure of that relationship (transparency, for one).

However, I do wonder if a few different turns, perhaps a lucky run of ten obvious Phineas Gage-like cases in the 17th century, would have led to a vastly different analysis of what consciousness is. Even without 20th-century instruments, perhaps all the explanations and posits of “what consciousness is” would have been very different. Maybe we would never have come close to postulating a hard problem in the manner that we have, or to thinking that this subjective-versus-objective divide is so much of a problem.

And speaking of subjectivity, which a lot of people espouse as a guard against scientific pontification on consciousness and qualia (see some of those 600 comments), I still find it a confusing mess.

The exact arrangement of the atoms/molecules of any given tree may be necessary to explain that tree's interaction within an environment. Think of thoroughly exploding a tree. To explain where every part of that tree ends up requires (I assume) a very particular (subjective?) analysis of that exact arrangement. The fact that such a singular analysis, given that this is the only exact arrangement of these particles in the world, is necessary to explain the future of these particles, seems banal to me. But it also seems like we are not doing anyone any favors by talking about the subjectivity of the tree as opposed to the objective way those particles will play out. There is no reason to divide the world into subjective and objective. What science aims for is a general analysis of why a thing happens. Say, why, generally speaking, all those tree parts will end up where they will. If humans care about exploding a particular tree in a certain way to send those parts to exact spots, they will have to particularize that tree. Hence engineering.

Following a Graziano-like account above, I find people arguing over this with regard to consciousness or first-person subjectivity to be baffling. There is often this claim that the first-person world is private and therefore science cannot measure it. Or that consciousness and qualia reside in a first-person realm and science resides in a third-person one, and therefore the two shall never mix. This only has significant weight if you have posited some realm of consciousness that holds information in a way beyond the way other materials, say computers, hold information. Let's say a computer runs a program with influence from external components. Yes, in order to measure everything that that computer engages in, you will have to provide a full account of the internal structure of that computer. But that should give you the full “subjectivity” of that computer. The same has to hold for brains, unless you posit some mental realm, some consciousness or qualia-like entity that is beyond science.
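As a toy illustration of that last point (a hypothetical sketch; the class and method names are invented), consider a small program whose run is influenced by external inputs: a full account of its internal state just is a full account of everything that is "going on for" that program, with nothing private left over.

```python
# Illustrative sketch: the program's "subjectivity" is exhausted by its internal state.

class Program:
    def __init__(self):
        self.state = {"inputs_seen": [], "counter": 0}

    def step(self, external_input):
        # The run is shaped by external components (inputs)...
        self.state["inputs_seen"].append(external_input)
        self.state["counter"] += 1

    def full_account(self):
        # ...but dumping the internal structure captures all of it.
        return dict(self.state)


p = Program()
p.step("sensor reading A")
p.step("sensor reading B")
print(p.full_account())  # nothing about this program's "first-person" situation is left out
```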

As in my previous parlance, if an experience you have is just information, it will be a unique representational structure. Only you are representing the information of “John, in NYC, my dog Shaggy is on the floor, I am reading the Huffington Post.” That is a very particularized set of information, and it is subjective in that sense. But there is no reason to think that information is outside of science, any more than the information in the computer you are reading this on is, even though that computer holds a unique array of information and its RAM is processing a unique array of information. In order to predict the exact behavior of that computer, or explain all its qualities, we will have to know everything about that particular object. That is just particularization, which is a form of subjectivity, I suppose, but only in a rather banal sense. People who think subjectivity or the first person means something different have not convinced me that there is something useful there. Of course, agreeing with Graziano, I believe they are likely still holding some poorly conceptualized idea of what consciousness is.

So, the idea that there is a subjective realm and an objective realm that science is going to have a difficult time parsing still baffles me.  


Monday, January 11, 2016

Metzinger's Model


I am working on another rehash of eliminating our concept of consciousness. 

Meanwhile, here is an enjoyable excerpt from Thomas Metzinger's The Ego Tunnel. Also, see his latest Edge.org answer.
We are Ego Machines, natural information-processing systems that arose in the process of biological evolution on this planet. The Ego is a tool— one that evolved for controlling and predicting your behavior and understanding the behavior of others. We each live our conscious life in our own Ego Tunnel, lacking direct contact with outside reality but possessing an inward, first-person perspective. We each have conscious self-models— integrated images of ourselves as a whole, which are firmly anchored in background emotions and physical sensations. Therefore, the world simulation constantly being created by our brains is built around a center. But we are unable to experience it as such, or our self-models as models. As I described at the outset of this book, the Ego Tunnel gives you the robust feeling of being in direct contact with the outside world by simultaneously generating an ongoing “out-of-brain experience” and a sense of immediate contact with your “self.” The central claim of this book is that the conscious experience of being a self emerges because a large portion of the self-model in your brain is, as philosophers would say, transparent.

We are Ego Machines, but we do not have selves. We cannot leave the Ego Tunnel, because there is nobody who could leave. The Ego and its Tunnel are representational phenomena: They are just one of many possible ways in which conscious beings can model reality. Ultimately, subjective experience is a biological data format, a highly specific mode of presenting information about the world, and the Ego is merely a complex physical event— an activation pattern in your central nervous system.



And here is a refurbished version of an earlier post of mine (Moving Towards Eliminativism):

Following the above, I am going to try my hand at denying “consciousness,” at least as some fundamentally new property or object. I am going to try to reduce it to something simpler and say that it is nothing above that analysis. For more on ontology and reduction, see my previous post on John Heil, who lays out similar key ideas.

I am essentially going to reduce consciousness to a rapid process of (self-)monitoring, (self-)representing, and action (of the self), with all of those states occurring in a non-conscious, non-unified way. Some of those processes are self-reflective. Consciousness here is the “what-it-is-like” conception. It is feels and redness. It is the presentation of sensation “as it pops up before one's self.” (That last part is tricky because there is no adequate way to say “it presents itself to itself,” or simply “it is,” neither of which is quite right.) I aim to deny that it is actually a property, actually a thing.

First, let us accept something. The book on the table is not actually a book. The book-qua-book has created no new properties in the world, I assume. If it does something to some person/brain/being, other than what physical objects usually do, that will be because of that person’s representation of the symbols. But if there is a new, emergent property in that representation of the book, that property is in the brain (which is the question we are asking about), and is not in the unique structure of atoms as regards the book and the properties that book manifests.

[Aside: similar to “Does the computer know how to play chess?”, the relational aspect of the letters of a book, and thus their atomic arrangement, should be seen within the span of human cultural creation. Thus, the relational aspects of those atoms formed into letters allow additional humans to take in that information and build a tool shed. From single cells, the process of evolution, sensory development, human brain development, and then cultural development has created “letters” that cause robust activities in the world. That is the manifestation of the relational aspects of the atoms of letters, and the whole story of epistemology.]

Back to representation and to consciousness. The chess computer or Watson can represent and create "intentional" structures in the world without the arising of consciousness or of new emergent and non-reducible properties (see previous post on Deflating Consciousness). An amoeba can “represent” and create self-beneficial action as regards the sun or saltwater, again without consciousness or seemingly emergent properties. The idea here is that evolution put, say, early bacteria into a structure that sensed (perhaps represented) external environments, and did things because of those perceivings. Biology is chemistry is physics. If something emergent happened in the universe at that time, it is no more usefully emergent than the first creation of hydrogen atoms. The structures of the behavior fall readily out of the structures of the world.

Carrying on, animals got more complicated in sensing and representing their world. Again, this is what evolution does. Animals with more complex sensory systems and informational parsing structures increased in number because those modifications were useful in their environments. In humans, our representations reached a point where we had enough internal representations of our selves at the center of a modeled world that we also modeled that we exist, that "I" experience.

My thesis here is that such representing in higher animals, representing that organizes behaviors and even “thoughts” around external and internal events, is what we are calling a conscious way of doing things. But this “consciousness” is actually nothing. The rapid presentation of representational structures, arising from sensory information along with emotional effects, presents and circles around a representation of the animal, including tons of representations of the self, of an I at the center of the world, aided by linguistic explosion. But this is not actually a different process from the particular makeup of the rapid representational structures. It simply is that very fast run-through of those countless representations. There is not some central unified vision or property within that representation. If an animal represents that a predator is near, those representations are mixed with (really, inseparable from) chemically induced sensations and feelings. A new property or object of a non-reducible quality, of consciousness, has not emerged. This probably parallels Antonio Damasio a bit (see Self Comes to Mind and The Feeling of What Happens), except I am more dismissive of consciousness as a robust phenomenon.

Likewise, human consciousness has a greatly expanded self-awareness. That is, it has representations and a model (and occasional representation of that model) of "I," my self, at this computer, at this date in time, at this place in the universe. But this is merely a representational sequence, and there is no "qualitative feeling" to it.

However, the “what-it-is-like” is special. Any complex representation (or maybe series of representations) is a singular representation unlike anything else in the world. But that does not mean it is an emergent property, unless we want to say that the unique atomic structure of "that rock" is emergent. Nor is it non-reducible. The representation is cashed out in its micro-structures, which have been ordered that way by evolution and the history of this individual. If a representational schema plays cool functions in the world, allows you to do badass things with that trombone over there, it is because that is what evolution does. It puts material together in cool ways, into things that can further manipulate the world for their own benefit. But this is always reducible to the physical level, which was put in its cool situation by historical accident. So nothing emergent or new was created, except in some banal sense.

But consciousness is something. Sure, just as this book is something . . . because we represent it “as something.” But we are humble about what that book is: an interesting structure of material. We do not claim it is emergent or non-reducible. Mental properties are the same. Yes, they are processes. Yes, consciousness is a state of the world (a structure of atoms in the brain organized in an interesting way). But consciousness is no more a special object or process than water going from complete solid to complete gas is. Or better, no more than the computer going from the first move in a chess game (with an external “opponent” occasionally moving a piece) to the last. Consciousness is not different from that computer process, except that we have a great many more representations of our selves at the center of a modeled world, representing our selves as the feeler of emotions and pains, and so on.

In humans, the rapid presentation of images and concepts and ideas creates an additional representation of a being at the center experiencing such, but that representational process does not literally create a me or an I. The postulated central representation does not experience anything any more than the computer experiences a chess move. Consciousness as a property or non-reducible conception does not exist, except again in the unique sense of individuality (there is nothing exactly like that set of atoms or that exact process of representations). Only you are representing the room you are in, a few body cues, a few memory insertions, and this exact sentence at this exact time. That representation is unique, and in certain ways non-reducible. In order to get all the exact processes that are you and your immediate environment, we would have to reproduce that entire environment.

To drive the point home, there is not some point in time at which an updated self-driving car will represent world/self in such a numerous way that it is now beyond representation and has entered consciousness. Those representations do not morph into “global awareness”; they merely present another aspect of the world that is, perhaps, re-represented by a further process. They do not reach a state of experiencing or consciousness. However, human consciousness may have a representation of self/world that, because of linguistic blossoming, far surpasses anything that a chimp or a Neanderthal or a 2-year-old or an advanced self-driving car will represent.

According to this naive representer.