Update: For a more coherent explanation of Graziano's program check out selfawarepatterns.
Here are a couple of articles by Michael Graziano, one older from Aeon and the other more recent from the NYTimes. I agree with his general program for how we are going to deal with consciousness. The older Aeon article, I think, suffers from the fact that he was not upfront about eliminating and reducing consciousness, which he now seems more willing to immediately own.
Just for the record, I am an eliminativist and a reductionist. I believe certain parts of the concept of consciousness are mistaken ascriptions arising from our metacognitive stances, mistakes that have arisen because our firsthand experience is a very poor tool for analyzing what consciousness is. Much of what else we ascribe to the concept of consciousness can be reformulated into a representational or informational structure. Your brain is forming models and representing (loosely speaking) different perceptual experiences, and how those present experiences fit into other models and other previous representations that we have. Emotions and feelings are bodily processes, some of which, particularly emotions, can also be set into representational structures as we represent the state of our being. Feelings are trickier, but we can probably ascribe some bodily activity to them or place them into some kind of representational schema (pain is a representation of the pin pricking your arm, say). Some day I will deflate (at the least) the rest of those feelings. I will show that, laid bare, those feelings and their supposed power do not amount to much and were already well established in lower animals. The rest of what we ascribe to consciousness, as above, is window dressing and can be seen as complex information crunching, given the kind of powerful, parallel machines that we are.
Some influences here include Dennett, Patricia Churchland, Thomas Metzinger, Stanislas Dehaene, Jesse Prinz, and Antonio Damasio.
But, for now, my thoughts on Graziano.
The argument is going to be that what we call “consciousness” is really an attentional, representational structure. It is a model of a self (body) that stands in certain relations to all of these other models of the world and of other bodies (humans, animals). Conscious awareness, under Graziano, is a robust, continuous representation of our self's attention towards various objects. It is that internal representation. Or, “consciousness” is what we ascribe to that constant process of relational representations at the center of that focal process. Awareness of our continuous attention processes is what we have modeled and classified as consciousness. Awareness here is really just another representational configuration.
We also have this to different degrees. Early mammals and even reptiles have a great deal
of representation of the world, say through visual processes. They
also have complex attention processes; they focus their activities
and perceptions onto different objects. Over time animals developed
more ability to direct their attention. A monkey, I assume, has
greater attentional control as well as a larger perceptual
repertoire than a lizard. In the end, animals have a representation of a certain part of the world (their visual focus), and they also have the beginnings of rudimentary modeling and representing of self/world relations, which allows them to engage in more complex attentional and discriminative processes.
That kind of attentional visual awareness is still alive in the homo lineages and in the more modern human consciousness that arrived later. As language comes about, we have even more complex models and representations of self/world, including, in the end, literally creating the concept of consciousness. As Graziano talks about, animals are already modeling other animals' behavior and anticipating it, which is a gradual buildup to a rudimentary other-mind-analysis. In humans we started from those more limited self/world modeling capacities, which are most robust in mammals, especially in certain pack mammals, and maybe in some birds. This kind of strongly visual existence, with consistent, careful tracking of the behavior of others as well as one's self, is where human consciousness starts. But as we add language we arrive at a full-blown theory of mind; that is, we start actually pondering the inner workings of other people, asking about their desires and intentions (etc.), as well as coming to better grips with our own immediately perceived psychological architecture. And, eventually, deciding that we are conscious.
Trying to place other animals' consciousness in comparison to humans' is a frequent pastime. When it comes to something like taking in a specific visual scene, there are two bases of consciousness that seem central. One is simple visual processing. In that sense, the chimp's immediate visual processing of red seems like it has to have many similarities to our visual processing of red, at least at some basic level. The main difference stems from the fact that an adult human's self/world modeling is far more complicated, and any single experience is not experientially limited to visual processing alone. That is, our consciousness of red is not some isolated percept. Some kind of Buddhist-like understanding of what we do when “we experience” is far removed from what the concept of experiencing usually means. Any single experience of red is tied into other representations and to our modeling altogether. This is where some of the higher-order-representation and self-representational theories come into the picture. We are usually not only immediately processing shape and color; our experiencing is going to be saturated by other representations, such as the background idea that “I am Lyndon and I am experiencing red.”
It makes sense that at least part of our present experiencing was there in chimps and in early, non-linguistic humans. The reason we think a chimp “experiences red” differently than a gorilla does, and much differently than a human does, is the entire nesting of the visual processes into their respective representational schemata, as well as, obviously, differences in color discrimination.
This also explains why any individual's experience of red is different at different times. You may be seeing and processing the same shade of red at the art gallery as you do when watching a firetruck at a parade, but we cannot rend an “experience” from its broader context; at least, I do not think it makes sense to do so. If we can find some common ground between the two experiences, it will be by stripping out the other contexts of that representation, and it will not fully relay “your experience” as you “saw a red firetruck at the parade.” How we describe your experience of red in that particular situation will encompass more than the simple experience (representation) of merely seeing red. Those representations, under my formula, also include whatever feelings and emotions are created in such a scene. Say we find some underlying singular program of “seeing red,” where we try to isolate a particular representational state of seeing red, including the arising of the particular emotions and feelings that the brain produces along with that representation; it will still not describe the actual experiencing of any particular life-event. Some shallower categorical move can be made, such that there are similarities across our processing of a particular wavelength of red, but that simple categorization will never exhaust any singular experience that we have.
As for uniqueness and subjectivity: your particular model and present representation (I am John in Seattle getting coffee, my dog is at home waiting to be fed, I need to call my doctor . . .) is of course unique. Given that this is an “experience,” you are the only one with anything close to that particular representation and model structure. Furthermore, the exact set of emotions and feelings that surrounds your personal representation is also unique. Even so, the way that such an immediate thought has a certain construction of bodily processes, feelings, and emotions tied to it is rather uninteresting. Likewise, certain socially shared representations, given nature's structures, will have some shared emotional responses with many other people who have similar experiences. Being stood up for a date, say, is a common experience that has similar representations, as well as similar visceral reactions, across people. The fact that your representation/modeling is slightly different from anyone else's (no other John was stood up by Julia Roberts on May 24th) is entirely uninteresting. It is also uninteresting that the complexity of that experience is tied to a particular set of emotions, which have been tempered by all the other experiences you have had in the past as well as by Homo sapiens's givens. It makes sense that we can find similarities across other people's similar experiences and across others of your own experiences. The uniqueness of the present situation, the uniqueness and subjectivity of the exact representation of you amid your world/self models, with the exact emotions that arise at this particular time, makes sense given the kind of representational and bodily creatures that we are.
Furthermore, we can also shrug off as uninteresting the idea that your particular representational substrate, the fact that you represent certain wavelengths of light within the representational schema that you do, is some deep mystery. Models and representations are made for functional use. It does not matter whether your map is drawn in crayon or pencil; what matters is that it is functionally usable. Now, some internal representation of what that map looks like will be slightly different based on our, or nature's, choosing of the exact substrate of that map (pencil or crayon; emotion-based thought processing or cold-calculating thought processing), but again, the fact that such an internal model has a particular, unique representational experience, since one was drawn in pencil and one in crayon, is something we can shrug at. It seems obvious that a chimp's location-mapping system may have a different “feel,” or different internal representational characteristics, than a bat's location-mapping system, given that one was structured mainly by visual processes, say, while the other was structured by sonar processes. Or something to that effect.
That the bat's world modeling and perceptual representations are difficult for us to represent, that is, hard for us to imagine in the way the bat does, makes perfect sense. Given the bat's perceptual structures and brain structures, its particular way of representing and modeling the world is not something that should throw us for a loop.