A little more on subjectivity
I have always felt intuitively that the idea of the first-person private sphere is overstated. There may be a significant impracticality to understanding what is happening in the brain-that-is-you. That is, there may be an impracticality to understanding your experience. But assuming that we as individuals are machines programmed by our histories, with our own models and representations, and with endless personal knowledge and therefore associational structures unique to us as individuals, then the idea that you have some impossible-to-see first-person experience makes sense.
Except that I strongly deny that it makes sense, or at least, that it gives us any epistemological purchase on the nature of consciousness.
The reason I am walled off from your experience is that in order to have “your experience” I would have to be a machine built by genes and environment to be the exact machine that you are at a given moment within that exact environmental space. Through storytelling, background sharing, and explaining the connections of the immediate environment that we as individuals are fixated on, one individual can share certain relational aspects of their experience with another. In other words, there is a reason why we Americans like the same television shows. They elicit similar experiences that we all share. Enough of our modeling, association structures, and perceptual structuring is shared that our current representations of the present moment, say a TV show, bear great similarities.
Through nature and culture, similar machines (us people) are built across a society or a family. Therefore our experiences are aligned closely enough to elicit similar reactions. That seems like a banal statement, except that so many people hang so much on what such subjectivity amounts to.
Now, can I experience the show in the exact same way as the person sitting next to me experiences it? No. But again, given the kind of machines that we are, delicately programmed over many years to form the exact repertoire of associations and emotions that we do when present in a given environment, entering into your exact experience would require me to have your (near) exact programming. What walls me off from understanding “your experience” in perfect detail is not some divide owed to the specialness of consciousness. It is instead the practical impossibility of programming enough of your associations and models to elicit the exact way that your perceptual representations and brain/body processes will play out.
Some Examples
Given the internal model or representation of what it is like when two humans experience a needle in the arm, including shared structures of pain response as well as shared cultural inculcation of pain-response behavior (etc.), our internal representation, our experience, will have certain shared qualities. It will have other personal, subjective qualities that other people do not have, because their associational, representational, and bodily responses are slightly different. For example, two ten-year-olds who are scared of shots will share more inner representational similarities with each other than either will share with the greatest stoic out there while she is getting a shot. Though, still, there are surely at least some inner representational similarities; for instance, some aspects of what-it-is-like when someone touches you on the arm will be shared by all three. We can assume this is partially true given that we are all generally wired in similar ways; our bodily representation models share similarities, say.
The simple story is that experience, that what-it-is-like, is some kind of internal monitoring of our conglomerate representations. These ten-year-olds have significant representations that are centrally focused on the representations of, “touching arm; this is supposed to really hurt.” And the stoic has significant representations that internally get focused on, “touching arm; pain means nothing in the world.” I put the “representation” in linguistic form there, but obviously there are immense complexities there, which are only at times influenced by social, linguistic mediation.
The individuality and uniqueness of our experience, of our internal representations of our selves as we interact with the world, is the only picture that possibly makes sense. If my computer were internally representing everything within itself and what processes it was carrying out, it would quickly have unique representational structures. It is the only computer with its repertoire of documents and programs.
Likewise, if a rock were given a modeling and representational system, so that it mapped information from different parts of itself, it would be the only one with an internal representation of atoms in that exact arrangement. If we then started talking about the rock's awareness of its own self, it would be the only one that was presently mapping that exact arrangement, as well as internally representing that it is mapping that mapping of that exact arrangement. Hence, it is subjective. It also makes perfect sense that its internal representational system (its information system) may have significantly different qualities than the next rock over. Say one rock uses a scanning electron microscope to array its atoms' locations, and it draws such maps in crayon, of which it is then internally shown a partial, singularly focused map (its center of awareness). The other rock uses an internal fMRI and draws maps or representations in computer models, which it then accesses and is centrally focused on (or represents its attentional system as singularly focusing on). The individuality of those internal representations, and the specificity of the given awareness beam, is going to follow from those individual structural systems.
From these examples, it follows that the internal representation of the bat is likely more walled off from visual creatures than the internal representations of visual creatures are from other visual creatures. We may be able to glean some inner representational similarities; say, if bats have spatial mapping that is something like our spatial mapping, there may be the slightest overlap in what our representational systems are doing. In those ways perhaps a bat and a human will have some similar experiences, like the kids and the stoic getting a shot above. That there may also be unimaginable differences in internal representation because of significant structural features, in this case perceptual differences, is no more useful an insight than that you can never fully internally represent the exact experience of the person next to you.
Lastly, language may be needed for some of the most acute internal representations and modelings. A bat and a dog may have awareness and robust experiences, but only humans (we imagine, for now) have an awareness, a representation, that we have awareness. There is good reason to believe that language provides much of the scaffolding for that kind of self-awareness. Language allows for an explosion of representing our selves as we are representing, and for delineating and dictating in finer detail all of the distinctions in the world, including how we attend to those different distinctions. It allows for thoughts on thoughts. For us linguistic beings, it may be far harder to imagine the experience of any nonlinguistic being than to understand the bat's perceptual differences specifically.