Tuesday, September 29, 2015

Links

Jay Joseph at Mad in America has another good take on the state of behavioral genetics.

Jesse Prinz gives a good interview on social constructionism at Philosophy Bites (~20 minutes long). Also, see his book Beyond Human Nature, which parallels a lot of what I write about on this blog. He has a denser tome on consciousness, The Conscious Brain, which opened my eyes to a few things. He espouses an attended intermediate-level representation (AIR) theory of consciousness. On social constructionism, see my post on gender and sexuality.

Coel Hellier at Coelsblog has a good post on morality and why subjective morals are the best we can hope for. I say discard moral language altogether and discuss the kinds of worlds and selves we can create and build. We can then shrug at those people who say, "That's all I ever meant and mean by morality!" Likely, they still wish to reproduce unnecessary discourses, and we should be wary, from a practical standpoint, of what they are sneaking into such discourses.

Joachim Krueger at One Among Many recounts some of the wrangling over the dual-system model of cognitive processing. This is the fast versus slow thinking of Daniel Kahneman's Thinking, Fast and Slow.

On a more technical note, an article by Steven Frankland and Joshua Greene on how the brain can form meaning from the effectively infinite set of possible sentences.

At the cognitive level, theorists have held that the mind encodes sentence-level meaning by explicitly representing and updating the values of abstract semantic variables in a manner analogous to that of a classical computer. Such semantic variables correspond to basic, recurring questions of meaning such as “Who did it?” and “To whom was it done?” On such a view, the meaning of a simple sentence is partly represented by filling in these variables with representations of the appropriate semantic components. . . . Here, we describe two functional magnetic resonance imaging (fMRI) experiments aimed at understanding how the brain (in these regions or elsewhere) flexibly encodes the meanings of sentences involving an agent (“Who did it?”), an action (“What was done?”), and a patient (“To whom was it done?”).
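To make the "classical computer" analogy concrete, here is a toy sketch, not Frankland and Greene's actual method, of what explicitly filling those semantic variables might look like. The sentence, role names, and parsing are purely illustrative.

```python
# A toy illustration of sentence meaning as bound semantic variables:
# named slots that get filled, the way a classical computer binds values.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SentenceFrame:
    agent: Optional[str] = None    # "Who did it?"
    action: Optional[str] = None   # "What was done?"
    patient: Optional[str] = None  # "To whom was it done?"

def frame_simple_sentence(subject: str, verb: str, obj: str) -> SentenceFrame:
    """Bind the parts of a simple active sentence to semantic roles."""
    return SentenceFrame(agent=subject, action=verb, patient=obj)

# The same three words yield different meanings when the variables are
# filled differently.
print(frame_simple_sentence("dog", "chased", "cat"))
print(frame_simple_sentence("cat", "chased", "dog"))
```

The point of the analogy is only that reusing a small set of role variables lets a finite stock of components express a combinatorial explosion of sentence meanings.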

Sunday, September 27, 2015

How footprints came to have knowledge about feet

Epistemology:

Brain/minds of adult humans are shaped by nature and experience to fit well into the world. Their models, representations, and response patterns do things no prior object could do. But this does not mean we need to posit a world of knowledge separate from the world of objects.

It is like looking at a footprint and wondering how the footprint knows the shape of the foot that it represents. This is of course an idiotic way of looking at footprints. We are comfortable with the idea that a footprint was structured by the foot that previously landed in it. Such a foot left endless informational traces in the dirt, traces that are relational to the foot itself. We should understand brains the same way. With our internal models and representations, our brains reflect many useful facts or facets of the world. They relate to the world in useful ways, ways that allow them to do remarkable things. But the story about brains and what they know, our epistemology if we must, should be seen as similar to the epistemology of the footprint. Brains have been impressed by endless informational and relational structures of the world. This happens in neuronal assemblies much as the foot's physical indentation happens in the dirt.

We were fooled into thinking our epistemology was some category different from ontology because our first thoughts on the subject happened during fallow times. The story of human knowledge is merely the story of imprinting, whether through evolution or learning. As we deflate epistemology, we deflate knowledge. We take a mixture of pragmatic, eliminativist, deflationary, and realist accounts of the relationship between our knowledge and the world. By knowledge here I include both the structures within a brain and also our science and general beliefs. The realist part may eventually come in where we are truly footprints in the sand. Of course, it is questionable whether our beliefs about knowledge were ever meant to be seen merely as such [see Rorty for much of this].

Our brains are limited, however. They were not well designed to accurately reflect a large store of complex information about the world. The foot of universal knowledge was not meant to form a useful footprint on the brain. Our brains were designed to be flexible enough to take in enough information for surviving and thriving within our ancestors' environment. That is (one reason) why they cannot be imprinted with endless facts the way Watson can. It is why we more easily imagine particle behavior than wave behavior. If we were aware electron-bodies, we might just as easily be impressed by information about waves (given an underlying evolutionary need, as atomic creatures, to recognize such behavior).

In the end, we are the only significantly aware objects in the world. We hold great impressions of worldly information, and we hold them in a more usable and more self-reimpressing manner than any object we have seen elsewhere, including the robots we are now making. But our status, however mighty, does not surpass mere impression.



Wednesday, September 23, 2015

Links and a bit more on conscious awareness

Two informative podcasts:

Pete Mandik and David Pereplyotchik at SpaceTimeMind try to discuss pain but break into a far cooler argument about the reality of our scientific theories, or our ability to know what is really out there. Also see their earlier exchange.

Neuroscientist Joseph LeDoux gives a good interview at Roger Dooley's Brainfluence podcast. There is a transcript attached if you would rather read. LeDoux has a new book out on anxiety. Here is a passage that touches on my consciousness rant below. LeDoux says that the brain's baseline behavioral response to danger is different from the feeling of fear itself:


The person has all of those things together in their conscious mind, but when you take these things apart in the brain what we see is that the threat detection system is separate from the system that gives rise to conscious feeling. If that same person were in the lab and I presented those stimuli subliminally, so the person didn’t know the stimulus was there, they had no conscious feeling of fear or anything else in response to that stimulus, and yet the person’s heart is beating, their muscles are tensing and so forth. Their brain has detected and responded to a stimulus that they don’t know even exists. There’s no feeling involved. The amygdala is lit up like a lighthouse in the brain when you present those stimuli either liminally or subliminally, but that’s not what causes you to feel fear.
The conscious feeling of fear is a cognitively constructed process involving the highest centers of the brain. For example, the prefrontal cortex and areas like that, that put together the fact that the amygdala is activated and it’s causing the brain to be highly aroused, and chemicals are coming from the body back to the brain. All of those things are happening. The amygdala is contributing in an indirect way, but the conscious feeling of fear is the representation that all that stuff is happening as a result of the amygdala activation, together with the fact that you see there’s a stimulus there that you know from memory is threatening. You may also retrieve personal memories of having been threatened by that stimulus, say it’s a snake. All of that comes together as an immediately present state of mind that we call fear.

Other links:

Scott Bakker discusses neuroscience threatening art

Karen Neander at Philosophy of Brains on content pragmatism, a poor solution to the problem of intentionality

If you liked that, see Alex Rosenberg's eliminativist take on intentionality

Lastly, NOVA on PBS had a good video on Homo naledi



Conscious Awareness


We are “aware” that we are here, that we are humans, that we live on earth, that the man over there wishes us to pay him money, that we are part of a complex social, national, and family web. We form representations and models of these different relationships of our world and our selves. When we need one of those representations or some facet of information, we can turn our attention to it, and it will readily be there. “This is who I am. I live in the United States.”

None of this awareness, again, is going to transcend a robot that represents its self as being at “this location on the factory floor, in this position on the assembly line.” If the robot senses/represents, “I am out of screws, roll to far wall and get more” (however such robots will do that), it will have awareness in much the way any of us human-robots do at this moment. When we form the representations/thoughts that “I am out of food, need to go to store,” the representations and sensations we are aware of provide a great deal of information, ranging from the location of the car, to the hunger in our body, to the place of the store.
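To keep that claim concrete, here is a deliberately bare sketch of the kind of representational store the factory robot might hold, and of what "turning attention" to a needed representation amounts to. All of the names and values are hypothetical, not any real robot's software.

```python
# A robot's "awareness" as a store of representations it can consult.
# Everything here is a made-up illustration of the point in the text.

robot_state = {
    "location": "factory floor, assembly line position 4",
    "task": "fasten panels",
    "screws_remaining": 0,
    "screw_refill_location": "bin at the far wall",
}

def next_action(state: dict) -> str:
    """Consult the stored representations and choose what to do next."""
    if state["screws_remaining"] == 0:
        # The relevant representation is "readily there" when needed.
        return f"roll to {state['screw_refill_location']} and restock"
    return f"continue task '{state['task']}' at {state['location']}"

print(next_action(robot_state))  # the robot "knows" it is out of screws
```

On the view above, the human case differs in the sheer number and interconnection of such entries (car, hunger, store, country, self), not in kind.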
The kind of awareness (vast representational stores) that adult humans have surpasses what higher animals, babies, and our new robots have. As in the previous post, we have representational stores in abundance about our selves and our relationship to external events. We have them in ways that lesser “conscious” entities, such as IBM's Watson or a chimpanzee, do not. There is good reason to believe that language acquisition allows adult humans to have this kind of global awareness about our selves and the world.

But this high level of awareness, the vast informational stores about self and world, is not consciousness as often defined. It certainly is not qualia as often brought in. As explained in previous posts, your representational structures for seeing red are unique. No one else has quite your exact representational or informational repertoire for particular shades of red, nor the bodily processes, the feelings and emotions, that hang onto any particular shade. The person who played Big Bird for a long time will have representations and bodily attitudes (feelings of pleasure from remembrance) towards that particular shade in a way far different from anyone else's. This should not be a bewildering fact. The only way to get a creature to represent and respond to a certain shade of red in the exact same way as a certain individual is to create that exact individual. That there is a great deal of overlap in our representations and bodily responses to a shade of color just reflects the fact that we share the same general sensory mechanisms and a great deal of environment and developmental structure. So many of our representations and emotional responses will be similar to other humans', even if each of us has certain unique differences. Also, it is not surprising that we are walled off from imagining the basic representational repertoires of other beings, such as the representational structures of a dolphin.

Monday, September 21, 2015

Quick hit on Consciousness

See my previous posts: Thoughts on Consciousness and Moving towards Eliminativism

There is no consciousness. We represent things. We represent our selves, worldly objects, and more complex (~abstract) ideas. The robust representations that are quickly presented in our brain do not give rise to a separate entity, “consciousness.” Those representational processes only give rise to more and more complex representations. These include representations that we are mapping the world and our selves, representations that show we are in the process of representing.

If some definitions of consciousness only mean this, then so be it, but consciousness does not have ineffability and bizarre properties, say of the Chalmers-Nagel-qualia kind. Our “consciousness” may be ineffable in the way that the only complete representation of the United States is something that is exactly the United States. Any representation or map of the United States necessarily leaves off some kind of information (say the direct relationship between Nebraska and the moon Io). There is a uniqueness to all of our programming, just as there is a uniqueness to every individual computer (given even a minimal history of installed programs and word-processing use). If my computer looks inside its self, represents its self, it will be a unique representation. So, in that way we are unique and ineffable. Our representations (my deflated consciousness) are special in that way.

When I say our consciousness is only representation, I mean representation in the way that your computer right now is representing something, or in the way that a Roomba (a computerized vacuum cleaner) represents the world so that it does not bump up against a vase, or represents that “it can take up no more dust particles.” The machine that is our brain/mind, including its forays into deep philosophical projects, never transcends those simpler kinds of representing processes. What it does do is have a whole lot more of them, and represent a self at the center of the world, at the center of action. It represents that self's immersion in social rules, in social relationships, and so on. But no single representation or series of representations (I am sitting here typing) moves beyond the more bare-boned representations of the Roomba.
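To put the deflationary claim in the plainest terms, here is a toy sketch of "representations all the way up." Nothing in it resembles real Roomba firmware; the names and entries are hypothetical. The only point is that a self-model is one more entry in the same kind of store, a representation that the system is representing.

```python
# Hypothetical illustration: the "higher" representations are just more
# representations, including a representation of the representing itself.

world_model = {
    "obstacle_ahead": True,   # so it does not bump up against the vase
    "dust_bin_full": True,    # "it can take up no more dust particles"
}

# The self-model lives in the same kind of structure as the world model.
self_model = {
    "what_i_am": "a vacuum at the center of this mapped room",
    "what_i_am_doing": "representing the room and my own state",
    "my_world_model": world_model,
}

def report(model: dict) -> str:
    """Nothing here transcends lookups over stored representations."""
    return (f"{model['what_i_am']}; currently {model['what_i_am_doing']}; "
            f"bin full: {model['my_world_model']['dust_bin_full']}")

print(report(self_model))
```

On the view above, the human difference is that the store is vastly larger and its entries update one another, not that some further property emerges.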

The most important representations: "I am here. On earth. In the year 2015. There are many countries and mine is the United States. I exist . . . "

I see this as eliminativism about the concept of consciousness. Michael Graziano (Consciousness and the Social Brain, and his Aeon article) makes similar sorts of claims, and he struggles with whether he is presenting an outright elimination of consciousness or not. What has been called consciousness encompasses many things, so what remains under my best understanding of our world representing and modeling includes some of those things previously encompassed in definitions of consciousness. However, there have been so many claims that pit consciousness as something different from and above representing and modeling that it is easier just to say we are eliminating consciousness as a robust concept. After that, we can give a good accounting of brain/minds as they represent and interact with the world. We do so without claiming that some property of consciousness emerges. Though, with that said, creatures that represent the world in the complex ways we do obviously gain abilities that minimally representing systems never achieve. Evolution can build cool structures like monkeys. It will not create the structures of many objects we see in modern society, at least not directly. So some kind of accounting is still needed for what these more complex representational processes (adult humans) do that more modest representing processes (say, bats) do not.

See also:

Stanislas Dehaene, Consciousness and the Brain

Douglas Hofstadter, I Am a Strange Loop

Michael Graziano, Consciousness and the Social Brain

Thomas Metzinger, The Ego Tunnel



Wednesday, September 16, 2015

Links and a prelude to a Cryonics Post

Links:

Larry Moran at Sandwalk has a good take on scientific realism and philosophy. Also see his previous take on what kind of knowledge philosophy dwells in. The answer (mine but also Moran's) is that we should see and define science broadly, and see good philosophy as an arm for asking appropriate questions within that realm. Philosophy that spends endless time talking about god, or the possibility of god directing evolution, or all of the logical moves of schmess (a game very similar to chess), really is not important, much like an NFL symposium on "What constitutes pass interference?" At least it is not important as regards arriving at our best understanding of the world in general, as opposed to more mundane things like figuring out how to play schmess well or delineating pass interference.

In the Guardian, Robotic Hands. See the video and the birth of Darth Vader. Posthumanism is upon us.

Two videos on consciousness. A new one at the Economist features just enough Dan Dennett to counterbalance Chalmers' position.

And a better, older video by Nicholas Humphrey.

Cryonics

"The False Science of Cryonics" by Michael Hendricks is just a bad article all around. The comments underneath make most of the points I make below. The article had an ax to grind, but I've been meaning to write on this. 
My following post on cryonics will be more substantive and will take a mostly different track from the worries below. I more or less shrug at the idea of cryonics, mostly because we have to deflate what consciousness is and the intrinsic value of any given consciousness. With that said, it is only a bit of a shrug, since there is also nothing wrong with trying to make our conscious selves last as long as we can. But I will get into that later. Here are some of the bizarre excerpts from the MIT Technology Review article.
Synapses are the physical contacts between neurons where a special form of chemoelectric signaling—neurotransmission—occurs, and they come in many varieties. They are complex molecular machines made of thousands of proteins and specialized lipid structures. It is the precise molecular composition of synapses and the membranes they are embedded in that confers their properties. The presence or absence of a synapse, which is all that current connectomics methods tell us, suggests that a possible functional relationship between two neurons exists, but little or nothing about the nature of this relationship—precisely what you need to know to simulate it.
That is a nice summary of what we know now and what it looks like we will be able to achieve in the near future. But the dream of waking up one day hinges on whether science could, in principle, figure out that relationship between neurons (and the bigger picture). I am not sure he gives us good reason to think we could not, in principle, do so in the future. Trying to judge the in-principle question from where we stand today is the bizarre move, but we will get to that.
That means that any suggestion that you can come back to life is simply snake oil. Transhumanists have responses to these issues. In my experience, they consist of alternating demands that we trust our intuition about nonexistent technology (uploading could work) but deny our intuition about consciousness (it would not be me).
Many transhumanists do not hold that intuition. They do not care so much about the “same as me” idea, and instead take a more deflationary and appropriate stance towards me-ness. (Again, see the comments below the article.)
No one who has experienced the disbelief of losing a loved one can help but sympathize with someone who pays $80,000 to freeze their brain. But reanimation or simulation is an abjectly false hope that is beyond the promise of technology and is certainly impossible with the frozen, dead tissue offered by the “cryonics” industry. Those who profit from this hope deserve our anger and contempt.
This argument hinges on the “promise of technology” being a useful intuition. The claim is that our best present beliefs about what we will discover point to the idea that recovering information from these frozen neurons will be futile. But this is of course idiotic, because those kinds of conjectures can be very empty, especially if we are talking about brain science 1,500 years in the future, to go rather long. No, you may not wake up in a decade. And at first blush it is troubling that I will not wake up tomorrow. But the intuition usually holds that as long as I wake up, I will be happy. That is what I want.
His worry may be germane if you have some wildly false hope that cancer will be fully cured in three days, because you know we have lots of people working on it and it seems eminently solvable. In such a case you have a poor belief in the “promise of technology.” But it seems a really stupid thing to say that humans will not have reached an exoplanet 1,000,000 years in the future because current technology does not seem promising for such a trip. Anyway, it is a baffling argument. He might as well be Nostradamus.

The main point here is that if you care about waking up one day in the future, just as you care about waking up in the morning, which we do not begrudge people wanting, then the only way that will be possible is through freezing. Even if it happens that the cryonics of today did not adequately preserve those connections or allow future scientists to reconfigure them, it is still the only chance. And it is becoming at least a little more plausible every year, both through knowledge acquisition and through better preservation techniques.
To go pragmatic, it may greatly help those scientists of the future if they had a good overview of who you are as a person, especially, say, the language you spoke. We also cross over into the dangerous (or burdensome) identity talk here. Since chocolate and vanilla are so ubiquitous a thought and desire structure, it may be relatively easy to “see” in your neuronal structures that you, your self, enjoy vanilla over chocolate, and to recreate that in you. At this present moment, though, you may have only an inkling, a hint of a preference for oregano versus thyme, a rather narrow memory and desire without as many representations. If scientists of the future fail to see “that connection,” they may just plug one in. Or if they don't give you a preference for oregano or thyme (even though one did exist), then, if forced to choose, you may cobble together a preference from whatever limited knowledge they happened to encode about thyme and oregano in you.
The interesting thing (or not so much) may be that most of us would not practically care if such a trivial detail is not perfectly lined up. If somebody (or simply your brain/body) were to erase that desire preference while you slept, it may be rather uninteresting to you, something that you would shrug at. And you would not see it as a destruction of your self. And the far more important thing to you may be just that ~you~ wake up in the morning or in the future and have generally the same thoughts as before, even if a few things are slightly off.
Another point, on recreating “you” in silicon: if scientists had a good diary or video of “who you are,” I would think they would be greatly aided in putting you back together again. They would not need to recreate the exact wiring, merely the general preferences and desires, and your body would tick on. If you go far enough in that direction, many of us may get upset and say, hey, “that is not me.” Though many of us also shrug at the thought that we could have selves that care slightly more for punk rock than classical jazz, and still see the basics of our selves in more durable and lasting characteristics, instead of rather trivial or happenstance preferences.
As one commenter put it, this article was both bad philosophy and bad science.



Tuesday, September 8, 2015

Self Expressions and Links

A good article on how language creates problems in conceptualizing the self:
I find the “your self” construction to be pleasantly playful and mildly useful. Often I cringe when I read it in pop-psychology, self-help, or various other places. But I have written it that way for too long to stop now. I like it for two reasons. One is the best understanding of the self, including the idea of the self-model, as highlighted by Thomas Metzinger in The Ego Tunnel. Following that general understanding, Bruce Hood, in his book The Self Illusion, writes the phrase “your self” continuously throughout. Hood's take on the self shadows my basic understanding of the self and implements the same rhetorical strategy of dividing “one's self” in language for similar reasons.
As the article above highlights, inward-looking metaphysics, “What is the self? What is consciousness?”, suffers from the general difficulty of assessing that inner world, especially from the way it simply appears to one's self, to one's consciousness. Furthermore, we have built theories and language structures around a poor understanding of that inward milieu. We did this because we overly trusted the inward-looking eye to give us useful, relevant information, and now we are trying to unfold that cloth. So the “your self” construction both helps us continually question our given (or culturally embedded) description of the self and reminds us that we can play with our language and our discourses, and write ones that keep our descriptive positions a little more sanitized, and also a little more guarded.

Quickly, I will also point out that something similar goes for free will discourse. We can give our best description of humans, and our best scientific accounts of human behavior, and there is no reason to think that we will be using the phrase “free will.” If we need to distinguish cases where a brain or computer makes choices/moves from internal processing (as opposed to external compulsion or manipulation), that is a rather easy distinction to describe without ever entering the foolish discourses of free will. Ontologically, the concept is dead. If we find it necessary to usefully separate green rocks from brown rocks within a narrow pragmatic discourse, most of us are going to find a better word than grue (or free will). Again, sanitizing our best descriptions of social stupidities (and social desires) is how we will eventually speak. Brain science (et al.) is overthrowing and rewriting folk psychology. It is teaching us that the descriptions we created by simply looking inside ourselves, inside our own heads, raise significant issues that block our best understanding of those very entities.

A bibliography of various self books. Some of these are ones that I read in the past and that informed my thoughts on the subject, but Hood's, Ravven's, and Ananthaswamy's books are more recent takes that share much of my understanding. (I have only browsed Ravven's and Ananthaswamy's, but both seem well done.)
Thomas Metzinger, The Ego Tunnel
Bruce Hood, The Self Illusion
Anil Ananthaswamy, The Man Who Wasn't There
Owen Flanagan, Self Expressions
Antonio Damasio, Self Comes to Mind
Peter Berger and Thomas Luckmann, The Social Construction of Reality
George Herbert Mead, Mind, Self, and Society


Other Links

Via Three Pound Brain, a paper on the inherence heuristic, our tendency to explain events by quick but often misleading appeals to inherent characteristics.

And last, are all neurodegenerative diseases prion diseases?