On this page you will find comments I made on other blogs that I am particularly fond of. Some of my most articulate thoughts have been comments on other blogs, because I have been inspired by the thoughts of the author, whether I agree or disagree with them. I see this as one of the main values of blogging.
Posted on Wintery Knight from 11/17/13 to 11/18/13
Introduction: Wintery Knight shared a paper purporting to show that ethics cannot claim objectivity without God. The article was flawed and attacked a straw-man version of moral realism, so, as a moral realist, I shared why the article’s argument against realism was flawed. This back and forth ensued.
Meister’s argument against an objective moral system (known by its proponents as moral realism) without God is question-begging, to say the least. He argues that realists must answer ontological questions and epistemological questions about moral properties separately. The realist can easily reply with the analogy of vision. We are epistemologically justified in believing that what we see in front of us is real (read: exists in our ontology) because vision is a reliable method of knowing, as confirmed by empirical knowledge. Thus, the ontological argument is that what is in front of us is real because its existence is the best explanation for why we see what we do, given our epistemological justification for believing what we see. This same argument can be applied to moral properties: we perceive an action as wrong, and we are epistemologically justified in believing this because the ontological existence of the moral properties that make that act wrong figures in the best explanation for why we make that judgment. This is epistemological justification enough! Meister’s argument has been met, unless by justification he means a sort of greater purpose that justifies why morality exists. But this is to beg the question in favor of God, as surely this kind of reason could only be provided by a higher power. Thus, the moral realist can offer epistemological justification for moral truths, but Meister wants theological justification for moral truths because he presumes it is needed for an objective morality. Stated like this, it is clear that he assumes the truth of his conclusion. Thanks for sharing!
This argument has nothing to do with epistemology. The argument is about ontology. What grounds the existence of objective moral values and duties? On atheism, there is no grounding. We are accidents, the universe is an accident, and there is no reason to believe that we have some special value or special obligations to one another that are independent of our opinions and preferences – which vary by time and place. For an atheist, all questions of right and wrong are like adopting clothing preferences. Standards of what is acceptable evolve and there is no objective way to measure better vs worse. Who is to say that saris are better than bow ties? The scary thing is when you realize that this is how atheists understand prohibitions on slavery and abortion, or endorsements of marital fidelity or charity.
I quote, “By arguing for a belief in or knowledge of morality without providing a justification for morality, atheists confuse moral epistemology (moral knowledge) with moral ontology (foundational existence of morality). […] As already noted, being moral and having a reasonable foundation or justification for being moral are two very different issues.” This seemed to be the only argument given against moral realism, so I pursued it, and showed how it is an invalid argument. Moreover, epistemology has everything to do with ontology because epistemological principles are needed to justify an entry into our ontology; epistemology is central to all philosophical pursuits.
As for what grounds the existence of objective moral properties: naturalistic physical properties, because moral properties are supervenient and emergent properties of physical states of affairs. The mind is a non-physical thing that emerges from the complex causal nexus of physical brain states. The mind supervenes on the brain because if we made an exact duplicate of a brain and both brains were in the same brain state, then both minds would be in the same mental state. A supervenes on B if there can be no change in A without there being a change in B. Likewise with moral properties: if two states of affairs are exactly identical in their non-moral properties, then they are identical in their moral properties. On this account it is not an “accident” that wrongness supervenes on the physical act of harming someone, because there is a nomological and physically necessary causal relation between the non-moral properties and the emergent moral properties.
So, during the Jurassic age, when there were no human beings, there was no morality, right? It seems to me that what you’ve done here is given an account of morality that is relative. That is, it is relative to the chemical make-up of people’s brains. (Note: I am a substance dualist, not a materialist, but let’s go with materialism for the sake of argument). The chemical make-up of people’s brains obviously changes over time and from place to place. And, on your view, that means that morality changes from place to place and at different times. So this leaves us with relativism, which is what I argued before.
There is no objective morality on atheism, because the universe is an accident and we are accidents. What an atheist can do is describe what different groups of people with different evolved brain chemistry arrangements believe. An atheist can say “those people think slavery is right, because of their brain chemicals” and an atheist can say “those people think that abortion is right because of their brain chemicals” and an atheist can say “those people think Nazism is right because of their brain chemicals”. But there is no way for an atheist to make any objective moral judgments about different evolved customs and conventions in different times and places, because morality is arbitrary on atheism.
And that’s what concerns thinking people about atheism – the lack of an objective standard. It’s that atheists think that morality is like choosing what is appropriate dress or what is appropriate food. Brains evolved by chance, and what brains think is moral is also unguided and arbitrary, varying by time and place, with no way for one brain to judge another brain as “right” or “wrong” except by majority rule.
Moreover, it seems to me that you are a materialist, so here are some other problems with morality on atheism. 1) Atheism means no free will, so you can’t make moral choices anyway. 2) Atheism means no ultimate judgment when you die for what you’ve done. So on atheism, you can do whatever you like to feel happy, because society’s conventions are just arbitrary by time and place. If you can get away with enslaving people, killing unborn children or engaging in sex tourism with children, by all means do it. Just don’t get caught and judged by your society’s arbitrary conventions. Self-sacrificial acts of goodness are particularly irrational on atheism. 3) Atheism means no human rights, such as the right to life. Human beings are accidents. They are just animals who evolved by accident.
Morality is a thing that simply doesn’t apply to atheists. Atheists can be moral if they feel like it, by sensing the moral values and duties that are set by a Creator and Designer. But then they are just sensing a realm of objective moral values and objective moral duties that they cannot account for in their own worldview.
No, I am claiming that during the Jurassic Age there were moral properties that obtained on physical states of affairs. Correctly stated moral propositions, if true, would be objectively true. I described what I take to be the correct view of consciousness (emergentism, otherwise known as property dualism, so you are wrong that I am a materialist) in order to describe the concept of supervenience. I then noted that just as mental states are emergent and supervenient on brain states, so too moral properties are emergent and supervenient on non-moral properties. “Moral properties are emergent and supervenient on non-moral properties.” I never claimed that moral properties are emergent and supervenient on brain states per the “brain chemistry” theory you incorrectly attributed to me; that would indeed be relativism.
Think of it this way. There is a grouping of atoms, say two hydrogen atoms and one oxygen atom (H2O). It is in virtue of that chemical makeup that this molecule has the phenomenal properties of wetness, liquidity, transparency, etc. We have called the entity that has these phenomenal properties ‘water’ since before we knew ‘water’ was composed of H2O, but nevertheless, when we pointed to water and said ‘water’, aside from pointing to the phenomenal entity that we experience, we were also pointing to H2O. Our interaction with the phenomenal properties of water was in virtue of our interaction with the atomic structure of water, such that the atomic structure causally regulated the use of our term ‘water’. Thus, when we discovered that the atomic structure of water is H2O, we posited a synthetic necessary identity claim: “water is H2O”. It is a necessary identity claim because in all possible worlds anything that is called water is composed of H2O, because H2O regulates the use of the term ‘water’; and it is a synthetic identity precisely because it is not an analytic identity: no amount of conceptual analysis of the term ‘water’ would have yielded the conclusion that it is composed of H2O.
The atheistic moral realist is well within his rights in positing something similar for moral properties. It is in virtue of the subvening non-moral properties of an action or state of affairs that it has the supervening moral properties that it does. So the moral property ‘good’ might be causally regulated by one or a set of non-moral properties, in that our experiencing the moral property ‘goodness’ was in virtue of experiencing non-moral properties. If this is so then our moral principles will be synthetic necessary identities because non-moral properties will causally regulate the use of moral property terms across all possible worlds. The result is that there is nothing “accidental” or “arbitrary” about morality for the scientifically-minded atheistic moral realist. A moral property supervenes on the non-moral properties it does as a matter of logical necessity. Because there is a necessary entailment between certain moral properties and certain subvening non-moral properties the moral realist has no difficulty in saying that the actions of Nazis were and are morally wrong, and that slavery was and is morally wrong.
As for your final notes.
1. I have already noted that I am not in fact a materialist, but a property dualist, such that whatever account of free will you can provide, I can likewise adopt. There is a connection between property dualism and agent-causal libertarianism, the strongest pro-free will view now on offer. Atheists can coherently subscribe to property dualism, so it remains a mystery why atheists cannot believe in free will as you claim.
2. This point just assumes that without final judgment there cannot be moral objectivity. If moral realism is true then human agents can hold each other accountable for morally wrong actions. I endeavored above to show how moral realism is a viable position, so if you’re worried about objective standards and accountability moral realism provides it.
3. Negative rights, which are rights not to be treated in certain ways can be derived from moral duties, which are duties to treat persons in certain ways, and which the moral realist can adequately provide from the moral principles based on the moral properties that obtain in the world.
Introduction: Bloggingisaresponsibility posted a fascinating entry exploring the plausibility of computer consciousness, and whether we could ever know that computers were conscious if they were in fact conscious. Follow the link just above to check it out; I don’t want to steal his work. Our conversation was as follows:
One of the main ideas I see you pointing towards is the notion that ‘thoughts’, considered as a blanket term for outputs of a function that match our pre-theoretic intuitions on said mental concept, are neither necessary nor sufficient for consciousness. If I understand you correctly, it seems you are noting the need to establish what class or type of ‘thought’ is sufficient, or at least necessary, for us thinking we perceive consciousness, and this before we can attend to the harder problems of consciousness. If so, then I agree; how can we expect to uncover what physical networks are sufficient for the realizability of consciousness, if we cannot find a working epistemological process for being justified in thinking something is conscious?
Enter the Turing test. The test is often dismissed as too simple, but taken as an empirical basis for justifying our judgment on the presence of consciousness in an entity, I think it is ingenious, and for precisely the reason that it specifies what sorts of outputs (thoughts) are sufficient for consciousness. What we are looking for is evidence that we are justified in thinking that something is conscious, as it doesn’t seem we will ever (or at least not for a long time) be able to directly perceive consciousness.
Consider: to pass the Turing test a computer must lie. Not only would the human interlocutor be well advised to ask the computer whether it is a computer, they will ask the computer “personal” questions. Now, there is no doubt that a computer program can be programmed to tell a certain story, such that the computer isn’t exactly lying, per se. I think this is roughly the level at which computers entered in the Turing test currently perform. But I think it is conceivable that soon a program will both know that it is a computer program lacking human features such as hair color, weight, etc., and be able to lie about having those features. So, if a computer passes the Turing test, then it will be because it knows how to lie effectively, and it will know how to lie because it is aware that it is actually a computer program. This, I submit, would be a self-awareness sufficient for consciousness, so that if something passes the Turing test we are epistemologically justified in positing that it is conscious.
Sorry for the long comment, but I wanted to address an important insight you expressed in your post. Cheers!
Very well put! Also, always feel free to post long comments (and never apologize for them!). I learn a lot from comments, so I definitely don’t want you or anyone else hesitating out of concerns for length. In fact, often the best part of these articles are the comments and the discussions that ensue.
Your first paragraph is mostly on the mark on what I was arriving at. I would however say that the kinds of activity that give rise to consciousness may not even be thoughts to begin with. That is, consciousness may arise from patterns that correspond to how thoughts are implemented on a particular medium (like the human brain), but that would be a byproduct, rather than anything essential to thinking.
Your thoughts on the Turing test are fascinating. I never thought of it as a consciousness test, just an AI one. The perspective on lying and consciousness is interesting.
Although I think a computer can still be conscious and concoct elaborate lies, as far as any test goes (and all the ones I can think of are inadequate), the Turing test might be the best one.
You’re right, Turing certainly meant his proposed test to be an assessment to answer the question ‘can machines think?’ In that way, it was meant explicitly to address the ‘intelligence’ of machines, and thus, the possibility of AI, as you noted.
I suppose my extrapolation from ‘thinking’ to ‘consciousness’ is rooted in the connection between the Turing test and its ancestral roots in Descartes. Descartes, as a substance dualist, thought that mind and body were distinct substances, such that, though man’s body is a machine (his words, not mine), we are more than machines because we have souls, and the evidence for this difference is the fact that machines, “could never use speech or other signs as we do when placing our thoughts on record for the benefit of others. […] And the second difference is, that although machines can perform certain things as well as or perhaps better than any of us do, they infallibly fall short in others” (Discourse on the Method of Rightly Conducting the Reason). Given Descartes’s dualist convictions, I read him as positing an early version of the Turing test to test for the soul, which I understand as consciousness. (Full disclosure: I’m not a substance dualist, though I am open to arguments for property dualism and functionalism).
Thanks for your response and kind welcome.
Connecting Turing to Descartes. Awesome!
The connection between thinking and consciousness is a very common one, and a natural one to make. Who knows, maybe I’m being too dismissive of that connection?
Let’s say you walk home and do so completely on autopilot, to the point that you don’t even remember walking home. Would you argue that you were conscious while walking home?
One of the premises behind my dismissal of thinking as essential to consciousness is the claim that such events are not conscious events. But is this justified? What if they were conscious events and were forgotten?
This opens up a can of worms, but it does bring up some confounding points between consciousness and memory that can undermine the premises I assented to in part 1 and re-iterated in this article.
Thank you for commenting!
The Turing Test is not a test for consciousness. There is no test. This is known as the ‘other minds’ problem. There is no test and there never will be. It is simply a fact that it would be impossible to demonstrate machine consciousness. It would be too paradoxical if the researcher were able to demonstrate the consciousness of his machine but not of himself.
According to Popper’s rules and any rules that I’ve come across, the theory that machines can be conscious is not scientific. Strictly speaking it is less scientific than panpsychism. As BiaR says in his essay somewhere towards the end, it is difficult to be sure that panpsychism is not true.
I don’t think it is difficult to conclude that the phrase ‘machine consciousness’ is an oxymoron by almost any definition of the terms, but we must concede that we can never demonstrate that it would be impossible, and for exactly the same reasons that we cannot demonstrate that it would be possible.
As always, I feel that it is a mistake to ignore Kant. He saw that more than computation was required for mental phenomena and categorical thought. Ordinary consciousness would depend directly on his fundamental phenomenon and would be impossible in its absence. I am therefore I think.
Right, passing the Turing test is meant to provide epistemological justification for our judgment that computers can be intelligent. The natural sciences use a coherentist model of justification, but as any foundationalist will tell you, coherentism does not guarantee that those beliefs that are justified are true, and the coherentist must and does acknowledge this. It is with this in mind that naturalistic scientific methodology allows that we can be justified in a belief without knowing with absolute certainty that that belief is true, such as, for example, if that belief is not empirically verifiable given our current technology. I do not deny that the thesis ‘consciousness is a necessary but not sufficient condition for intelligence’ is controversial, though I have endeavored in this section to motivate that claim. My point is that if the Turing test can provide epistemic justification for holding that computers can be intelligent, then the Turing test can provide epistemic justification for the belief that computers can be conscious, in some appropriate sense. The problem of other minds, and solipsism more generally, continues to be relevant, but philosophy should and can provide epistemic principles that justify our beliefs about other minds, even if they do not guarantee the truth.
I see your point about doing things on auto-pilot, and certainly will concede that we do things without being conscious of doing them. What I think this suggests is a difference between consciousness, and conscious thought, where attention is focused on certain things. When I consciously decide to get up and get a glass of water, it can be said that I am conscious that I want water. This is because I am reflecting on myself. There is the ‘I’ that exists throughout all my actions and makes those actions possible, and there is the reflective ‘I’, that is made possible by, and thinks about, the subsisting ‘I’. When you walk home on auto-pilot you still manage to get home, because your consciousness doesn’t stop existing when you are not consciously thinking about it. I am suggesting that the Turing test, or Descartes’ version of it, can give us reason to think an entity possesses the reflective ‘I’. As the reflective ‘I’ is made possible by a subsisting ‘I’, this gives us reason to think a computer possesses consciousness. (Note: I am borrowing slightly from the work of Edmund Husserl, the pioneer of phenomenology, here.)
In order to ensure I understand, I’d like to confirm we mean the same thing when we write “consciousness”.
I use it as a synonym for awareness. Unless I am aware of something, I’m not conscious of it, and if I’m completely unaware, then I’m completely unconscious.
Is that how you are using “consciousness”?
My view on consciousness is that it emerges from the complex neurophysiological interactions in and of neural networks. When these interactions become sufficiently complex, with enough feedback loops, consciousness emerges. This consciousness is defined by its ability to be conscious of spatial-temporal entities, such as trees, predators, etc. Understood in this way, gazelles have consciousness, because consciousness is not co-extensive with intelligence. When the complexity of neural networks reaches a point where the being can become conscious of non-spatial-temporal entities, such as their own feelings, beliefs, and desires, then they are intelligent.
Along these lines, when you are on auto-pilot, you possess consciousness; it is just that in those moments you are not exercising intelligence. On auto-pilot you still respond to spatial-temporal entities, which suggests that some bare level of your being is conscious of them. It is just that you are not conscious of your being conscious of them.
Thanks for your question, I hope I addressed it clearly.
3. Posted on SelfAwarePatterns, in response to his “Science, Philosophy, and caution about what we know”
Introduction: SelfAwarePatterns examines various ways to separate philosophy and science conceptually. I highly recommend reading his piece as it is informative and well written. SAP seems to suggest that what is distinctive about philosophy is that we cannot be 100% certain that philosophical conclusions are true. I agree, but I think the same is true of scientific claims. Nevertheless, I offer another way of distinguishing philosophy from science, one that falls along standard rationalism-versus-empiricism lines. Rather than just drawing this distinction, I attempt to argue that strongly rationalistic intuitionism gives us a wider range of knowledge than empiricism and science.