\part{What is The Mind? What is the Body?}
\addtocontents{toc}{\protect\mbox{}\protect\hrulefill\par}
\label{ch.modthree}
\input{mindbodyprob}
\stepcounter{chapcount}
\chapter{Part \thechapcount: The Mind-Body Problem}\setcounter{seccount}{1}
The mind-body problem is a very old puzzle in philosophy. It tries to explain how our `what-it's-like-nesses', our thoughts, emotions, and other mental qualities, relate to our bodily states and the physical events which go on in them. Basically, the Mind-Body Problem is trying to answer a question of the form:
\begin{center}How is X related to Y?\end{center}
There are many questions of this general form. For example:
\begin{earg}
\item[]How is the cause related to the effect?
\item[]How is exponentiation related to multiplication?
\item[]How is matter related to gravity?
\item[]How are the results of actions related to morality?
\end{earg}
For the Mind-Body Problem, the question is:
\begin{center}How is the mind related to the body?
How is the body related to the mind?\end{center}
The question ``how does one thing relate to another?" isn't too hard to answer in most contexts. To make it a problem worth thinking about, the things need to have some obvious connection between them \emph{and} there needs to be something about the connection which seems problematic, or leads to problems. Conspiracy theories often try to make a connection between two things, but either there's not an obvious connection or the proposed connection is problematic (crazy jumps in reasoning, odd consequences, etc.).\footnote{If you are interested in the Philosophy of Conspiracy Theories, I have a collection of readings, and an order to read them in, which can be read at an intro level.}
In this case, the question ``how is the mind related to the body?" is not easy to answer and just about any claimed relationship between the two seems to have problems. But, it seems very obvious that the mind, my `thinker' so to speak, does have some kind of relationship with my body. I think of things, my body does them, I feel sad, my body cries. So, the core question is:
\begin{center}How are our mental states, beliefs, feelings, or thinkings, related to our bodily states, the events which go on physically?\end{center}
There are two dominant theories about this, \gls{physicalism} and \gls{substance dualism}, and both have their problems, which we will go into in detail.
\section{Part \thechapcount.\theseccount: Substance Dualism}\stepcounter{seccount}
This is a very classic stance that there are two sorts of things in the world, rather than just one. It was best put forth by a guy named Ren\'e Descartes (who we will go in depth on in Module \ref{ch.modsix}). Some people think that there are only material, physical, things in the world. Descartes thought that there were two kinds of things:
\begin{earg}
\item[]Physical Substances
\item[]Mental Substances
\end{earg}
\subsection{Substance}
In that tradition of philosophy, a \gls{substance} is a thing which has properties and can survive the change in those properties. For example, my car has the property ‘silver’ but I can paint it black and it would still be my car.
\newglossaryentry{substance}
{
name=substance,
description={The thing itself, which has properties and can survive changes in those properties.}
}
To say that there are mental substances is to say that there are non-material entities which exist independently of the physical stuff, like the body. The everyday term for a mental substance is `soul'. We should be careful, this is a very confusing area; mental substances, or souls, are not composite according to the dualist. They are, to use some modern terminology, simple. Your typical physical substance is composed, built up of, smaller physical things, like wood and metal, then the wood and metal are built of smaller parts, all the way down to strings (if String Theory is correct) or some other basic building-block. Mental substances (souls) aren't supposed to be like that. They are not composed of non-physical stuff and they are not composed of physical stuff; they are simple, basic, non-composite. This is to say that mental substances do not have parts, they cannot be broken up, so to speak, or divided.
\Gls{substance dualism} is sort of the default view which many people have when they enter into this kind of debate; they think that they have a soul or something like that, and there have been various reasons given to think that there is this sort of soul.\footnote{This is especially true if the person enters the debate with a prior belief in an afterlife or certain religious stances.} Some people think that there is some sort of afterlife. Though it is possible to get an afterlife (with consciousness) without a soul, the default view seems to be that one is required. Others claim that people have free will and that this requires a non-material soul. Others still think that there are different properties had by \emph{you} and your \emph{body}; if two things are the same, then there wouldn't be that kind of difference. Here we will look at some of the classic and modern arguments for Substance Dualism, many of which were just listed.
\newglossaryentry{substance dualism}
{
name=substance dualism,
description={The stance that the world consists of two general kinds of substances; in the case of the Mind-Body Problem, these are mental and physical.}
}
\subsection{Ibn Sina's Floating/Flying Man Thought Experiment}
The concepts and `building blocks' for the stance which became Substance Dualism have a long history in the western world, with different generations and even different cultures building off of each other to construct it. Some of the earliest accounts of it come from Plato (likely using Socrates as a mouthpiece for his own ideas). Plato held that everything in the physical world was an imperfect copy of the `forms' which existed independently of the physical world, and that \emph{you} weren't your physical body; rather, \emph{you} were this perfect form which continued after your death and may be reincarnated. After this, you have Aristotle, whose views have been very influential in the evolution of Christian and Islamic thought. Aristotle did not believe that everything was an imperfect copy of the forms; he didn't believe in the forms (in this sense), but rather held that they were names and properties embodied/expressed by physical entities. Aristotle did, however, hold that the intellect, the mind, was not embodied physically and was distinct from the physical world. The next great advancement came after the rise of Islam. The Islamic world got their hands on Aristotle's work and many absolutely loved it, creating the Islamic Golden Age, where Muslim regions were the intellectual center of the world. This is where we get Ibn Sina (980-1037)\footnote{Ibn Sina is often known to the rest of the world by the name which his translators gave him, Avicenna.} and al-Ghazali (1056-1111). Ibn Sina argued that the mind and body must be separate using his Floating Man thought experiment, and al-Ghazali refined this further to merge it better with Islamic thought. When Ibn Sina's and others' works were translated into Latin, they were very influential on St. Thomas Aquinas, who refined them to better fit within Christian thought.\footnote{In fact, in many paintings of Thomas Aquinas, you will see him in a throne or otherwise supported by many people; one of those people is almost always identified as Ibn Sina or another Muslim philosopher, Ibn Rushd (Latinized as Averroes).} But, taking a step back in time, let's take a look at Ibn Sina's argument for Substance Dualism:
\thoughtex{The Floating Man}{One of us must suppose that he was just created at a stroke, fully developed and perfectly formed but with his vision shrouded from perceiving all external objects – created floating in the air or in the space, not buffeted by any perceptible current of the air that supports him, his limbs separated and kept out of contact with one another, so that they do not feel each other. Then let the subject consider whether he would affirm the existence of his self. There is no doubt that he would affirm his own existence, although not affirming the reality of any of his limbs or inner organs, his bowels, or heart or brain or any external thing. Indeed he would affirm the existence of this self of his while not affirming that it had any length, breadth or depth. And if it were possible for him in such a state to imagine a hand or any other organ, he would not imagine it to be a part of himself or a condition of his existence.\autocite[p.155-156]{Goodman1}}{floatingman.jpg}{The head of Ibn Sina floating in a sea of thoughts.}
In this thought experiment, Ibn Sina is essentially asking you to imagine being in a sensory deprivation chamber: you cannot feel your body, you cannot see anything, you are floating, you have nothing but your thoughts. Without any knowledge of having a body, one would say that something like a hand would be foreign, not a part of them. All they could assent to is the existence of their mind. So, it seems that because of this distinctive feature, the two must be different, which gives us this argument:
\begin{earg}
\item[1 ] I can imagine myself without a body.
\item[2 ] If two things are the same, then if you imagine one, you imagine the other.
\item[3 ] Therefore, I am not my body.
\end{earg}
From this we get that I am a soul, something non-physical. This is a relatively simple argument and one which Ren\'e Descartes essentially rips off in his Meditations on First Philosophy (covered in Module \ref{ch.modsix}). But, simple does not mean that it's good. Sometimes a simple argument is good and accurate, which is what we should all shoot for, but other times simplicity opens up the possibility of more modern rebuttals breaking it down. For example, take this counterexample, this alternative thought experiment:
\thoughtex{Clark Kent and Superman}{Imagine that Superman and Clark Kent are the same person but no one knows this. Clark Kent is a reporter and seems meek and unassuming. Superman is a hero, confident, and loved by all. Lois Lane loves Superman more than most because of the countless times he has saved her life. But, at the same time, she thinks that Clark Kent is a wuss who is too chicken to talk with her. It seems that, though Clark Kent and Superman are the same person, Lois Lane could imagine one without the other or even imagine a case where the two are side-by-side (showing that they are different).}{clarkkentsuperman.jpg}{Superman and Clark Kent standing next to each other.}
It seems clear and obvious that Clark Kent is Superman, they are identical, but Lois Lane can imagine them as separate, as distinct. So, why think that the mind and body are any different? One could argue that though we can imagine them as different, this is because we experience them in different contexts, but they are actually the same thing.
Another argument using this same sort of idea\footnote{The idea that if they have distinctive properties, then they must be distinct.} follows from the notion of an afterlife. Many people believe that an afterlife requires some notion of a soul, giving an argument like this:
\begin{earg}
\item[1 ] The mind is immortal, the body is not.
\item[2 ] If the mind and the body were the same thing, this would not be the case.
\item[3 ] Therefore, the mind is different from the body.
\end{earg}
This argument makes two closely linked assumptions which we would need to look at. It assumes that there is an afterlife and that this afterlife requires some kind of soul or immaterial mental substance. Both of these are far from settled claims. Dualism could be correct without some kind of afterlife, and it's at least theoretically possible for there to be an afterlife without Dualism being correct. For the latter, the idea is that upon your death, God (or whatever deity might be at play) does a `copy-paste' of your body, edits it to give you your perfect, ideal form, and then lets you live in Heaven (or whatever afterlife might be involved). This copied body would be you, and there would be no reason to bring souls into the process.
\subsection{The Mary's Room Thought Experiment}
This is a relatively recent thought-experiment, a case to think about and try to understand with interesting implications. It was written in 1982 by Frank Jackson. The point of it is to give reason to think that everything is not just physical. This thought experiment leads to an argument for some flavor of dualism. I know that it is not very realistic, but I have amended it to make it more so. Here is the case:
\thoughtex{Mary's Room}{Two undercover spies fell in love and had a child. Due to the nature of their work, the government took the child and locked her away in a room. The child is named “Mary”. Mary was forced to grow up in this room, but here’s the kicker: there’s only black and white. Mary never experiences colors at all. Mary, in this room, grows to be a brilliant scientist. She specializes in the neurophysiology of vision and acquires all the physical information there is to obtain about what goes on when we see a red rose, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’... Over the years, the political climate has changed and this sort of undercover work and the cloistering of the children becomes very taboo. The president finds out about Mary’s plight and orders her to be released. What will happen when Mary is released from her black and white room? On the day of her release, the president is present and hands her a red rose. Does she learn anything in that moment?}{Marysroom.jpg}{A black and white room with a young woman looking at a computer.}
\textbf{The Mary's Room Argument}
\begin{earg}
\item[1 ]Mary knows all of the physical facts about color vision.
\item[2 ]Mary has never experienced color.
\item[3 ]Upon seeing color for the first time, Mary learns something.
\item[4 ]If you learn something, then that thing is a fact you did not know before.
\item[5 ]So, Mary did not know a fact about color vision.
\item[6 ]If Mary did not know a fact about color vision, that fact must be non-physical.
\item[7 ]Therefore, there are some non-physical (mental) facts.
\end{earg}
This argument will \emph{either} get you substance dualism or a particular version of physicalism, but it will not get you certain kinds of physicalism. There are ways out for the physicalist, which we will see later.
\subsubsection{How Do These Substances Interact?}
Now we need to ask how mental substances cause physical events and vice versa. The stance that there is this sort of two-way causation between them is called \gls{interactionism}. It seems clear that certain bodily states, like stubbing my toe, result in mental states (the feeling of pain) and that certain mental states, like feeling sad, result in bodily states (crying). But, how does this work?
The Causation Problem is essentially asking ``how do bodily states cause mental states?" and, of even more importance, ``how do mental states cause bodily states?" Many philosophers go with Interactionism because they think that we have some kind of free will. The various notions of free will and the arguments for and against it will come in Module \ref{ch.modFour}. But, for now, let's just say that by `free will' I mean that (at least some of) our choices are not deterministic, that some super-computer could not predict the choices we make before we make them. The arguments are typically of the following form:
\begin{earg}
\item[1 ] I have free will.
\item[2 ] If I was just physical, then I would be subject to the laws of nature (no free will).
\item[3 ] Therefore, I am not just physical.
\end{earg}
\newglossaryentry{interactionism}
{
name=interactionism,
description={The stance that certain physical events can cause mental events and vice versa.}
}
If our choices are not deterministic, then they must come from outside of the laws of nature (because the laws of nature are deterministic; if they aren't, that doesn't help either). If they come from outside of the laws of nature, then they must come from something non-physical (because the laws of nature govern physical things). So, something non-physical must be able to interact with something physical (e.g., the mental with the physical).
Boiling all of that last paragraph down, if our actions are non-deterministic, then something non-physical must be able to interact with something physical. This is far from a settled claim, as we will see when we cover free will. But, if you go with substance dualism to get free will, then you have a major problem. You need to have a way of getting the mental substances and the physical substances to interact, at least mental to physical.
Princess Elisabeth of Bohemia (1618-1680) was a philosopher and contemporary of Ren\'e Descartes. She, sadly, is mostly known today for her letters and correspondence with Descartes. In her very first letter to Descartes, Elisabeth questions the possibility of mental and physical substances interacting:
\factoidbox{M. Descartes,
I learned, with much joy and regret, of the plan you had to see me a few
days ago; I was touched equally by your charity in willing to share yourself
with an ignorant and intractable person and by the bad luck that robbed me
of such a profitable conversation. M. Palotti greatly augmented this latter
passion in going over with me the solutions you gave him to the obscurities
contained in the physics of M. Regius. I would have been better instructed
on these from your mouth, as I would have been on a question I proposed to
that professor while he was in this town, and regarding which he redirected
me to you so that I might receive a satisfactory answer. The shame of showing you so disordered a style prevented me, up until now, from asking you
for this favor by letter.
But today M. Palotti has given me such assurance of your goodwill toward everyone, and in particular toward me, that I chased from my mind all
considerations other than that of availing myself of it. So I ask you please to
tell me how the soul of a human being (it being only a thinking substance)
can determine the bodily spirits, in order to bring about voluntary actions.
For it seems that all determination of movement happens through the impulsion of the thing moved, by the manner in which it is pushed by that which
moves it, or else by the particular qualities and shape of the surface of the
latter. Physical contact is required for the first two conditions, extension for
the third. You entirely exclude the one [extension] from the notion you have
of the soul, and the other [physical contact] appears to me incompatible
with an immaterial thing. This is why I ask you for a more precise definition of the soul than the one you give in your Metaphysics, that is to say, of
its substance separate from its action, that is, from thought. For even if we
were to suppose them inseparable (which is however difficult to prove in the
mother’s womb and in great fainting spells) as are the attributes of God, we
could, in considering them apart, acquire a more perfect idea of them.
Knowing that you are the best doctor for my soul, I expose to you quite
freely the weaknesses of its speculations, and hope that in observing the
Hippocratic oath, you will supply me with remedies without making them
public; such I beg of you to do, as well as to suffer the badgerings of
Your affectionate friend at your service,
Elisabeth\autocite[Originally written in French 6 May 1643.][p. 61-62]{Elisabeth1}
}
We should now extract this argument, put the propositions in an understandable order, and see how the reasoning flows.\footnote{This is not to say that Princess Elisabeth's reasoning or phrasing is bad, far from it; we just need to remove the polite, flowery language surrounding the core thoughts to see the argument with clarity.} In this letter, also, are references to Descartes' Meditations on First Philosophy, which you will encounter later in Module \ref{ch.modsix},\footnote{Or, rather, you will encounter some of the relevant parts concerning Epistemology and this Mind-Body Problem.} so we should also ignore those references, as they are not relevant to the core point she is making. Her argument, extracted in this way, comes out like this:
\begin{earg}
\item[1 ] If a physical substance moves, then it must have a cause.
\item[2 ] If there is such a cause, then it must make contact with the physical substance and must have extension (a shape).
\item[3 ] A mental substance (a soul) does not have extension (a shape) and cannot `touch' or make contact with anything else.
\item[4 ] Therefore, a mental substance cannot cause a physical substance to move.
\end{earg}
The conclusion, in other words, is that there cannot be interaction, causation, going from the mental to the physical. This argument could go the other way as well, since a mental substance lacks extension and the ability to contact the physical, there could not be causation going from the physical to the mental.
Presenting Princess Elisabeth's argument in this way exposes some of the assumptions she is making about the nature of causation. The first of these assumptions is that an object can only be affected by something which it is in direct contact with (touching). Physics today, which Princess Elisabeth could have had no way of knowing, calls this into question. We see every day one object moving another without touching it. For example, magnets move metals closer to them without touching them (though sometimes eventually they do touch) and, for a more stellar example, we have gravity: massive objects, through gravity, draw objects closer to them without touching them (like magnets). The second assumption about causation which Princess Elisabeth makes is that the `causally active thing' must have extension, a shape. This too the physics we have today calls into question. Some claim that black holes are `pointy', or the size of a point, so they do not have extension; in another context, there could be point-sized particles which lack extension as well. If either or both of those claims are accurate, then we have physical objects which lack extension but are still causally active.
While those counterexamples to Princess Elisabeth's counterargument do poke holes, they do not completely remove the Causation Problem. The Princess was still on the right track. Despite us now knowing of things like point-sized particles and contactless causation, it remains true that for a physical substance to move, there must be a cause, and that cause must have a \emph{location}: there needs to be a place from which it exerts its forces. Even point-sized particles have a location in space. So, for a soul or mental substance to have causal force on a physical substance, it would need a location in space; but where is your soul? A mental substance is not the sort of thing which I can poke. This means that all we need to do to make the Causation Problem fit with our understanding of physics today is replace Princess Elisabeth's assumptions with a location, like so:
\begin{earg}
\item[1 ] If a physical substance moves, then it must have a cause.
\item[2 ] If there is such a cause, then that cause must have a location in space.
\item[3 ] A mental substance (a soul) does not have a location in space.
\item[4 ] Therefore, a mental substance cannot cause a physical substance to move.
\end{earg}
As before, this means that there cannot be causation from the mental to the physical and, as before, this works going the other way (there cannot be causation going from the physical to the mental). To get out of this argument, we would need something which exerts a force on a physical substance, makes it move, but does not have a location, a point or region of space which it occupies. We cannot simply assert that a mental substance does this, because whether mental substances have these features is exactly what is at issue. This might seem like an impossible task, and maybe it is.
\subsubsection{Pre-Established Harmony}
There is one theory which accepts, whole hog, the idea that there are two kinds of things in the world, mental and physical, and that the two cannot affect each other. This view, put forth by Leibniz,\autocite[p. 33]{Leibniz2} states that a person has two things, a soul and a physical body (a person is composed of a mental substance and a physical substance), but makes three further claims:
\begin{enumerate}
\item[]No state of a mind can cause a state in another mind or body and no body can cause a state in another body or mind (basically, minds and bodies can't interact, minds and minds can't interact, and bodies and bodies can't interact).
\item[]Every state of a substance which wasn't a miracle and wasn't its starting state, was caused by the previous state of that substance (basically, how some substance was determines how it will be).
\item[]Minds and bodies are programmed (or pre-determined) to behave in mutual coordination with each other.
\end{enumerate}
This is the Pre-Established Harmony stance. The first claim gives us our answer to the Causation Problem: namely, there isn't any causation between the mental and the physical. Rather, because of the third claim, it merely appears that there is causation; there's correlation but not causation. The second claim smooths out any wrinkles which may appear in the stance because it gives us that the world is deterministic, lights and clockwork.
For most versions of Pre-Established Harmony out there, it would seem, the `programming' of the substances is arranged by God or some other divine architect.\footnote{This can be related to divine foreknowledge as a problem for free will.} Many people think that God is all-knowing, and this will appear again when we discuss arguments for and against the existence of God. If this is correct, then we can explain this by saying that God was the one who programmed the substances. But this leads to a further worry, and a potential problem.
As I mentioned before, many people like Substance Dualism because it will get you some kind of Free Will, but Pre-Established Harmony denies the possibility of a substance doing other than how it was programmed; everything in the world is deterministic. This denies the possibility of Free Will. People will have the illusion of control, but that control is much like a little kid holding a toy steering wheel. Sure, they may mimic, without realizing it, the movements of the driver perfectly, but they are not the one driving the car.
\section{Part \thechapcount.\theseccount: Monism/Physicalism}\stepcounter{seccount}
Another way to solve the causation problem and Mind-Body Problem is to say that there’s actually only one kind of substance in the world. This is a rejection of dualism and is called \gls{monism}. It can come in two forms.
\newglossaryentry{monism}
{
name=monism,
description={The stance that the world is composed of only one kind of substance.}
}
The first of these forms is called \Gls{idealism}. This is the stance that there are only mental substances in the world. So, and I mean this jokingly, be nice to your table, it has feelings. The second form, which is the one we will really be concerned with for this class (though idealism does still have its adherents), is called \Gls{physicalism}.
\newglossaryentry{idealism}
{
name=idealism,
description={The stance that the world consists of only one general kind of substance, mental.}
}
\newglossaryentry{physicalism}
{
name=physicalism,
description={The stance that the world consists of only one general kind of substance, physical.}
}
As a stance on the Mind-Body Problem, as well as in other areas (though not as commonly), Physicalism comes in several different forms. But to really understand the distinction between these, we should cover what is meant by the terms ``type" and ``token." A type is a general class of things. For example, `tree' is a type, same with `car'. There are many individual things which are labeled as trees or as cars. Tokens, on the other hand, are individual instances of a type. When we talk about various things, it's useful to be careful about whether we are dealing with types or with tokens. For example, if someone were to claim that `lying is morally wrong', we would need to know whether they are talking about all cases where a person knowingly misinforms another or just an individual instance of doing so. If they are talking about the type of action and labeling all cases of lying as wrong, then all we need to do is point to a case where it's OK to lie and that would disprove their claim. On the other hand, if they are talking about a token of the action, then we would need to look closely at the individual case to try and prove them wrong.
\subsection{Reductive Physicalism}
The different kinds of physicalism all share something in common, namely, that mental states are physical states, but they differ in whether they think in terms of types or tokens. The first, called ``Reductive Physicalism," thinks in terms of types: it claims that the mental states you have are identical to your physical states, meaning that a type of mental experience maps to a type of physical state (likely in the brain). This kind of theory is far more empirical in nature than the others which philosophers typically deal with and would require a ton of brain scans to set up. The model for this kind of identity (type-identity) is often found in science, where a general class of things is identified with another; for example, how hot an object is just is the mean kinetic energy of its molecules.
Reductive Physicalism is the strongest form which you are going to find; it makes a very bold claim. Some say it is too bold. Reductive Physicalism entails that if two people have the same thought, then their brains had to be lit up (so to speak) in the same way. To some, this just doesn't seem plausible, as people come to the same conclusion about things all the time, but all brains are fundamentally different. One could bite the bullet and say that people have similar thoughts, but never the same thought, but that too requires a ton of experimental data.
All that being said, the Reductive Physicalist does have a solution to the Mind-Body Problem and the Causation Problem. For the Mind-Body Problem, the solution is that the mind is the body, there's no difference; and for the Causation Problem, it's just physical-to-physical causation, so who cares?
\subsection{Non-Reductive Physicalism}
Some people want to keep the physicalism, but don't want to say that all people have different thoughts, that two people can never have the same thought. This is where we get the other major kind of physicalism, Non-Reductive Physicalism. Rather than dealing with types, Non-Reductive Physicalism deals with tokens. Like Reductive Physicalism, Non-Reductive Physicalism says that there's only one kind of substance, namely physical, but Non-Reductive Physicalism claims that there are two different kinds of properties.
One way to think about this is in terms of colors. There are many different ways in which a certain shade of green can be produced. This can be from the particles on the surface of an object being arranged a certain way and having uncolored light bounce off of it or it can be from having colored light bouncing off of it and its particles being arranged in a different way. But, regardless, it's still the same shade of green. Similarly, the mental state, your thought, can be produced from a whole bunch of different arrangements of neurons in your brain. For the Non-Reductive Physicalist, the identification between the mind and the body is one of supervenience.
Supervenience is a bit of a tricky topic, but mostly because it's a word that you hardly ever see; we encounter the concept all the time without realizing it. Supervenience is a kind of relation between two things: one thing supervenes on another when there can't be a change in the first without a change in the second. The first depends on the second. So, for example, whether or not something is beautiful supervenes on its arrangement. If you want to make something more or less beautiful, you fiddle with how it's arranged. Similarly, some claim that the morality of an action supervenes on its results: you can't change the morality of an action without changing what its results were, so if you want to perform the right action, choose the one with the best results. Looking a little more politically, societies supervene on the people in them. So, if you want to change a society, you need to change the people in it (typically this means convincing them of something). And, finally, if the color example worked for you, the color of an object supervenes on the arrangement of the particles on the surface and the light striking it.
Going back to the point at hand, Non-Reductive Physicalism claims that the mental supervenes on the physical, meaning that there can be no change in the mental without a change in the physical. If you want to see this in action, look at videos of people getting fMRI scans. This is a sort of have-your-cake-and-eat-it-too kind of stance: they get that there is something mental `upstairs', but they also get all of the scientific power of Physicalism. This is likely the reason why most philosophers today are Non-Reductive Physicalists.
Non-Reductive Physicalism gets all of the same answers to the Mind-Body Problem as well as the Causation Problem as the Reductive Physicalist, but it is less committed to such bold claims and it gets various other fun results.
\section{Part \thechapcount.\theseccount: Current Developments}\stepcounter{seccount}
Non-Reductive Physicalism and Reductive Physicalism both have the massive support which they do for a reason, but they don't paint the whole picture. If the question of the Mind-Body Problem were just `how does the mind interact with the body (and vice versa)?', then the Physicalist, of either form, would have an easy answer and would seem right. But assuming Physicalism is right, this leads to another problem.
\factoidbox{We know enough to know that the world is completely physical. So if the mind exists, it too must be physical. However, it seems hard to understand how certain aspects of mind—notably consciousness—could just be physical features of the brain. How can the complex subjectivity of a conscious experience be produced by the grey matter of the brain?}
This is the biggest question in the Philosophy of Mind at the moment, known as the Hard Problem of Consciousness. Basically, it asks: how do physical things make something as complicated as the mind? Assuming that everything is physical, where does consciousness come from? This is the end of the road; paths here are still being paved. In the next section of this module, we will explore this problem, and others, in relation to contemporary computer science and artificial intelligence.
There is also a related stance, an epistemological one, called mysterianism, which states that the problem of consciousness, and the Mind-Body Problem in general, is not possible for us to solve. It can also be phrased as saying that we know enough about the world to know that Physicalism is true, but how/why this is the case is beyond the ability of the human brain/mind to answer. Remember that an epistemological question is one which concerns whether or not one can or does know something, whereas a metaphysical question is about whether or not something is the case. Mysterians hold that Physicalism is true, which is a metaphysical stance, but at the same time say that it is impossible for us to know how it works.
\input{mindsbrainsprog}
\stepcounter{chapcount}
\chapter{Part \thechapcount: Can Machines Think?}\setcounter{seccount}{1}
\section{Part \thechapcount.\theseccount: Artificial Intelligence and the Mind-Body Problem}\stepcounter{seccount}
We encounter, in our modern lives, Artificial Intelligence (AI) more often than we would think. The spell-checker on our word-processors, Facebook's advertisements, driving directions on our phones/computers, some baby toys, speech-recognition software, and many other things all have AI built into them. They are made to make the machines intuitive to us and useful. These are examples of what is called `weak AI' (a more precise definition is coming later). When we think of AI, our minds are often plagued by thoughts of I, Robot, Star Trek, and other Sci-Fi stories. But, in the real world, can a machine, just lights and clockwork, have a mind? Can a computer be conscious? Can something like that really understand and really learn? Can there be a ghost in the machine?
Those questions are all different versions of a seemingly simple question: ``can there ever be strong AI?" (again, a more precise definition comes later on). The philosopher John Searle, in his paper Minds, Brains, and Programs, was trying to answer just that. Is it possible for a machine to think?
In our exploration of the Philosophy of Artificial Intelligence\autocite{SEPAI} as it relates to the Mind-Body Problem, we will mostly be looking into the argument made by Searle in that paper and the replies to it, but I will be including examples to make it more relevant to today (as it was written in the 80s) and references to related, more recent works.
\section{Part \thechapcount.\theseccount: Weak vs.\ Strong Artificial Intelligence}\stepcounter{seccount}
Searle starts us off by going down the usual path for philosophy, making distinctions. We don't want to confuse simple forms of AI, like the kind found in a cell-phone, with complex forms, like the kind found in Data from Star Trek. Otherwise, if we were to think that all kinds of AI are the same, we would think that iPhones are iPeople. The kind of AI found in your phone and a lot of other computer-devices (including the one you are reading this on) is what we will call `Weak AI'. This is:
\factoidbox{A form of machine intelligence, focused on a small task or a narrow range of (interconnected) tasks. This is also called `narrow AI'. The principal value or purpose of weak AI is to solve problems in a methodical and precise way which humans either don't have the brain-power or the time to do ourselves.}
Weak AI simulates a person's thinking, typically in an ideal way, to get the best results. AI machines will not have the same unreliable aspects as a human's mind. For example, a suitably robust AI will not experience emotional fatigue which could result in a bias or a missed factor, it will not (unless the programmer built it in) have the cognitive biases which we have seen in the past, and it will not become tired of doing the same task over and over again. For example, imagine that you are driving in an area which you are not familiar with and are using your GPS (in a phone or some other device) and you take a wrong turn. The GPS receives the data about your current position and, from its algorithms, figures out that you are not on the route. From this, it runs other algorithms to generate the fastest route to the destination given your current position. In a very short period of time, it generates the new route and the instructions for you to get to the destination. Imagine having your friend in the passenger's seat (riding shotgun) and having them be your GPS. The speed and accuracy of their directions would be nothing compared to that of the weak AI in the GPS.
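To make concrete just how mechanical this narrow task is, here is a minimal sketch of the sort of route-finding a GPS might perform. This is not the algorithm any particular GPS actually uses; it is a textbook shortest-path method (Dijkstra's algorithm) run over a made-up road network, given here purely as an illustration.
\begin{verbatim}
import heapq

def shortest_route(roads, start, goal):
    """Dijkstra's algorithm: find the fastest route through a road graph.

    roads maps each intersection to a list of (neighbor, minutes) pairs.
    Returns (total_minutes, path) or None if no route exists."""
    queue = [(0, start, [start])]  # (time so far, position, path taken)
    visited = set()
    while queue:
        time, here, path = heapq.heappop(queue)
        if here == goal:
            return time, path
        if here in visited:
            continue
        visited.add(here)
        for neighbor, minutes in roads.get(here, []):
            heapq.heappush(queue, (time + minutes, neighbor, path + [neighbor]))
    return None

# A made-up road network. After a wrong turn leaves you at 'C',
# `recalculating' is just re-running the search from there.
roads = {'A': [('B', 5), ('C', 2)],
         'B': [('D', 4)],
         'C': [('B', 1), ('D', 9)],
         'D': []}
print(shortest_route(roads, 'C', 'D'))  # -> (5, ['C', 'B', 'D'])
\end{verbatim}
Nothing in this procedure requires knowing what a road or a destination is; it is number-crunching over a table, which is exactly the sense in which weak AI `simulates' thinking.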
Your computer or phone or what have you likely has many different compartmentalized weak AI programs in it, each activated for a particular task. The GPS AI is most likely not the same as the predictive-text AI. Human persons (a concept we will encounter in Module \ref{ch.modnine}) aren't like that. Our intelligence is more holistic. The `mind' which figures out your math homework is the same one which imagined the above example. This is where we get to Strong AI. This is:
\factoidbox{A form of machine intelligence which is not focused on a small task or on a narrow range of tasks, but can handle just about any form of task which is thrown its way. Rather than merely simulating a person's thinking, the AI is, in fact, thinking. The principal value here is the same as that of any conscious human. Strong AI machines have minds.}
Science Fiction is full of what we can use as examples of Strong AI. I have already used Data as an example, but we also have R2D2 from Star Wars, Sonny from I, Robot, Wall-E (from the movie of the same name), and The Terminator (and many others if we include the plots of video games, like Cortana in Halo). Searle has no problems with weak AI and, looking at the progress of technology, he was likely correct not to have any metaphysical qualms with it (ethical qualms are another story). But he has problems with the notion of strong AI. He does not think that strong AI is possible.
\section{Part \thechapcount.\theseccount: Roger Schank's AI}\stepcounter{seccount}
Searle uses Roger Schank's AI as an example, because it's the one he's most familiar with. It should be noted that this AI is not special, but rather the basic way it works is found in AI even today, as I will explain in a moment. Schank's goal with this machine was to simulate how a person interprets a story when they're not given complete information. For example, take this story/question:
\thoughtex{Burnt Burger}{A man walks into a restaurant and orders a burger.
It comes to him burnt to a crisp and he storms out angrily without paying or leaving a tip.
Did the man eat the burger?}{burntburger.jpg}{A man looking angry at a burnt burger.}
An answer to this question is not given in the story; it's not like you were asked ``did he order a salad?" The answer is not spoon-fed to the machine or to us; rather, we need to draw logical conclusions given background information. In real life, many of the questions which we encounter do not contain all of the information necessary to answer them accurately. Most of the time, especially in the `real world', we are going to need to make certain jumps in our reasoning based on background information, such as our knowledge of normal human behavior.
More than likely, you answered the above question with something like ``no, he didn't". The AI also generates the answer ``no". This is because the AI in the machine was programmed or trained with cases involving reasonable human behavior and made predictions based on those assumptions. Next, take the following story (starting the same way):
\thoughtex{Nice Burger}{A man walks into a restaurant and orders a burger.
When it arrives, he is pleased with it and leaves a large tip before paying his bill and thanking the waiter.
Did he eat the burger?
}{Goodburger.jpg}{A man looking pleased with a pleasant looking burger.}
Yet again, the answer is not spoon-fed to the machine or to you. Rather, we need to use some background information. While it is possible that the man did eat the burger in the first case and did not in the second, this is far from likely. You probably answered the second question with ``yes, he did". Similarly, the AI also says ``yes". In this case, too, we are relying on our training and experience in dealing with real-world situations, our past. If, for example, we had trained the machine or raised a baby with only experiences which were the opposite of the normal, then the baby and the machine would likely generate responses opposite to ours, given our normal upbringing.
But, how do AI machines do this? Well, the bedrock-level methodology has not changed much over the years, only getting faster and gaining larger databases of relevant cases. The heart of it is a decision engine. This is some system of algorithms or if-then-else style statements which takes in some input and produces some output. For example, we could have something like this going on:
\factoidbox{If a person did not leave a tip, assume that they did not like the food. If a person did not like the food, assume that they didn't eat all of it.}
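As a minimal sketch of a hand-written decision engine of this sort, consider the following. The story `features' (whether he tipped, paid, or got burnt food) and the exact rules are invented for illustration; Schank's actual programs were far more elaborate.
\begin{verbatim}
def did_he_eat(left_tip, paid_bill, food_was_burnt):
    """A toy decision engine for `did the man eat the burger?'.

    Hand-coded if-then-else rules standing in for background
    knowledge about normal human behavior."""
    if not left_tip and not paid_bill:
        # No tip and no payment: assume he disliked the food,
        # and that people don't eat food they dislike.
        return "no"
    if food_was_burnt:
        # He paid, but the food was ruined: unclear either way.
        return "probably not"
    # Tipped and paid: assume a satisfied customer ate his meal.
    return "yes"

# The Burnt Burger and Nice Burger stories, reduced to features:
print(did_he_eat(left_tip=False, paid_bill=False, food_was_burnt=True))  # no
print(did_he_eat(left_tip=True, paid_bill=True, food_was_burnt=False))   # yes
\end{verbatim}
The engine answers the two stories the way we do, but only because someone baked our background knowledge into the rules.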
The current rage in computer science involves using what are called neural networks\footnote{They are called neural networks because the connections between stimulus and response resemble the connections between neurons in the brain and they are strengthened in much the same way (by repeated exposure).} and machine learning (also called `learning algorithms'). In this case, the decision engine is generated by the machine itself. The AI starts off with a large database of cases along with an `answer key' of sorts. The easiest example would be a large number of pictures of hand-written numbers. The machine scans the picture and then makes a guess using some previously given (likely by the programmer) algorithm. Often, this is based on the contrasting pixels of the image and arrangements of similarly colored pixels. If it generates the correct number, great, and it moves on to the next. If, on the other hand, the machine gets the wrong number, it adjusts the amount of `weight' it gives some factor in the image (in this case the pixels) until it generates the correct answer. Doing this millions of times with millions upon millions of examples creates a decision engine which can accurately predict the answer for cases like the ones it has been trained to handle.
My case above using hand-written numbers is very oversimplified. For a more detailed account of what is really going on, the YouTuber 3Blue1Brown has several videos explaining this.\autocite{BlueBrown} This methodology for machine learning and creating AI is not limited to cases like hand-written numbers and stories; rather, it's found all over the place.
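To give a feel for that weight-adjusting loop, here is a heavily simplified sketch: a single-layer perceptron learning to sort two kinds of tiny, made-up `pixel patterns'. Real digit recognizers use many layers and gradient descent (see the 3Blue1Brown videos cited above); only the core move, nudging the weights whenever the guess is wrong, is the same.
\begin{verbatim}
# Each `image' is a 4-pixel pattern; the patterns and labels are invented.
images = [([1, 0, 1, 0], 1), ([0, 1, 0, 1], 0),
          ([1, 1, 1, 0], 1), ([0, 0, 1, 1], 0)]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
rate = 0.1  # how big a nudge each mistake earns

for epoch in range(20):  # many passes over the examples
    for pixels, answer in images:
        total = bias + sum(w * p for w, p in zip(weights, pixels))
        guess = 1 if total > 0 else 0
        error = answer - guess  # 0 if right, +1 or -1 if wrong
        # Wrong guesses shift weight toward the pixels that mattered.
        weights = [w + rate * error * p for w, p in zip(weights, pixels)]
        bias += rate * error

for pixels, answer in images:
    total = bias + sum(w * p for w, p in zip(weights, pixels))
    print(pixels, "->", 1 if total > 0 else 0, "(correct:", answer, ")")
\end{verbatim}
After training, the guesses match the answer key, yet the finished `engine' is nothing but a short list of numbers; whether adjusting those numbers amounts to learning in any rich sense is exactly what is at issue in the rest of this module.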
One shocking example of this in the real world is the case of Ashok Goel's Jill Watson\autocite{JillWatson1}. Jill is an AI built to be a Teaching Assistant for Georgia Tech's massive online courses on programming artificial intelligence (meta, I know). Speaking from experience, for massive courses like these, the Teaching Assistant is often bogged down by answering the same question hundreds of times a day. Though the questions are phrased differently (like how the numbers in the hand-written cases look different), they all boil down to the same answer. So, the professors at Georgia Tech collected a database of questions, sorted them by type, and gave Jill an answer key. After successfully generating correct responses for the initial database, Jill was given real-time questions being submitted in a real class (though she was not actually able to reply; she replied in a mirror-forum) and was then graded by a real person on her responses. Once she had a 97\% success rate (which is higher than some people I know), they let her loose into a real classroom forum. The remarkable thing is that very few people figured out that Jill was an AI. And even the ones who did only did so because the class was on AI and they were already skeptical. In fact, the professor for the course needed to tell them that she was a machine. In the latest version of Jill which I know of, none were able to identify her as an AI.
\section{Part \thechapcount.\theseccount: The Chinese Room Thought Experiment}\stepcounter{seccount}
Given Jill Watson, hand-writing recognition, and Roger Schank's AI, we have some basic knowledge of how AI machines make decisions and generate their answers, which is enough to get the point of what follows. The core commonality across all AI programs is a decision engine. How that engine is generated is not all that important (though using the machine learning and neural network systems seems to be the most efficient and most accurate), but what is important is that there's this engine. All Searle needs to make his argument work is that the machine takes input, runs it through an engine like Jill's, and then spits out an output.
\thoughtex{The Chinese Room}{Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.}{Chineseroom2.jpg}{A young woman in a room in front of a lecturn surrounded by boxes of Chinese characters.}
To help tie this together: in the case of Jill, the boxes of Chinese symbols are her database of different answers. Similarly, in the case of Schank's AI, the possible responses to questions about stories are the Chinese symbols. The instruction manual is the core thing which we need to look into; this is the decision engine or sorting algorithm used by the computer to generate the answers. It can be written by a programmer with way too much time on their hands or generated through the kind of machine learning and neural networks I described before; that really doesn't matter. What really matters is that all the machine is doing is, basically, crunching the numbers, running the input through a bunch of `if-then-else' style sorters to generate the answer.
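As a caricature of the room's procedure, here is a sketch. The `rule book' below is a made-up lookup table and the symbol strings are placeholders; the point is only that the procedure can be followed perfectly without any idea of what the symbols mean.
\begin{verbatim}
# A caricature of the Chinese Room: match the incoming shapes
# against a rule book and copy out the listed reply. Nothing in
# the procedure depends on what any symbol means.
rule_book = {
    "SQUIGGLE SQUOGGLE": "SQUOGGLE SPLOTCH",
    "SPLOTCH SQUIGGLE": "SQUIGGLE",
}

def operate_room(input_symbols):
    """Follow the rules: look up the input, pass out the reply."""
    return rule_book.get(input_symbols, "SQUOGGLE")  # default, also meaningless

print(operate_room("SQUIGGLE SQUOGGLE"))  # -> SQUOGGLE SPLOTCH
\end{verbatim}
Whether the table was written by hand or grown by a training loop like the one sketched earlier, the operator's job is the same: pure symbol matching.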
If I were to lock you in the room with symbols/words not of your native tongue (one that you don't know), you would essentially be doing what the machine is doing, looking at the input, running through the instruction manual, and then giving the output. There would be no real understanding going on, no real learning in the process. If, on the other hand, I put you in a room full of symbols from your native tongue, there would be something more going on, something extra. There's actually interpretation happening. I don't know a lick of Chinese, but I do know English, Latin, and the basics of a few other languages. In the case of Chinese, I would be just spitting out uninterpreted symbols. In the case of English or Latin, I would be interpreting the symbols, there would be intention or thoughts behind the answers given.
\section{Part \thechapcount.\theseccount: Strong AI Claims and Replies}\stepcounter{seccount}
Generalizing off of the Chinese Room Thought Experiment, the proponent of Strong AI would make two claims:
\begin{enumerate}
\item The appropriately programmed AI (in this case the entirety of the Chinese Room) can be said, truthfully, to literally understand the applied case (in this case, the meaning of the Chinese symbols).
\item The machine and its programs explain the human ability to understand and reply the way we do in such cases (in this case, linguistic comprehension).
\end{enumerate}
Searle thinks that neither the evidence nor the way in which even the broadest AI would function supports these claims. Here is his reasoning:
In regards to the first claim, that the machine literally understands the applied case, it seems clear that the person in the Chinese Room is the same as the computer (or what have you) running the program. Though the person might generate answers indistinguishable from those generated by a native Chinese speaker, thereby being able to pass the Turing Test, there still would not be any real understanding, like we would have if an English speaker were put in the room with English symbols. This gives us the first argument:
\begin{earg}
\item[]If strong AI is possible, then the mere manipulation of symbols in a language would be enough to understand that language.
\item[]The mere manipulation of symbols in a language is not enough to understand that language.
\item[]Therefore, strong AI is not possible.
\end{earg}
For the second claim, that the machine explains the human ability to understand and reply in the way we do, Searle thinks that the programs described do not provide the sufficient conditions for understanding (if I am running the program, then I am understanding). This is because of the core way in which they operate: formal symbol manipulation. It's possible to run the program without understanding (as in the case of the Chinese Room). On the other hand, do the programs provide a necessary condition for understanding (if I am understanding, then I am running the program)?
The person who thinks strong AI is possible might go down that route, claiming that when I understand a story or dialogue in English, I am doing symbol manipulation, just of a far more complex and intricate kind. Searle is smart to point out that he did not show that this is false (claiming only that the machine does not give the sufficient condition for understanding). But he does go a bit further, claiming that this is a truly incredible claim (an unbelievable claim). The plausibility of this claim rests on two further claims: first, that it is, in fact, possible to make a program indistinguishable from a native speaker of a language; second, that human persons are, at some level of description, programs. If you deny either of those, then you can't think that strong AI is possible.
In the case of the first claim, that it's possible for a machine to be indistinguishable from a native speaker, the jury is still out. In all of the examples which I have encountered (and I will update this if I ever encounter a case otherwise) of a machine managing to trick a person into believing that it was a person, the machines were very sophisticated, with a lot of exposure to various texts and responses, but they were programmed to simulate a non-native speaker to play on our sympathies and make us hold them to a different standard than we would a person who was clearly fluent in English and not prone to make simple errors. In order to prove that it is possible, it would take an advancement in computer science which we are still awaiting, if it is possible at all.
In regards to the second claim, if one is a physicalist, then one could be willing to accept that human persons are on some level machines, functioning according to our programming (more on this in the Free Will Debate). However, there is a massive gap between the explanation in terms of the way our neurons are firing and the feeling, the sense, of the world around us. Much like the Chinese Room, there is no room for our mental lives, understanding, and feeling in a purely physical explanation of the brain. The understanding must be over and above the matter in the brain (at least, that's what Searle is hinting at).
\section{Part \thechapcount.\theseccount: Other Potential AI Formats and Rebuttals}\stepcounter{seccount}
Searle, during his time, presented this thought experiment around and got a few different replies to it, which he numbers along with the general regions where he got those replies. Many of the replies can be seen today in how some computer scientists are trying to make even more powerful AI. So, here we go (and these are fast spark-notes; for most of them there are far more detailed replies and further rebuttals):
\subsection{The Systems Reply}
\factoidbox{While it's true that the person in the Chinese Room does not understand Chinese, the system as a whole does. Understanding should not be ascribed to the individual, but to the system as a whole.}
For this, Searle's reply is quite simple: imagine that the person internalized the entire manual. The person has been locked in the room for so long that they have memorized the symbol-manipulation rules. Would that person understand Chinese? Just as before, there seems to be something missing, some intentionality, which marks real understanding. We see this often in second-language classrooms, or at least the old-school ones which I dealt with for a time: memorize the rules and some vocabulary (not knowing what the words mean), and then get thrust into the language. Sure, the person might make the right replies in the right circumstances, but, essentially, the lights would be on and no one would be home.
\subsection{The Robot Reply}
\factoidbox{Rather than the program being in an immobile computer, suppose that we put it in a robot. The robot would not be `bolted to the floor', but would be free to move around, eat, drink, and make coffee; it would get sensory input from cameras and sensors; and all of this would be controlled by the computer `brain'.}
Something akin to this has been built, and it is one of the more successful examples of AI; it's not quite 100\%, and it still has the same problems. The difference between this kind of machine and the first kind, Jill or Schank's AI, is that this reply concedes that the machine needs more than just formal symbol manipulation, more than just inputs and outputs: it needs the ability to interact with the world and develop a database from real-world experience. But adding in the `perceptual' and `motor' qualities doesn't add anything to the base-level way that machines `think'. For example, suppose that we replay the Chinese Room case, but this time the input is coming from a camera and sensors in a robot. The outputs would need to be more complex, obviously, but at the end of the day it's still just a decision engine; there's no understanding in the machine.
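To see why Searle thinks the sensors change nothing, here is another sketch (again Python, with made-up sensor readings and a made-up rule table): the camera just produces more symbols, which flow through the very same matching step as before.
\begin{verbatim}
# Toy "robot": perception only produces more symbols, which feed the
# same rule-following engine as the Room. All names are illustrative.

RULES = {
    "RED-BLOB-AHEAD": "SAY: strawberry!",
    "DARK-EVERYWHERE": "DO: turn-on-lamp",
}

def camera_to_symbol(pixels):
    # Stand-in for real vision: reduce raw input to one symbol token.
    return "RED-BLOB-AHEAD" if sum(pixels) > 100 else "DARK-EVERYWHERE"

def robot_step(pixels):
    symbol = camera_to_symbol(pixels)
    return RULES[symbol]  # the same formal matching as before

print(robot_step([60, 70, 80]))  # -> SAY: strawberry!
\end{verbatim}
The inputs are richer and the outputs drive motors instead of printed cards, but the step in the middle is unchanged.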
\subsection{The Brain Simulator Reply}
\factoidbox{Suppose that we make a program which doesn't represent information which we have about the world, but rather simulates the actual sequence of neurons firing in a human brain. It takes in stories, simulates how the brain fires upon reading the scripts, and issues outputs the way the brain would command a body.}
Now, where is the understanding in this system? Calling back to my A\&P courses from my Community College days: the brain is, simply put, an arrangement of neurons. The neurons fire and cause others to fire across the brain according to their arrangement. This is just a really, really complicated decision engine, a super-complex set of if-then-else statements. The program would only be simulating the structure of the brain, not the mind. If Non-Reductive Physicalism is true, then this, we could say, is conscious… maybe…
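If you want a feel for what `simulating the sequence of neurons firing' amounts to as if-then-else machinery, here is a tiny sketch (Python; three invented neurons with invented weights and thresholds, nowhere near a real brain's roughly 86 billion neurons):
\begin{verbatim}
# A three-neuron toy "brain simulator": each neuron fires (1) when its
# weighted input crosses a threshold, else stays quiet (0).
# Weights, thresholds, and wiring are all made up for illustration.

def neuron(inputs, weights, threshold):
    # The whole "decision": an if-then-else over a weighted sum.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def brain_step(stimulus):
    n1 = neuron(stimulus, [0.5, 0.5], threshold=0.7)
    n2 = neuron(stimulus, [0.9, -0.2], threshold=0.4)
    n3 = neuron([n1, n2], [1.0, 1.0], threshold=1.5)  # needs both to fire
    return n3

print(brain_step([1, 1]))  # 1: the "brain" fires, but where is the feeling?
\end{verbatim}
Scale this up by billions of units and the arrangement becomes staggeringly complex, but nothing new in kind appears: it is thresholds all the way down.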
\subsection{The Combination Reply}
\factoidbox{What if we combined all three of them together? While each had a problem, namely versions of the Chinese Room, perhaps combining them offers a way out. For example, what if we made a robot with a body indistinguishable from a human's? In the `skull' cavity there is a brain-shaped computer, and this computer runs a simulation of a human brain. We raise it as a human child, making no special accommodations for it (beyond those a parent would give their child), and so on. Imagine that the behavior of this robot is indistinguishable from a human person's. Surely, in this case, we would say that it has intentional states (feelings and understanding).}
In the real world, there have been attempts at this, with a robot (though we can't simulate the brain yet) given a blank slate and raised as a child. The results of these cases were quite promising, but due to the limitations of the technology, the brain power never got very far. Searle wants to point out a difference between appearance and reality. Certainly, if we come across a human on the street and they behave in the ways we have come to expect people to behave, we would attribute intentional states (feelings) to them. But these are all appearances. NPCs in video games have gotten quite good at appearing to have emotions and reactions; it's what makes contemporary video games so awesome. For some interesting content on this, check out The Uncanny Valley\autocite{extrahistory2012} (in our case it's the reactions of the NPC, not the appearance, but the point still applies). But does an NPC really have those emotions? No, they don't: they are running a script. At the end of the day, the machine is still lights and clockwork.
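To make the `running a script' point concrete, here is a toy NPC behavior sketch in Python (the states, thresholds, and dialogue lines are all invented for illustration): the `emotion' is just a counter, and the `reaction' is just a branch on it.
\begin{verbatim}
# A toy NPC "emotion" script: convincing reactions from the outside,
# a counter and a threshold on the inside. Dialogue is illustrative.

class NPC:
    def __init__(self):
        self.fear = 0

    def react(self, player_action):
        if player_action == "draw sword":
            self.fear += 1
        elif player_action == "sheathe sword":
            self.fear = max(0, self.fear - 1)
        if self.fear >= 2:
            return "P-please, I have a family!"   # looks like terror...
        return "Lovely weather today, traveler."  # ...but it is a branch

npc = NPC()
print(npc.react("draw sword"))
print(npc.react("draw sword"))  # the second threat crosses the threshold
\end{verbatim}
From the player's side, the NPC seems to grow frightened; on the inside, a variable ticked past two. But a rebuttal to this reply leads to: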
\subsection{The Other Minds Reply}
\factoidbox{How do you know that other people have intentional states (feelings)? I have first-person access to my mental states, but other people's states are totally closed off to me. All I have to go on is their behavior. So, if you are going to attribute mental states to other humans (who can pass our intuitive tests), then by the same principle you must attribute them to a computer which can pass the tests.}
This reply is, at first, very intuitive and seems to cut to the core of Searle's objections. Basically, the only reason that you think that I have a mind is that I look like you and I act a certain way. Isn't that enough for me to actually have a mind? However, as is often the case in philosophy, the beauty of a reply is only skin-deep. This reply makes a jump from an epistemic question (a question about what and how we know) to a metaphysical question (a question about what is actually the case). The core question for the strong AI proponent is not whether we know or believe that a machine has feelings, but whether the machine actually does have feelings. The study of knowledge is a separate topic which we will cover in Module \ref{ch.modsix}. Psychology and our daily lives presuppose the reality and knowability of other minds, just as the physical sciences and our daily lives presuppose the reality and knowability of the external world. These presuppositions are not, necessarily, actually the case.
\subsection{The Many Mansions Reply}
\factoidbox{The core of Searle's argument makes an assumption: that AI is only about analog or digital computers. This just so happens to be the present state of technology, but what about the future? Whatever processes are necessary for these intentional states, eventually some system will be able to replicate them, and that is what we will call strong AI. While we have weak AI today, nothing rules out that some other means of making AI could generate a strong one.}
The only reply to this is that it moves the goalposts. The reason we are making AI in the way we are, aside from making life easier on ourselves, is to hopefully explain some aspect of human intentionality. If we define strong AI as whatever gets us a genuinely understanding and feeling thing, then making a baby is making strong AI. The core thesis, found among AI researchers even today, is that the mind is the brain: physical (remember that this is the core premise of physicalism!). If this thesis is reframed or redefined so that it is no longer physicalism, then the objections no longer apply, but its proponents no longer have a testable hypothesis.