Artificial intelligence (AI) has progressed rapidly in recent years. Today we have computers, phones and other hardware that exhibit abilities and intelligence which, in some respects, make humans seem primitive.1 Due to the rapid progress of this technology, many postulate that AI may even become conscious, and that the implications would undermine religious narratives. If AI can become conscious, then there could be a purely physical explanation for what makes us human.2

The concept of the soul in Islam, known as rūḥ in Arabic, is something about which we have little revealed knowledge. However, we can say that it is something "invisible," coming from a transcendent reality. From the physicalist perspective, if the soul, understood as the immaterial thing that animates the body, could be replaced by something physical, religion would be undermined.

The physicalist may argue that consciousness and the ability to experience subjective conscious states (also called phenomenal states) can be explained by artificial intelligence: consciousness becomes analogous to a computer program. However, there is a difference between weak AI and strong AI. Weak AI refers to the ability of a computer system to exhibit intelligence; this may include solving complex mathematical equations or beating several opponents in a game of chess in less than an hour. Strong AI refers to a computer system being truly conscious, that is, having the ability to experience subjective conscious states, which includes giving things meaning. Weak AI is possible and has already been developed. Strong AI, however, is not possible to achieve. The reasons for this are as follows.

The first reason, which is more of a general point, is that computers are not independent systems with the ability to reason. To characterize something as conscious implies that it is an independent source of rational thought. However, computers (and computer programs) are designed, developed and manufactured by human beings who are independently rational. Computers are therefore just an extension of our ability to be intelligent. William Hasker explains it in these words:

"Computers function as they do because they have been built by human beings endowed with rational perception. In other words, a computer is simply an extension of the rationality of its designers and users; it cannot become an independent source of rational thought any more than a television set can become an independent source of news and entertainment."3

The second reason is that humans are not merely intelligent; their reasoning has intentionality. This means that our reasoning is about, or directed at, something, and that it is associated with meaning.4 In contrast, computer programs are not characterized by meaning. Computer systems simply manipulate symbols. For the system, the symbols are not about or directed at anything; all the computer can "see" are the symbols it is manipulating, regardless of what we may think those symbols mean. Computer programs are based simply on syntactic rules (the manipulation of symbols), not semantic ones (meaning).

To understand the difference between semantics and syntax, look at the following sentences:

  • I love my family.
  • αγαπώ την οικογένειά μου.
  • আমি আমার পরিবারকে ভালবাসি.

These three sentences mean the same thing: I love my family. This refers to the semantics, the meaning of the sentences. But the syntax is different. In other words, the symbols used are different. The first sentence uses "symbols" in English, the second in Greek and the last in Bengali. From this the following argument can be developed:

  1. Computer programs are syntactic (syntax-based);
  2. Minds have semantics;
  3. Syntax by itself is not sufficient for semantics, nor is it constitutive of semantics;
  4. Therefore, computer programs themselves are not minds.5

Imagine that an avalanche somehow arranges the rocks falling from the mountains to form the words "I love my family". To claim that the mountain knows what the arrangement of rocks (symbols) means would be untenable. This indicates that the mere manipulation of symbols (syntax) does not give rise to meaning (semantics).

Computer programs are based on the manipulation of symbols, not on meaning. Similarly, I cannot know the meaning of the Bengali sentence above simply by manipulating its letters (symbols). No matter how many times I manipulate the Bengali letters, I will not come to understand the meaning of the words. This is why semantics requires more than just having the correct syntax. Computer programs work with syntax, not semantics. Computers do not know the meaning of anything.
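The gap between syntax and semantics can be made concrete with a small illustrative sketch (a made-up example of mine, not one from the article's sources). A program that "processes" the three sentences above only ever handles them as sequences of characters; it can count, reverse or re-encode the symbols, but none of those operations gives it access to the fact that all three sentences mean the same thing.

```python
# Illustrative sketch: a program manipulating the three sentences as raw symbols.
# All three sentences *mean* "I love my family", but the program only ever sees
# character sequences (syntax); the shared meaning (semantics) is nowhere in the data.

sentences = [
    "I love my family.",            # English
    "αγαπώ την οικογένειά μου.",    # Greek
    "আমি আমার পরিবারকে ভালবাসি.",    # Bengali
]

for s in sentences:
    # Purely syntactic operations: counting symbols, reversing them, encoding them.
    print(len(s), s[::-1], s.encode("utf-8")[:12])

# Nothing above, nor any further manipulation of the symbols, tells the program
# that the three strings express the same thought; that interpretation is
# supplied entirely by human readers.
```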

John Searle's Chinese Room thought experiment is a compelling example showing that the mere manipulation of symbols does not lead to an understanding of their meaning:

"Imagine you are locked in a room and in the room are several baskets full of Chinese symbols. Imagine that you (like me) don't understand a word of Chinese, but you are given a rule book in English for manipulating Chinese symbols. The rules specify the manipulation of symbols purely formally, in terms of their syntax, not their semantics. So, one of the rules might say, "Take the squiggle so-and-so out of basket number one and place it next to the squiggle so-and-so in basket number two." Now suppose some other Chinese symbols are introduced into the room and give you more rules so you can return Chinese symbols out of the room. Suppose that, unbeknownst to you, the symbols you enter the room are called 'questions' by the people outside the room, and that the symbols you return outside the room are called 'answers to the questions'. Suppose further that the programmers are so good at designing programs, and that you are also so good at manipulating the symbols, that pretty soon your answers will be indistinguishable from those of a native Chinese speaker. And there you are locked in the room shuffling your Chinese symbols and dealing out Chinese symbols in response to incoming Chinese symbols... The point of the story is that, from the point of view of an outside observer, as implemented by the formal computer program, you behave exactly as if you understand Chinese, but the truth is that you don't understand a word of it."6

In the Chinese Room thought experiment, the person inside the room is simulating a computer. He or she manipulates the symbols according to the rule book in a way that makes it appear, to those outside the room, that he or she understands Chinese. However, the person in the room does not understand the language; he or she is simply imitating that state (of understanding). Searle concludes:

"Having the symbols by themselves-having the syntax alone-is not enough to have the semantics. Merely handling symbols is not enough to guarantee knowledge of what they mean." 7

The objector might respond by arguing that, although the computer program does not know the meaning, the system as a whole does. Searle calls this objection "the systems reply."8

But how do we know that the program does not know the meaning? The answer is simple: it has no ability to assign meaning to symbols. And since a computer program cannot assign meaning to symbols, how could a computer system, which depends on the program, understand the meaning? Understanding cannot be produced just by having the right program. Searle presents an extended version of the Chinese Room thought experiment to show that the system as a whole cannot understand meaning: "Imagine that I memorize the contents of the baskets and the rule book, and do all the calculations in my head. You can even imagine that I do it outside (of the room), in the open air. In reality, there is nothing in the 'system' that is not in me, and since I don't understand Chinese, neither does the system."9

Lawrence Carleton postulates that Searle's Chinese Room argument is invalid. He argues that it commits the fallacy known as the 'denial of the antecedent', because "we are given no evidence that there is only one way to produce intentionality."10 In his view, Searle assumes that only brains have the processes necessary to handle and understand symbols (intentionality), and that computers do not. Carleton presents the fallacy as follows:

"To say, 'Certain equivalents of brain processes produce intentionality' and 'X has no such equivalents,' therefore, 'X has no intentionality,' is to commit the academic fallacy 'denial of the antecedent.'"11

However, Dale Jacquette argues that Searle does not commit this fallacy, since one plausible interpretation of Searle's argument is:

"If X is (intrinsically) intentional, then X has certain brain process equivalents."12

Jacquette believes that Searle's argument makes a concession to functionalism. He notes that functionalists "hold that there is nothing special about the protoplasm, so that any properly organized matter, which instantiates the correct input-output program, duplicates the intentionality of the mind."13 Searle may also appear to admit that machines could have the capacity to understand Chinese. However, he states: "I see very strong arguments for saying that we could not say such a thing about a machine, since machine functioning is defined solely in terms of computational processes on formally defined elements..."14

If computers cannot give meaning to symbols, what kind of conscious machine is Searle referring to? Even if a robot were postulated (something Searle rejects), it would still present insurmountable problems. Machines are based on "computational processes on formally defined elements." The mere possibility of a machine having understanding (attributing meaning to symbols) would require something more than those processes and elements. Does such a machine exist? The answer is no. Could such machines exist? If they could, they would probably not be described as machines, since something more than "computational processes on formally defined elements" would be required.

According to Rocco Gennaro, many philosophers agree with Searle's view that robots cannot have phenomenal consciousness.15 Some philosophers argue that, in order to build a conscious robot, "qualitative experience must be present,"16 and they are pessimistic about the prospect. This pessimism has been explained in the following words:

"To explain consciousness is to explain how this subjective internal appearance of information can arise in the brain, so to create a conscious robot would be to create a subjective internal appearance of information within the robot ... no matter how advanced it is, it probably won't make the robot become conscious, since phenomenal internal appearances must also be present. "17

AI cannot give meaning to symbols; it simply manipulates them in very complex ways. Therefore, there will never be strong AI. In conclusion, religion is not undermined.

 

Glossary:

Transcendent: That which lies beyond the limits of all possible knowledge.

Physicalism: Physicalism is the metaphysical thesis that consciousness can be reduced to or be identical with physical processes. Physicalism is also a philosophical doctrine about the nature of the real, which asserts that what exists is exclusively physical, including the mental or "soul". Physicalism is a form of monism and is closely related to materialism and naturalism.

Phenomenal: Pertaining or relating to the phenomenon as an appearance or manifestation of something.

Analogous: Having an analogy (a relation of similarity between different things) with something else.

Perception: Inner sensation resulting from a material impression made on our senses. Knowledge, idea.

Constitutive: That which forms an essential or fundamental part of something and distinguishes it from others.

Fallacy: An error in reasoning that makes an argument invalid; more broadly, a deceptive or misleading argument.

Intrinsic: Essential; belonging to something by its very nature.

Functionalism: In the philosophy of mind, the thesis that mental states are defined by their functional roles (their relations to inputs, outputs and other mental states) rather than by the material that realizes them. The term also names a sociological and anthropological doctrine that considers society to be made up of parts that function to maintain the whole, and, in linguistics, a set of currents according to which linguistic elements are defined by their function in the linguistic system.

Protoplasm: The living substance of a cell, comprising the cytoplasm and the nucleus. Cytoplasm: the cellular region between the plasma membrane and the nucleus, together with the organelles it contains.

Subjective: Pertaining or relating to the subject's way of thinking or feeling, and not to the object itself.

Undermine: To weaken something or someone, especially in the moral aspect.

 

Author: Hamza Andreas Tzortzis

Translator: Sh. Mohammad Idrissi

Article taken from Sapience Institute

 

References:

1 Physicalism is the view that consciousness can be reduced, explained, or otherwise be identical to physical processes.

2 In the philosophy of mind, physicalism or materialism are synonymous terms, although they have different histories and meanings when used in other domains of knowledge.

3 Hasker, William. Metaphysics (Downers Grove, IL: InterVarsity, 1983), 49; also see "The Transcendental Refutation of Determinism," Southern Journal of Philosophy 11 (1973): 175-83.

4 Searle, John, Intentionality: An Essay in the Philosophy of Mind (Cambridge: Cambridge University Press, 1983), p. 160.

5 Searle, John (1989). Reply to Jacquette. Philosophy and Phenomenological Research, 49(4), 703.

6 Searle, John (1984) Minds, Brains and Science. Cambridge, Mass: Harvard University Press, pp. 32-33.

7 Searle, John (1990) Is the Brain's Mind a Computer Program? Scientific American 262: 27.

8 Ibid., 30.

9 Ibid.

10 Carleton, Lawrence (1984). Programs, Language Understanding, and Searle. Synthese, 59, 221.

11 Ibid.

12 Jacquette, Dale. "Searle's Intentionality Thesis." Synthese 80, no. 2 (1989): 267.

13 Ibid., 268.

14 Searle, John (1980b) Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 422.

15 Gennaro, Rocco. Consciousness. (London: Routledge, 2017), p. 176.

16 Ibid.

17 Ibid.