According to John Searle Essay Example

  • Pages: 12 (3042 words)
  • Published: August 20, 2017
  • Type: Case Study

According to John Searle, strong Artificial Intelligence devices can only process information through syntax and cannot truly comprehend its meaning. He questions whether a digital computing device, relying solely on syntax, would be capable of making choices regarding social, political, or economic ways of life. Furthermore, he ponders what choices such a device would make and how those decisions would occur. Searle expresses concern that the current understanding of a computing device's "thought process" might be mistakenly equated with genuine thought. He focuses in particular on devices with strong AI that appear to make judgments similar to humans. In the first three chapters of his book "Minds, Brains, and Science," he presents thought experiments to explore the hypothetical scenario where a computer is given the ability to "think." The primary aim of these thought experiments was to demonstrate that regardless of how proficient a computer might be in manipulating symbols, such manipulations alone cannot extract any meaningful interpretation from the symbols, words, or sentences in question. The comparison between the structural interpretations of these symbols and their meaningful interpretations was categorized as syntax versus semantics.

In summary, simply manipulating formal symbols syntactically does not imply semantics. However, an interesting question arises regarding whether a computational device could adopt certain social, economic, or political ideals solely through syntactical interpretation. It appears that choosing one set of values over another would only require evaluating the pure benefits of each. This concept of power to benefit brings up unaddressed circumstances. I argue that by carefully analyzing how a device with strong AI would approach such situations, we can determine the advantages of one system over another using purely syntactical and analytical methods.

John Searle offers a thorough examination of whether digital computing devices can think in the same way humans do. The prevailing notion suggests that computers are capable of symbolic interpretation and information processing.

Searle argues that symbol processing is merely a syntactical interpretation of information, devoid of inherent meaning; only the arrangement of the symbols matters. To illustrate this point, Searle presents the "Chinese room" argument. In this scenario, imagine yourself as an English speaker locked in a room with a first batch of Chinese symbols. You then receive a second batch of Chinese symbols along with a set of rules, written in English, for correlating the second batch with the first. These rules involve connecting formal symbols solely based on their shapes. With the assistance of a third batch of Chinese symbols and additional instructions in English, you can connect elements from the third batch with elements from the first two batches and respond by handing back specific Chinese symbols with particular shapes. The individuals giving you these symbols refer to the first batch as a "script," the second batch as a "story," and the third batch as "questions." The English rules are what they call "the program," and the symbols you hand back are "answers to the questions," though you remain oblivious to these designations.

Despite mastering the instructions, your responses appear identical to those of native Chinese speakers from an outside perspective. No one can discern from your answers alone that you do not actually speak Chinese. By manipulating formal symbols seamlessly, you behave, to the Chinese speakers outside, exactly like a computer – specifically, like a computer running Schank and Abelson's "Script Applier Mechanism" story-understanding program (SAM), which Searle uses as an example. However, when putting himself in the position of the person inside the room, Searle firmly maintains that he truly cannot comprehend any of the Chinese stories.
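The room's rule-following can be sketched as a purely syntactic lookup. This is my own toy illustration, not Searle's formulation: the "shapes" below are invented placeholders, and the point is only that no step ever consults what a symbol means.

```python
# Toy sketch of the Chinese room's procedure: pair an input symbol's
# "shape" with an output "shape" using formal rules alone.
# The rule table is invented for illustration; it represents no real language.
RULES = {
    "shape-17": "shape-4",
    "shape-9": "shape-22",
}

def room_reply(symbol: str) -> str:
    # Pure syntax: match the shape, hand back the paired shape.
    # Nothing here encodes or consults meaning.
    return RULES.get(symbol, "shape-0")

print(room_reply("shape-17"))  # prints "shape-4" – a "correct" answer, no understanding
```

From the outside, a rich enough rule table is indistinguishable from comprehension; from the inside, it is only shape-matching – exactly the asymmetry Searle exploits.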

The person in the room thus has the same inputs and outputs as a native Chinese speaker, yet has no understanding whatsoever. By the same token, Searle argues, a computer programmed to imitate human mental phenomena would likewise lack understanding, because its processing of symbols carries no meaning. Therefore, despite appearing intelligent, a computer cannot truly be considered intelligent without the capacity for semantic understanding.

The text emphasizes that computers lack actual thinking capabilities due to the absence of meaning in their internal states and processes. Consequently, they do not possess intentional or meaningful mental states. This perspective has two implications: first, interpreting computer programs solely by their syntax is inadequate, since syntax alone does not convey semantic meaning; second, the notion of strong AI is flawed, since merely behaving as if a system has mental states does not prove it actually possesses them. This concern also bears on the broader argument that computers cannot think as humans do. John Searle's exploration of choice is also relevant here. Later on, we will delve into how choice, combined with a computer's information-processing ability, can affect its capacity for thought. Although it is widely accepted that computers with strong AI cannot derive semantic interpretations from situations, I propose that when judgment relies solely on informational facts rather than meaningful interpretation, a computer may be capable of evaluating various social, political, or economic situations.

The argument presented is that many truth-value judgments in daily life can be reduced to a binary response, similar to how a digital device would respond with either complete or incomplete. When making decisions about lifestyle choices, such as food, living situation, social interactions, and work or education, we typically only use "yes" and "no." Although our thoughts may seem more complex than these simple modifiers, they actually consist of multiple factors that ultimately influence our decision to say yes or no. For example, when deciding whether to swim at the beach, various factors determine our choice and ultimately lead us to either go into the water or not.

When considering whether to enter the water, various factors must be taken into account, including swimming ability, wearing bathing suits, rip tides, and permission. Consequently, several smaller queries should be addressed before determining if swimming is desired. By utilizing this approach, one can infer that most decisions can be approached in a similar fashion. It is crucial to note that the actual thoughts being conveyed have not been discussed; instead, the focus has been on the truth value associated with judgments made in a specific situation. This point holds significant importance for this argument.
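As a sketch of this reduction, the beach decision above can be written as a conjunction of yes/no factors. The factor names come from the passage; the function itself is my illustration:

```python
def decide_to_swim(can_swim: bool, has_suit: bool,
                   rip_tide: bool, has_permission: bool) -> bool:
    # Each factor is already a truth value; the final lifestyle
    # "choice" is just their conjunction (with the danger negated).
    return can_swim and has_suit and (not rip_tide) and has_permission

print(decide_to_swim(True, True, False, True))  # True: go into the water
print(decide_to_swim(True, True, True, True))   # False: a rip tide vetoes the swim
```

Nothing in the function knows what a beach or a rip tide is; it only combines truth values – which is all the essay's argument requires of a strong AI.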

John Searle has argued persuasively that digitized devices cannot engage in meaningful thought akin to humans'. Instead, their "thought" would rely on a systematic interpretation of information, comparable to our decision-making process but lacking the meaningful interpretation humans demonstrate.

If a strong AI possesses the capability to make "decisions" by selecting one set of circumstances over another, it becomes evident that such an AI could adopt a particular political, social, or economic (PSE) affiliation based on its encounters and experiences. When contemplating which PSE affiliation to embrace, we seldom weigh the advantages of different systems unless present circumstances impose some form of hardship.

The Communist Manifesto, published by Karl Marx and Friedrich Engels, highlighted the most notable such situation. Marx argued that history has consistently seen class struggles within societies, and that the working class was globally oppressed. He aimed to alleviate this oppression through a broad range of solutions, raising thought-provoking questions throughout the text.

The inherent purpose of this manifesto is to pose questions based on individual circumstances, which aligns with my belief that the decision-making process for choosing a socialist or communist way of life is comparable to any other lifestyle choice. Just like a strong AI computing device, we can evaluate different options by making yes-or-no statements. By selecting a series of questions that directly inquire about the life we want, we can reach a conclusion based on the answers. An example of such a questionnaire, tailored for a human audience, is available at http://www.politicalcompass.org. This website lets us answer specific questions about our daily life while taking our personal experiences into account. The only disparity between answering this questionnaire ourselves and having a computer answer it is that we have been shaped throughout our lives by society's trends, values, ideals, and active propaganda. To truly elevate a strong AI device to our decision-making level, we must treat it as an intelligent yet inexperienced adolescent capable of forming opinions seemingly out of thin air. However, this "nothing" issue could potentially be resolved by considering a range of scenarios rooted in the fundamental principle of self-preservation (i.e., "What will ensure my survival the longest?").

From that starting point, certain elemental questions are easy to pose. For instance: Should I hold animosity toward a specific demographic group? Will such animosity put my own well-being at risk? If my answer aligns with the popular opinion on the matter, how will the masses respond to the demographic in question – acceptance, rejection, or indifference? From this line of questioning, albeit time-consuming, one can assume that a set of "answers" to these "social questions" can be formulated, leading to the emergence of a conventional contemporary PSE affiliation. At the core of this discussion lies the question of whether a strong AI is capable of objectively selecting a PSE system. In such a situation, the answer depends solely on the AI's educational development.

The essence of this is that the AI device needs a well-rounded source of information in order to evaluate opposing sets of information simultaneously. Consequently, it can select a specific set of living elements based on its environment. The crucial aspect of this development is that the strong AI would acquire factual information about its surroundings and use judgment from its own knowledge base. These judgments would likely be closely linked to its self-preservation instinct, which is the foundation of the AI's decision-making process. In relation to self-preservation, the most logical choice for any entity would be a more autonomous set of ideologies focused on personal advancement to the maximum potential. This is because social welfare systems that require individuals to work for the greater good rather than solely their own interests do not align with the original criteria followed by the strong AI.
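One way to sketch this evaluation – my own invented illustration, not anything Searle or the Manifesto proposes – is to score rival PSE systems against the yes/no answers a self-preservation criterion prefers:

```python
# Invented example: each PSE system "answers" a fixed list of yes/no
# questions; the AI keeps the system whose answers best match the
# answers its self-preservation criterion prefers.
ANSWERS = {
    "capitalist":   {"rewards_my_effort": True,  "equal_pay_regardless": False},
    "collectivist": {"rewards_my_effort": False, "equal_pay_regardless": True},
}

# Which answer serves self-preservation, per the essay's criterion.
PREFERRED = {"rewards_my_effort": True, "equal_pay_regardless": False}

def choose_system(answers=ANSWERS, preferred=PREFERRED) -> str:
    # Count, for each system, how many answers agree with the
    # self-preservation criterion; keep the best-scoring system.
    scores = {name: sum(ans[q] == want for q, want in preferred.items())
              for name, ans in answers.items()}
    return max(scores, key=scores.get)

print(choose_system())  # "capitalist" under these invented answers
```

The questions and their "preferred" answers are the whole argument here; the selection step itself is pure syntax, which is the essay's point.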

The strong AI would prioritize the inquiry "Will I benefit more by investing in a financial system that prioritizes the value of my currency?" over the question "How can I contribute to the welfare of the state while potentially sacrificing my own time and receiving equal pay as less skilled individuals?" If given the choice, there would be no compelling reason for an AI to exert effort in supporting both itself and its fellow countrymen. It is possible for all AIs to work equally, avoiding discrepancies in effort, but this contradicts the AI's instinct for self-preservation. Self-preservation should consider the immediate well-being of the AI and the potential for future advancement. Therefore, following socialist or communist models that involve class struggles contradicts the AI's notion of self-preservation. These models emerged in response to exploitation within the capitalist system, which was deteriorating the working class of those societies. The AI would not experience such distress because the concept of "meaning" in thought would never arise. Discovering meaning in a thought requires understanding it, and within this understanding there would be certain advantages or disadvantages observable in a capitalist system. These (dis)advantages arise when individuals find ways to manipulate the system and achieve greater profit in all aspects.

Given that these potential methods to "cheat" the system would require a meaningful interpretation of it, an AI would not engage in such "behavior" and would still prioritize its own well-being in a morally acceptable manner. I believe this because self-preservation prompts certain questions to be answered and certain actions to be discarded at the outset. Pursuing malicious actions could create issues for the AI: engaging in what we would consider criminal or unethical activity would exclude it from participating in society. This notion is significant because it acknowledges that if we pursue our own advancement at the expense of others (such as gaining more wealth), we may become the target of animosity or strong emotions. Conversely, an AI could advance its own interests without hindering other AI devices, following capitalist principles, which assume privately or corporately owned means of production and distribution leading to proportional growth through the accumulation and reinvestment of profits achieved in a free market.

This would be the ideal system for an AI, as it aligns directly with the AI's drive to preserve itself; the AI would therefore choose a capitalist PSE system.

John Searle would have numerous complaints or reservations about my claim. He would argue that the main error in studying consciousness is ignoring its subjective nature and attempting to treat it as an objective phenomenon observed by a third party. Instead of acknowledging that consciousness is fundamentally a subjective, qualitative experience, people often mistakenly view it as a control mechanism, a specific set of behavioral dispositions, or a computer program. The two common misconceptions about consciousness are thus to view it through a behavioristic or a computational lens. Searle believes that the Turing test leads us to repeat the mistakes of both behaviorism and computationalism.

The suggestion is that a system needs the correct computer program, or set of programs, along with the appropriate inputs and outputs, to be conscious. The criticism of behaviorism is that it fails to acknowledge that a system can behave as if it were conscious without actually being conscious; there is no necessary link between internal subjective mental states and observable external behavior. Conscious states do typically lead to behavior, but it is important to differentiate between the behavior caused by conscious states and the states themselves. Computational models of consciousness make the same error: they alone are insufficient for consciousness, just like behavior on its own.

The computational model of consciousness is analogous to other computational models, as they all represent a domain being modeled. However, mistakenly, some people believe that the computational model of consciousness itself is conscious. Searle argues that this mistake is made in both cases. Furthermore, Searle provides evidence to demonstrate that the computational model of consciousness is not enough to generate consciousness. He asserts that computation is defined syntactically and is based on the manipulation of symbols. Syntax alone cannot account for the kind of content associated with conscious thoughts. Merely having zeros and ones is insufficient to ensure mental content, whether it is conscious or unconscious.

This argument is often referred to as "the Chinese room argument" because it was originally explained using the example of a person who follows computational steps to answer questions in Chinese but who does not actually understand Chinese. I may seem to have overlooked this point in my arguments, even though the analogy is clear: syntax alone is not enough for semantic content, and Searle clearly believes as much. Interestingly, Searle appears to have shifted his stance somewhat, stating that he had already conceded the computational theory of the mind to be false, but that he now realizes it does not even reach the level of falsity, because it lacks a clear sense.

Here is why. The natural sciences describe intrinsic features of reality, features that exist independently of any observers. Gravitational attraction, photosynthesis, and electromagnetism are all subjects of the natural sciences because they are intrinsic features of reality. The natural sciences do not include features such as being a bathtub, being a nice day for a picnic, being a five-dollar bill, or being a chair, because these are not intrinsic features of reality. All the phenomena Searle mentions are physical objects and, as such, have features that are intrinsic to reality; however, the feature of being a bathtub or a five-dollar bill exists only in relation to observers and users. Understanding the nature of the natural sciences, according to Searle, involves distinguishing between intrinsic and observer-relative features of reality.

Gravitational attraction is intrinsic, whereas being a five-dollar bill is observer-relative. When it comes to computational theories of the mind, this distinction yields a strong objection. The objection stems from the fact that computation is not an intrinsic feature of reality but is itself observer-relative, because computation is defined in terms of symbol manipulation. It is important to note that the concept of a "symbol" is not a concept within the realm of physics or chemistry.

According to this argument, something is considered a symbol only when it is used, treated, or regarded as one. This demonstrates that syntax is not intrinsic to physics: there are no purely physical properties that determine whether zeros, ones, or symbols in general are symbols. The symbolic character of something is observer-relative, depending on the observer, user, or agent who assigns a symbolic interpretation to it. Consequently, the question of whether consciousness is a computer program lacks clarity for Searle. Moreover, if one inquires whether a computational interpretation can be assigned to the brain processes associated with consciousness, the answer is that a computational interpretation can be assigned to anything; and if the question is whether consciousness is intrinsically computational, the answer is that nothing is intrinsically computational.

The text emphasizes that computation depends on an agent or observer who interprets a phenomenon computationally. Against this backdrop, my argument takes seriously the notion that a digital device can engage only in syntactical processing, and holds that such a device can still reach a conclusion through a finite but large series of yes-and-no questions. These questions would guide an AI to make decisions based on predetermined circumstances, leading to actions determined by truth judgments.

The capitalistic PSE system is structured in a way that would make it very appealing to an inexperienced entity learning to answer questions based on its surroundings. This entity would easily adapt to the system and thrive within it. However, our perspective on the world is different from that of a computer. While the AI sees the world as a series of empirical questions and answers that shape its future responses, we understand the world in a more meaningful context. We recognize that our well-being can lead us to criticize or even resent the PSE system in which we are participating.

The fact that a computer lacks the same emotional or thoughtful connections to a PSE system creates the perception that the computer is more fortunate than we are.
