The second theme concerns the distinct ways in which a computer and a brain are structured and function. A computer computes efficiently precisely because its physical set-up is kept separate from its logical set-up; the brain admits no such separation. Empirically, at every level examined (cellular, molecular), it proves impossible to detach a model, algorithm, or representation of the brain from its physical implementational substrate.
When program and implementation are indivisible and therefore disrupt each other, a dualistic perspective becomes untenable. As empiricism compels us, we adopt a monistic standpoint in which the brain/mind is neither embodied by nor embedded in physical reality, but indistinguishable from it. This standpoint carries consequences for both science and society, and I intend to examine these implications from a pessimistic standpoint by critiquing some of the anticipated futures popularized in our millennial culture.
Keywords: artificial intelligence; neuroscience; cyclic systems; dualism; science fiction.

The 'Net-heads' will encounter the 'Worldbots' on their way. These digital mechanical life-forms will initially assist humans with mundane tasks, but will soon surpass human intelligence and perhaps evolve into 'spiritual machines'. The extent to which they will exhibit selfless altruism and benevolence towards humans remains uncertain; ideally, they will continue to serve us, functioning as digital Bodhisattvas.
In the cyberworld, boundaries between individuals will dissolve and transhuman life-forms will emerge, much as multicellular life once emerged in the ocean. These life-forms will be implanted into robot spaceships and explore space, just as the first amphibious fish ventured onto land. After potential galactic wars, the universe will transform into one enormous Internet, in which matter everywhere becomes part of computational living. This ultimate state is called the Omega Point. Interestingly, unlike joining the Dark Side, the Omega Point is expected to be benevolent: it will use game-theoretic reasoning to resurrect everyone who ever lived and fulfil their deepest desires. Tipler (1995) refers to this state as the Judeo-Christian heaven. Other authors, such as Gibson (1986), Moravec (1990), and Kurzweil (1999), are also drawn on in constructing this version of future history. This paper presents an overview of the recent history, current status, and future prospects of artificial intelligence (AI) and neuroscience.
In what follows, I will discuss the social motivations of the fields concerned and their potential impact on society. Let us begin with the Millennium, a significant social phenomenon that greatly influences people's perceptions of the future of science. It is worth asking what impressions someone in the year 2000 might have gathered from popular science books, science fiction novels and films, and even from the science sections of newspapers. Despite the inherent biases of these sources, one can imagine that this person would envision a future like the one portrayed above.
Nano-robots will ensure everlasting life by carrying out molecular repairs in our bodies. Advanced drugs, evolved descendants of Prozac and Ecstasy, will cure emotional disorders, resolve societal issues, and bring happiness to everyone. This, at least, is for the few who still prefer to live in their primitive biological state. The more technology-conscious will have transferred their consciousness into the Internet, living like characters from William Gibson's novels within a worldwide computer network that offers unimaginable entertainment for all users.
The demographic move to the 'Net' will solve many global problems, among them population, food, transportation, and energy. These developments result from the merging of the digital and organic worlds, on whose threshold we currently stand; cellphones and laptop computers are merely the first signs of what might be called the bio-informational age. This vision of the future blends seamlessly with elements of New Age philosophy.

The fields of artificial intelligence and neuroscience have been reshaped by statistics and signal processing, making them exciting fields to work in. Symbolic AI has been challenged by the shift to statistical learning theories, as well as by the emergence of artificial life and behaviour-based robotics.
Artificial life, or 'alife', operates on the belief that intelligence can be achieved through a simulated life process: a living system and its environment are simulated simultaneously, often using genetic algorithms and population dynamics to mimic evolution. Behaviour-based robotics, by contrast, addresses the interaction between a robot's perception and motion in a real environment, rejecting both the simulated worlds of alife and mainstream AI's concept of representing the world.
The 'agents' literature, inspired by Gibson (1979), whose debate with Marr (1982) is recounted in Bruce & Green (1990), explores the idea of complex behaviour emerging from simple mechanisms interacting closely with a complex environment. Marr, by contrast, emphasized the feedforward computation of structural descriptions of the world, the approach carried forward by neural networks and statistical machine learning with their mathematics. Assessing progress and methodology across these fields is consequently difficult.
It is, however, extremely rare to find neural networks that learn both sensory perceptions and motor actions in an environment. The difficulty is that of constructing a statistical model of an environment when the system's perceptions are translated into actions that in turn affect the input statistics. There is also the question of what such an acting system should do. For a feed-forward perceptual system, the goal is clear: construct a probability distribution over sensory events.
The concealed symmetries of this distribution, its dependencies and redundancies, represent the underlying structure of the world. In the cyclic scenario, however, where the system plays a part in shaping the world, the structure of the distribution is partly determined by the system's own actions: the system has some control over which symmetries exist, undermining the notion of a hidden set of preferred symmetries. One might call this post-modernism for statisticians.
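To make 'dependencies and redundancies' concrete, here is a minimal sketch, not from the original paper, that estimates the mutual information between two simulated sensory channels from samples; the shared-source model, noise level, and bin count are all illustrative assumptions. A nonzero value is precisely the kind of hidden symmetry in the input distribution that an unsupervised feed-forward system can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(x, y, bins=20):
    """Histogram estimate of I(X; Y) in bits; near zero when the joint
    distribution factorizes, i.e. when no dependency links x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Two channels watching the same hidden source, versus two independent ones.
n = 50_000
source = rng.normal(size=n)
x = source + 0.5 * rng.normal(size=n)
y = source + 0.5 * rng.normal(size=n)
print("shared-source channels:", round(mutual_information(x, y), 3), "bits")
print("independent channels  :",
      round(mutual_information(rng.normal(size=n), rng.normal(size=n)), 3),
      "bits")
```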
At this point, most people would abandon informational, or unsupervised, goals and turn to one of the various specific goals that a robot system might have, such as finding food or recharging the batteries. While these goals are undoubtedly important, they feel arbitrary and leave us uneasy: we experience a multitude of goals in our own lives and want something more consistent to drive action selection.

3. THE CURRENT JOB OF SCIENCE

It is an exciting image, but will any of it actually become reality?
If none of these scenarios materializes, it would help if science could explain why, so that we can get on with pursuing our actual future. The predicament for science is that the idea of a bio-informational future, with its emphasis on merging humans and technology, prompts inquiries about personal identity, consciousness, the human mind, and artificial intelligence: precisely the questions that science has struggled most to address. Yet AI and neuroscience are the fields that come closest to exploring these questions within engineering and biology.
Scratch the surface of many AI researchers and neuroscientists (perhaps quite rigorously) and you may find someone who started off by asking 'What are we?'. The possible answers are not numerous. Either we are machines, in which case AI should be possible and neuroscience should be able to work out the algorithm (or algorithms) that the brain is running; or we are something else, in which case both projects will fail in their ultimate goals, which is not to say they will not achieve great things along the way. One of the great things they might achieve is an exact picture of their own limits. Either way, by examining the history and current state of AI and neuroscience, and by identifying the issues beneath the surface of these fields, we may gather some sense of the important themes playing along science's internal frontier (disregarding for now how different this frontier looks from outside).

4. HISTORY AND STATE OF ARTIFICIAL INTELLIGENCE

AI's ultimate purpose is to build a robot that lives in the world with a computer for a brain, embodying the belief that the mind can be captured in digital computation.
During the 1960s, the first attempts to produce AI used various quasi-logical languages to input facts and rules into machines. This approach lost popularity in the 1980s owing to the non-robust nature of rule-based systems, which struggled to adapt to minor changes in circumstances, and because humans had to program every fact by hand. A growing belief took hold that 'subsymbolic' systems capable of learning from observed data were necessary, a perspective influenced by the cybernetics of the 1950s.
One short step in this shift to statistical theories was the advent of neural networks (Haykin 1999). The neural network boom began around 1984 (Rumelhart & McClelland 1986) and continues to this day. Neural networks attracted students and military funders through their interdisciplinary nature and willingness to explore new ideas, which sometimes frustrated the disciplines they interacted with. As the field became more rigorous, it reconnected with mainstream AI through their shared interest in statistical machine learning.
Technically speaking, the field of neural networks is devoid of content: its empirical side is neuroscience, its theoretical side is statistics.

5. QUESTIONS CURRENTLY LATENT IN ARTIFICIAL INTELLIGENCE

Here we have identified two underlying questions surrounding today's pluralistic AI. The first question asks, to put it differently, why there is no mathematical theory of the perception-action cycle. Of course, there is research on active perception and on sensorimotor coordinate systems, and engineering-department robotics abounds with mathematics.
What is lacking is a theory that applies universally to cyclic systems, in the way that Shannon's information theory describes communications channels, or feed-forward systems generally. There, the goal is to maximize the channel capacity by identifying hidden symmetries in the input's probability distribution; in my preferred corner of neural networks, unsupervised learning, this objective is fundamental (Hinton & Sejnowski 1999). Implicit in this discussion is the question of what we would want from a system that goes beyond Shannon's theory.
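As a concrete instance of this capacity-maximizing, symmetry-finding objective, the sketch below implements the infomax ICA rule of Bell & Sejnowski (1995) in its natural-gradient form: a feed-forward net adjusts its weights to maximize the entropy of its sigmoidal outputs, and in doing so recovers the independent sources hidden in its input. The mixing matrix, learning rate, and iteration count are illustrative choices, not values from any paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, heavy-tailed (super-Gaussian) sources, linearly mixed
# by a matrix A that the learner never sees.
n = 10_000
s = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # hypothetical mixing matrix
x = A @ s                                     # observed 2-channel signal

W = np.eye(2)                                 # unmixing matrix to be learned
eta = 0.02
for _ in range(2000):
    u = W @ x                                 # candidate source estimates
    y = 1.0 / (1.0 + np.exp(-u))              # sigmoidal channel outputs
    # Natural-gradient infomax step: climb the entropy of y, i.e. the
    # information the outputs convey about the inputs.
    W += eta * (np.eye(2) + (1.0 - 2.0 * y) @ u.T / n) @ W

# Success means W @ A is close to a scaled permutation matrix.
print(np.round(W @ A, 2))
```

Notice that the objective is defined entirely by the feed-forward direction; nothing in it says what to do once the outputs loop back and reshape the inputs.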
For a perception-action cycle system, what is the optimal quantity to maximize, analogous to the capacity of a feed-forward channel? The second question was posed by Penrose (1989), and the hostility and controversy it caused among AI researchers revealed a vulnerability in their understanding. Penrose asked whether the physical foundation of the world, described by relativity and quantum mechanics, differs so significantly from the digital foundation of computers that AI becomes impossible. Is there an essential quantum aspect required for consciousness?
Philosophers lampooned Penrose's stance as 'we don't understand quantum mechanics and we don't understand consciousness, hence they must be the same'. The ridicule intensified when Penrose, seeking to make his hypothesis more specific, suggested together with Stuart Hameroff that quantum consciousness reveals itself through coherent quantum effects within microtubules, the protein networks forming the cell's structural framework. Yet the mockery of these proposals, which are not essential to his argument, may overlook the validity of Penrose's broader skepticism towards computers: that they are extraordinary artefacts by virtue of their determinism, their discrete time, and their discrete states.
At the digital level, the entire state of the machine can be documented; no natural object possesses this property. The computer is essentially a physical embodiment of a model. We know that a model can compute, but can it possess consciousness or thought? Functionalism, the philosophy of AI, rests on the computer metaphor for the mind: the brain is the hardware implementation of a 'mental program'. Penrose's arguments were designed to raise doubts about this distinction between physical and mental processes. Can the brain really be separated from some finite, describable mental process that is supposedly being executed on it?
Since René Descartes, our language has carried this conceptual separation; the question is whether it truly exists scientifically. Either there is a physical level at which the separation occurs, analogous to the logic-gate level in computers, or functionalists must acknowledge that the brain is not a machine. Even if no 'logic-gate level' can be found halfway up the brain's reductionist hierarchy, functionalists can still argue that if a computer exists at the bottom, AI remains possible, if only with a computer equipped with the resources of the entire universe.
The idea of the 'universe-as-computer' is well known and debated in physics: it involves searching for a finite discrete process, such as a cellular automaton, that could underlie the known laws of physics. The concept still lacks evidence, and it may be wiser to adopt R. P. Feynman's perspective. Feynman observed that Turing machines cannot efficiently simulate quantum processes, an observation from which the still-enigmatic field of quantum computing emerged. Luckily, scientists can sometimes find answers without settling such philosophical speculations.
They can ask Nature directly. This, then, is an opportune moment to examine the history and current status of neuroscience, the field dedicated to empirically describing brain processes.

6. HISTORY AND STATE OF NEUROSCIENCE

The significant milestones of post-war neuroscience include the Nobel Prize-winning research of Hubel & Wiesel (1968), who studied the receptive fields of monkey visual cortical cells, and of Hodgkin & Huxley (1952), who revealed the mechanism and mathematics of neural spiking.
The Society for Neuroscience meeting in the USA attracts some 30,000 participants every year. The field's division is evident in those two early Nobel prizes, which represent work at the cellular and subcellular levels respectively. In the 1970s and 1980s, subcellular neurobiology advanced greatly on the strength of the molecular biology revolution, producing a strong emphasis on empirical research, close ties with mainstream cellular, molecular, and developmental biology, and numerous discoveries.
The brain's molecular dance, involving a wide range of ion channels, neurotransmitters, and neuromodulators, intricately shapes neural response properties and regulates communication between neurons; it also encompasses the chemistry of photon absorption by photoreceptors and of muscle contraction. Although these molecular actions resemble those in other living cells, in the brain they uniquely give rise to experiences, thoughts, and actions.
At the level of the spiking neuron and above, the situation differed. Lacking molecular biology's formal structural basis, neuron-level neuroscience focused on spike trains as the signals carrying neural information; the discreteness of spikes as units of information invited comparison with the genetic code. Early attempts to crack this 'neural code' were revived in the 1990s by Bialek and colleagues (Rieke et al. 1997). Note that such attempts characterize neurons as feed-forward information channels. There is a belief in the usefulness of the neuron level not only as a descriptive level but as a 'computing level' which molecular and biophysical processes conspire to support. Does the goop seen in electron micrographs exist merely to support 'the spiking computer'? This question parallels the functionalist debate in AI, and I return to it in section 7(c), after discussing the issue of cycles in neuroscience.
In principle, we can measure the joint probability distribution p(X, Y) by observing X and Y under normal operating conditions. We would observe a peak in the distribution at equilibrium, and probability mass along the trajectories corresponding to the typical dynamics of the variables.

When attempting to determine whether X controls Y, however, experiments typically measure the conditional distribution p(Y|X) and construct the joint distribution by the formula p(X, Y) = p(Y|X) p(X).
This approach fails to determine the natural p(X, Y) because we, rather than the system, are supplying p(X). We have effectively cut the system open at X and imposed a direction of dependency (X → Y) through our choice of independent and dependent variables, implying a direction of causality that does not exist in natural operation. Such experiments can nonetheless be valuable for analyzing dynamic cyclic behaviour.
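A toy simulation makes the objection concrete. The two-variable loop below, whose coefficients and noise levels are invented purely for illustration, is sampled once in free-running, closed-loop operation, and once under a 'clamp' protocol in which the experimenter imposes p(X) and severs the feedback from Y to X. The two reconstructions of p(X, Y) disagree sharply.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feedback cycle: X excites Y, Y inhibits X (coefficients invented).
def step(x, y):
    x_new = 0.9 * x - 0.4 * y + 0.1 * rng.normal()
    y_new = 0.9 * y + 0.3 * x + 0.1 * rng.normal()
    return x_new, y_new

# (a) Natural operation: sample p(X, Y) from the freely running loop.
x = y = 0.0
natural = []
for t in range(50_000):
    x, y = step(x, y)
    if t > 100:                     # discard the initial transient
        natural.append((x, y))
natural = np.array(natural)

# (b) Clamp protocol: impose p(X), hold X fixed while Y settles, and
# reconstruct p(X, Y) = p(Y|X) p(X) with the loop cut open at X.
clamped = []
for _ in range(5_000):
    xc = rng.normal()               # the experimenter's choice of p(X)
    yc = 0.0
    for _ in range(100):            # Y responds; its feedback to X is severed
        yc = 0.9 * yc + 0.3 * xc + 0.1 * rng.normal()
    clamped.append((xc, yc))
clamped = np.array(clamped)

print("natural corr(X, Y):", round(float(np.corrcoef(natural.T)[0, 1]), 2))
print("clamped corr(X, Y):", round(float(np.corrcoef(clamped.T)[0, 1]), 2))
```

The clamped estimate faithfully reports the feed-forward X → Y arm, yet says nothing about the joint behaviour the intact loop actually produces; this is the same logic, scaled down, as the clamp experiments discussed next.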
Voltage- and current-clamping techniques permit the identification of ion-channel kinetics, but it is important to recognize that in such experiments the clamped cell gives only a static snapshot of the actual process. This recognition is increasingly overlooked as the feedback loops grow wider and biology becomes more intertwined with technology. Examples include the routine use of drugs to regulate serotonin levels as a treatment for depression, attempts to control ecosystems by introducing new species, and the application of industrial models to agriculture.
Those who study metabolism or ecosystems know how prevalent cycles are. Yet in medicine and planet management, a causal, feed-forward style of thinking is what gets presented to the public and to commercial entities; whatever fails to fit this model is dismissed as a 'side-effect', to be eliminated if possible. Side-effects, however, are nature's reminder that all processes are cyclic.
Here I want to discuss the role of the genome, biology's supposed master control node. Though somewhat removed from AI and neuroscience, the idea that the genome is the cause behind animal behaviour and intelligence is widely accepted in our culture. Granting the genome special status, without considering feedback cycles, promotes a mysticism akin to that of the Anglican bishops who debated T. H. Huxley; when science became the authority on human origins, it was a change of government without a change in policy. This special status lets evolutionary psychology go unchecked and draw incorrect conclusions from biology. The most important biological feedback loop is the genome's interaction with other genomes through populations of phenotypes.
In neuroscience, the same issue with cycles arises as in AI, except that where AI has one principal cycle, the perception-action cycle, in neuroscience cycles are found everywhere. Interestingly, the most visible narratives in neuroscience resemble feed-forward systems on first examination. One example is the synapse: when a spike reaches the presynaptic bouton, neurotransmitter vesicles are released, inducing the opening of ion channels at the postsynaptic site and altering the postsynaptic electrical potential.
Another is the early visual system (retina, thalamus, and early visual cortex), considered as a feed-forward channel. This perspective has allowed information-theoretic learning models to predict, with qualitative success, the static and dynamic cortical receptive fields observed by Hubel & Wiesel (1968), despite ignoring the corticothalamic and corticocortical feedback (Bell & Sejnowski 1997; van Hateren & van der Schaaf 1998).
Nevertheless, pure feed-forward processing is not common in the nervous system, and what appears feed-forward often involves complex feedback at another level of analysis. Recent research has shown that a cortical neuron's spikes can propagate back far into the dendritic tree, influencing, through voltage-dependent channels, how synaptic inputs are integrated. This undermines the picture of the neuron as a directional 'neural network' unit that simply sums its weighted input signals.
Feedback operates at the largest scale too: the brain controls gaze direction, which determines what the retina sees. And while neurotransmitter cannot travel backwards across the synapse in most neurons, numerous other molecular signals can, as the extensive and controversial search for Hebbian learning mechanisms in long-term potentiation has revealed. In simple terms, the absence of a theory of cycles in biology can be traced to the structure of experiments: a variable X is changed and another variable Y is monitored, and only the relatively rare cases in which a correlation between X and Y is observed get published.
The temptation, in short, is to conclude that 'X controls Y' and to use this relation to build a feed-forward model of neural information processing (or, if X is a chemical, to promote it as a drug for managing Y). But in nature events unfold differently than in experiments: X may rise, causing Y to rise, but elevated Y then typically drives X back down, either directly or through other variables Z. Such loops of positive and negative feedback occur universally in biology, producing equilibrium values for X and Y, as well as stereotyped dynamic behaviour.
The neural spike is an example of such a transient dynamic arising from positive and negative feedback, with X the sodium current and Y the potassium current. In the terminology of probability theory, this is how the relationship between X and Y unfolds in nature, as captured by the joint distribution p(X, Y) discussed above. These feedback cycles appear in evolution just as in the molecular regulation loops within cells: cooperation (or symbiosis; Margulis & Sagan 1995) corresponds to the positive feedback loop, and competition for resources to the negative.
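Returning to the spike: a minimal sketch of this interplay is the FitzHugh-Nagumo model, a standard two-variable caricature of Hodgkin & Huxley (1952); the equations and parameter values below are textbook defaults, not taken from this paper. The fast variable v plays the self-exciting, sodium-like role and the slow variable w the restoring, potassium-like role; their feedback produces the spike as a transient.

```python
import numpy as np

def fitzhugh_nagumo(i_ext=0.5, dt=0.01, steps=10_000):
    """Euler integration of the FitzHugh-Nagumo equations."""
    v, w = -1.0, -0.5        # fast (sodium-like) and slow (potassium-like)
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + i_ext    # fast positive feedback on v
        dw = 0.08 * (v + 0.7 - 0.8 * w)    # slow negative feedback via w
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return np.array(trace)

trace = fitzhugh_nagumo()
upcrossings = int(np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0)))
print("spikes in 100 time units:", upcrossings)
```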
Neo-Darwinists describe cooperative behaviour as 'selfish altruism', a reciprocal exchange (I'll scratch your back if you scratch mine). By the same logic one might describe competition for resources as 'selfless greed' (I'll eat you, but it's nothing personal). You may find both descriptions absurd, or else use the latter as a counterweight to the cultural dominance of the former.
The main point is that competition and cooperation are of equal significance, and that 'natural selection' is better understood as one sophisticated molecular regulation loop among others, operating through phenotypic success. Moreover, neo-Darwinists overlook the fact that the genome does not unilaterally control the phenotype, let alone specific behaviours: they falsely regard DNA as the definitive code for an organism.
Some authors find a certain attraction in portraying organisms as puppets controlled by their genes. Many have criticized this notion, particularly its social and behavioural versions; I would rather challenge it at its strongest point than at its weakest.
The central dogma of molecular biology is incorrect! Although DNA codes for amino acids, the assembly of those amino acids into functional proteins, and the selection of which stretches of DNA are read at all, are themselves governed by proteins and depend on the cell's state and type. Picture a town in which a central library is the genome and the proteins are people who read and copy sections of it, using the knowledge to alter and improve the town. Neither the townsfolk nor the library controls this process. And where did the people come from? If 'genes make proteins', they came from the library; in reality, they were always there.
The networks of enzymes that operate on DNA were already in place in the salty water of the egg cell from which you grew: the latest generation in a continuous lineage reaching back to an ancestral droplet of seawater enclosed in a lipid membrane, with a fortuitous set of amino acids inside.
The 'origin of life' field poses a hard problem for biology, and claims about it remain unverified. But for the debate around 'genes make proteins', what matters is whether RNA (the code) or proteins came first in the first protocells (De Duve 1991). Two considerations apply: (i) amino-acid chains are easier to form than nucleic-acid chains, and (ii) it is more plausible that the first people wrote the first books than that the first books wrote the first people.
Neo-Darwinists thus share with New Testament theologians the belief that 'in the beginning was the word (logos)'. It is currently proposed that ribozymes (RNA with catalytic abilities) played a role, though whether a metabolism was developing a code or a code was evolving a metabolism remains uncertain. The outcome of this debate matters little here; the point is to challenge the notion of DNA controlling the phenotype. On the alternative view, it is the phenotype that determines what information is extracted from the genes, and how it is used.
The reality is that the organism and its genes are locked in an unending cycle. If the organism spends its afternoon at a physical library instead of attempting to reproduce, it will certainly alter its pattern of gene expression. This argument reinforces our earlier criticism of linear thinking in AI and neuroscience. To revisit the second theme raised in discussing AI: section 5 ended with an analysis of levels within a system, and of functionalism.
There, the functionalist was challenged to study the brain empirically and determine whether there is a level at which its processes can be written down, as they can at the logic-gate level of computers. The neuron level is the natural candidate. If we recorded the spike trains of all neurons, would that suffice to describe neural computation? Do molecular and biophysical processes merely support a spiking computer at the neuron level? In my view, the answer to both questions is no.
Unlike a computer, which runs autonomously except when something breaks, the neurons of the brain depend on molecular and biophysical processes at the subneural level to set their sensitivity, produce their spikes and spike patterns, and form their synapses.
In addition, transneural volume effects have been observed, such as local electric fields and the diffusion of nitric oxide across cell membranes, which influence coherent neural firing and the supply of energy (blood flow) to cells, the latter being directly related to neural activity. Many more examples could be cited. Any researcher who seriously and honestly studies neuromodulators, ion channels, or synaptic mechanisms will, I believe, find it necessary to reject the idea of the neuron level as a separate computing level, even while it remains a valuable descriptive level.
A physicist or neural-network theorist in search of a simple theory might argue that the molecular level does not matter. In many cases, though, this viewpoint is driven by prejudice and propped up by laziness and ignorance; it could equally be that in neuroscience the molecular level matters more. In reality, it does, and worse: there are submolecular interferences that defy the notion of separate levels even within the 'molecular machine', and these interferences are quantum effects.
Two examples of significant quantum coherences are electron transfer during photosynthesis and the energetics of enzyme interactions (Welch 1986); in both cases the coherences are essential to explaining the efficiency of the reactions. But one need not even invoke quantum effects. Proteins do not end at the boundaries of the black and red balls of ball-and-stick molecular models: their electric fields reach out into the surrounding water molecules, aligning them into what is known as structured water. Structured water plays a crucial role in how enzyme reactions occur and in how ion channels select the ions they pass.
It would be absurd to claim that one patch of structured water or one instance of quantum coherence is a crucial element in an accurate description of how the brain works. But if molecules in every cell systematically derive functionality from these submolecular processes, and if such processes are exploited throughout the brain to reflect, record, and propagate the spatio-temporal correlations of molecular fluctuations, to amplify or reduce...