In what sense, if any, can a machine be said to ‘know’ something? How can anyone believe that a machine can think?
May 1997 marked, to some, a monumental achievement of mankind; to others, a failure to defend the “dignity of humanity”1; and to others still, nothing much at all. That month, the supercomputer “Deep Blue”, designed by IBM, defeated Garry Kasparov, the reigning world chess champion, at his own game. Many believed that this event demonstrated the development of increasingly authentic artificial intelligence, and that artificial intelligence rivaling the intelligence and thought of the human mind was only a matter of time. This sparked a series of old questions anew: how can a machine be said to have knowledge? How can a machine have thought? A man-made machine can ‘think’ and ‘know’, as long as it is able to reproduce the interactions necessary for thought, but its ability to do so will be limited by the abilities of its creator.
Knowledge, simply put, is a belief or claim justified by logical analysis of sufficient empirical evidence gathered through perception. To think is to draw logical inferences through reason and judgment – in other words, to execute a finite sequence of logical extrapolations in order to arrive at conclusions. Knowledge is, in one way or another, the end product of perception run through a function we call thought, or the act of thinking. So for a machine to have knowledge, to ‘know’, it must first be able to ‘think’.
For a machine to ‘think’, and so to have ‘knowledge’, it needs the ability to reason, which takes the form of numerous algorithms, as in “Deep Blue”. This machine has the ability to think because it possesses algorithms that analyze the data it receives in order to arrive at conclusions – to ‘know’. The most obvious way the computer displayed this ability was by analyzing the chess game at a rate of up to two hundred million moves per second, each analysis shortly followed by a conclusion: the position to which it decided to move its piece. The winner of the match is somewhat irrelevant, because winning was not necessary for “Deep Blue” to show its ability to reason. The event was a display of thought and knowledge by machines because, even though the algorithms in “Deep Blue”’s programming were primitive, they were still algorithms designed to analyze logically – which is essentially what thought is. The fact that “Deep Blue” won only means that its algorithm was relatively complex, even when compared to the human mind.
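The kind of reasoning described above can be sketched in code. Deep Blue’s actual program relied on specialized hardware and a far more elaborate evaluation, but chess engines of its era were built around minimax search: examine possible moves, assume the opponent replies with their own best move, and choose the line with the best guaranteed outcome. The toy “game tree” below is a hypothetical illustration of that idea, not Deep Blue’s implementation; the leaf numbers stand in for the scores a position evaluator would assign.

```python
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best score guaranteed from `node` with optimal play.

    `node` is either a number (a leaf position's score) or a list of
    child nodes (the positions reachable in one move). Alpha-beta
    pruning skips branches the opponent would never permit.
    """
    if isinstance(node, (int, float)):  # leaf: the position's value
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:  # prune: opponent will avoid this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Example: the maximizing player picks a branch, then the minimizing
# opponent picks a leaf within it.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3
```

Here the machine “concludes” that the first branch is best: it guarantees a score of 3, whereas the opponent would hold the second branch to 2. The analysis-then-conclusion loop the essay describes is exactly this search run to a fixed depth, millions of times per second.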
Physically, there is nothing stopping computers such as “Deep Blue” from eventually being designed to rival the human ability to ‘think’. At the microscopic level, the human mind consists of countless neurotransmitters that carry information and commands, the end product being an extremely complex set of finite, ongoing procedures, which we define as sentience, or thought. On an even smaller scale, it is merely a collection of simple particles interacting, all obeying the same set of laws. If this is true, there is nothing intrinsically different between a machine and the human mind – only the sequence in which the particles are arranged differs. So in essence, thought is a strictly structured interaction between matter, and nothing in physical law makes artificial intelligence impossible. If the act of thinking, and the knowledge derived from it, can be reduced to interactions among particles (albeit very complex ones), then for a machine to ‘know’, and to have thought and sentience as humans do, it would need to be constructed to recreate these particle interactions.
In terms of the natural sciences, recreating the interactions necessary to simulate human sentience is possible. However, once the mechanics of artificially recreating the human mind have been defined, an abstract and conceptual problem remains, one dealing more with the human sciences, human will, and the nature of logic. In order to create a machine that recreates human sentience, we would first have to understand every algorithm in our brain. That is impossible, because it would imply that we could flawlessly predict what the human mind will do in any given circumstance. Such a notion is absurd, because it would also imply flawless self-predictability, and if one ‘flawlessly predicts’ that one will do a certain thing, nothing prevents one from defying this ‘flawless prediction’. In other words, we cannot predict human behavior in such a deterministic manner, and therefore we cannot truly grasp the algorithms of the human mind. There is no physical impossibility in creating a machine that can think and know as a human does, but humans will not be its creator; that would require another, superior being with a more refined mental ability than ours.
“Deep Blue” is a prime example of a machine’s ability to reason. Even though its algorithms are simpler than those of humans, and consequently require more brawn2 (a more complex algorithm would be a ‘shortcut’ to the conclusion), they are nevertheless, in an abstract sense, equivalent to our act of ‘thinking’. Since machines can think, however primitively, they are also able to ‘know’, since they can infer logically by analyzing perception (in their case, input information).