Questions of artificial intelligence and its capabilities become important considerations for answering the ultimate question of what thought truly is.
Computerized calculation is one of the few processes somewhat analogous to human cognition, so the extension of this technology to more advanced future applications makes it a fertile testing ground for questions about consciousness. If one concludes that the advancement from cash registers to present-day computers is a step closer to human thought, then we must concede that progressing technology will bring us closer to, and perhaps to the very point of, true cognitive skill. The dilemma left to philosophers and scientists is to determine when a machine has reached the point of thought, or at least to create a rough guideline. A. M. Turing proposed a test to solve this problem. Named, appropriately enough, the Turing test, it centers on a controversial method of testing called the imitation game.
The idea is to place one man and one woman in two rooms and have them questioned by an interrogator in a third room. The man tries to answer questions in a way that suggests he is a woman; the woman attempts to answer in a way that reveals the truth of the matter. If the man fools the interrogator, it is said that he can think like a woman, or, at the very least, mimic a woman's responses. The game can also be played with a computer in the man's place, trying to convince the interrogator that it is human. It stands to reason that if a computer could pass this test, it could think like a human, or at least mimic one.
Perhaps the abilities showcased in the test alone would not be sufficient, but Daniel C. Dennett claims that “the assumption Turing was prepared to make was that nothing could possibly pass the Turing test by winning the Imitation Game without being able to perform indefinitely many other clearly intelligent actions” (Dennett 93). One often cited criticism of this notion is the idea of mimicry.
Imagine a program that stored an almost infinite amount of information regarding sentences and grammar and was able to produce contextually appropriate sentences in response to a wide variety of inquiries. The computer has no knowledge of what the information means; it is acting much as a parrot does. Luckily for Turing, there is no shortage of responses to this claim. First of all, as Douglas Hofstadter points out, “the number of sentences you’d need to store to be able to respond in a normal way to all possible sentences in a conversation is astronomical, really unimaginable” (Hofstadter 92). The computer would also have to contain a complex microprocessor to keep up with conversation in a timely and manageable fashion. It would have to be so advanced, indeed, that such a microprocessor might be considered a small-scale brain, sorting through symbols and their meanings to form contextually valid responses. Accordingly, if such a machine existed, it would pass the Turing test and validate the method of testing at the same time.
If a machine were capable of mastering the context-sensitive language we use, it may very well have a claim to true thought. At the very least, the computer would surpass mimicry and be labeled a simulation. Human thought is so complicated and demanding that any device that attempts to duplicate it with any success would have to be a highly sensitive simulation. Any machine that passes the Turing test must have a rudimentary “knowledge” of the information it is using and is therefore more than a parrot.
Assuming this is true, we must then ask hard questions about the value of simulation. The critical claim is that any simulation is just a simulation and not a real example of what it is simulating. Hofstadter finds this fallacious, as do I. First, any simulation can reasonably be defined in this context as the recreation of a natural event by an agent other than nature. This view brings up the idea of levels in simulation.
A good example is Dennett’s simulated hurricane in Brainstorms. From the programmer’s vantage point, the God spot, the simulation can of course be easily identified as such. On the level of the simulation itself, however, no such preordained order can be seen. Perhaps if we all had the vantage point of nature, we would see the entire physical universe as a vast simulation created by natural forces.
Ultimately, it would seem unfair to discriminate between two like events on the basis of what agent set them into motion. We are still left with the largest concern, however: what does the Imitation Game really prove? As far as I can tell, the Imitation Game proves nothing at all, yet it does not have to. As pointed out at the beginning of this investigation, the job of the philosopher/scientist is to create a guideline for judging the relative intelligence of machines.
Some critics say that the Imitation Game played with humans lends no insight into how the man thinks. They say the test will never prove the man can think like a woman. Even if this is true, it does not invalidate the test as applied to machines. The cognitive abilities of men and women are so close in nature that the test may indeed lend no valuable information. With a machine, however, the cognitive differences from a human can be seen easily. The Turing test may not lay down a definite line for thought, but it is valuable for relative evaluations. For example, if one machine performs almost perfectly on the test, and another performs badly, one could conclude that the first machine is closer to human thought than its failing counterpart. What the test cannot do, however, is tell us how close the better machine is to thought.
The identity of the computer as conscious cannot be proved. Kishan Ballal points out that “we intuitively feel that personal identity is the paradigm for all other judgments of identity, even though personal identity cannot be justified through purely rational means” (Ballal 86). The sad truth is that at present there is no way to establish conscious identity other than asking the entity and hoping it doesn’t lie. G.W.F. Hegel supports this theory of conscious identity, commenting that “the self-contained and self-sufficient reality which is at once aware of being actual in the form of consciousness and presents itself to itself, is Spirit” (Hegel 637).
In the Hegelian view, the computer is the only one with the correct insight to determine whether it is conscious. Could this possibly suggest that the only accurate Turing test is one a computer runs on itself? Through self-inspection, or self-interrogation if you will, the computer may be able to draw conclusions about its own condition. While Hegel never saw the computer in any form, even he realized the limits of a test like Turing’s. From Hegel’s point of view, there is not even a test to determine whether a human is thinking or merely simulating conscious existence.
Personal conscious identity is an assumption. “Like other elements which form our bedrock of assumptions,” Ballal says, “personal identity is without proof” (Ballal 86). Normally, this is not a problem. The knowledge of self-existence is clearly a priori and analytic; it is a self-supporting truth, exempt from the attacks of epistemological skeptics. We can then deduce that any similar being sharing the same basic physiological structure probably shares the same conscious existence. These assumptions are rarely challenged except by the highly fallacious solipsism of young children. When we examine a computer, however, the same assumptions cannot be applied.
Therefore, the Turing test can only go so far, for the assumptions it rests on are small in number. We must keep in mind that the Turing test is only a tool, not a proof. The test was not designed to tell whether machines can think. After all, Turing himself says that question is “too meaningless to deserve discussion” (Turing 57). The test is a yardstick with no predetermined end. There is no perfect score for the test; the most current machine defines the best result.
As machines continue to advance, the best result will constantly improve, stopping only when technology advances to its peak. Thus the Turing test can only answer the question “Can machines think?” in two ways: “No” if technology stops advancing, or “We don’t know yet” if it has not stopped. Ultimately, the Turing test does have flaws and limitations, but that should not sharply downgrade its usefulness as a tool for measuring a computer’s cognitive abilities. As science grows in scope, more tests may be devised to gauge these abilities, but for current use, the Turing test clearly accomplishes what it set out to do. Perhaps it does not offer a comprehensive proof, but it does lend insight into areas of science which were previously