
Intelligence as We Know It

Guest post by: Chad Ellison

 

There is a bit of fuss among scientists and philosophers regarding the possibility of producing an artificially intelligent being that is actually intelligent. It might seem pessimistic to say that human beings are not even close to being able to manufacture genuine intelligence, but that is my thesis. This is not an attack on technology or on the remarkable advancements that have been made in computer science; rather, my contention is that none of these advancements come close to instantiating intelligence as we know it.

While there is no doubt that artificial intelligence resembles actual intelligence, the belief that it can actually be intelligent is predicated on the assumption that intelligence is reducible to a mere pattern and collection of physical parts. If intelligence is reducible to physical material, we might be able to manufacture it. The problem with this idea, however, is that if so-called intelligence were reducible to physical material, it would be so far removed from our understanding of intelligence that we would have no business calling it that; it would not be intelligence insofar as we conceive of it as rational ability. If all that we call intelligence is reducible to matter in motion, then there is nothing rational about it; all thoughts would find a comprehensive explanation in physical cause and effect. On that view, we have no more reason to expect the physical hardware of our brains to produce rational conclusions than to expect a stone rolling down a hill to veer to the right rather than the left. Consequently, the idea that actual intelligence can be manufactured with mere physical parts leads to the destruction of all human intelligence rather than the construction of artificial intelligence.

There is another reason why our best technology is not close to being genuinely intelligent: human intelligence is a different kind of thing from mere input and computational processes. Things that differ only in degree are the same kind of thing. Two different numbers, 5 and 3, though different in degree, are the same kind of thing: numbers. Their differences can be explained by the essence that unites them. A dog, on the other hand, is a different kind of thing from a number; dogs are essentially different from numbers, and no increase in degree can bridge a difference in kind.

A computer computes and outputs the information it does for one reason and one reason only: it is programmed to do just that. However complex it may be, all of its computational processes and varying outputs are reducible to the specific conditions its programmer(s) gave it. A computer does not give us the correct answer to 2 + 2 because it knows that it is rational or correct; it gives the correct answer because it was programmed to output that answer under its specified conditions, and it could have been programmed to give any number of different answers. The computer neither values reason nor can it choose to act in accordance with or against it. By contrast, there seem to be two essential elements of human intelligence that computers do not have: volition and the ability to ascribe value. To make a rational judgment as a human being, both volition and the ability to value something seem to be necessary. First, some value of reason over non-reason must exist to serve as the motivational impulse to pursue rationality rather than irrationality. There is nothing like this genuine value of rationality in a computer: it does not value what is rational; it does not value anything. For such a being, pursuing rationality rather than irrationality is arbitrary. It is very difficult to conceive of a being as rational when it has no reason to employ reason; it is not rational to be indifferent about rationality.
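A minimal sketch may make the point vivid. The two Python functions below are hypothetical illustrations of my own, not anything taken from the essay: each "answers" the question of 2 + 2, one returning the correct sum and one returning something else, and each does only what its source code dictates.

# Two hypothetical programs "answering" 2 + 2. Each simply does what it was
# written to do; neither grasps that 4 is the rational answer.

def programmed_to_add(a, b):
    return a + b        # outputs 4 for (2, 2) because its author wrote "+"

def programmed_otherwise(a, b):
    return a + b + 1    # outputs 5 for (2, 2), obeying its program just as faithfully

print(programmed_to_add(2, 2))     # 4
print(programmed_otherwise(2, 2))  # 5

Neither function is more "committed" to its answer than the other; the difference between them lies entirely in how they were written.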

Second, a being that is causally determined by antecedent conditions does not have any power to will a thing because it is rational. It does what it does solely because the antecedent conditions cause it to. While it may act consistently with a rational conclusion (like providing 4 as the answer to 2 + 2), such an output is entirely independent of rational evaluation on its part; rationality becomes a superfluous category for it. Its computational processes are neither rational nor irrational; they are non-rational.

Because computer programs lack a genuine value of rationality and a genuine volition to choose a conclusion because it is rational, the combination of mere input, computational processes, and output is a different kind of thing from human intelligence. Computers could increase in their degree of complexity a thousandfold, yet they would be no closer than they are now to being intelligent. Inasmuch as intelligence involves the ability to be rational, we have not come close to producing the kind of thing that could be intelligent.
