Thymos - Philosophy, Art and Gung-Fu

mmmm fresh rant. Also: go away - this rant not for you.

Monday, August 30, 2004

Why Artificial Intelligence can Never Get it Right

Introduction

Artificial Intelligence, or AI, is a hot topic these days. Whether that is driven by sci-fi movies, by the prospect of creating and extending something like human existence, or by the instinct or will to dominate over all other life forms, scientists seem hell-bent on creating a thinking computer: one that doesn't merely react, but understands and acts of its own volition. That is, on creating an artificial intelligence in mimicry of, or even possibly in superiority to, our natural intelligence.

In this paper I will argue that AI enthusiasts may have made a fundamental error which, in my view, will preclude them from ever attaining their goal of creating a computer-born synthetic intelligence. My reasoning is simple: the error is apparent in the very name they have chosen for their endeavor, artificial intelligence. This, I argue, is a pair of incompatible terms, and their incompatibility points to the impossibility of the endeavor, which stems from a lack of understanding of the very thing they need to know: what intelligence is in itself, and what method would be required to know this.

How can I argue such a presumptuous thing, and what consequences of my argument may be observed? Before I can argue such a presumptuous thesis, however, a few more presumptions about my presumptuousness must first be admitted: 1) I fully admit I know next to nothing about artificial intelligence, theoretical mathematics, physics, advanced electrical engineering, advanced computer programming, or advanced (or even basic) neurochemistry. 2) I do not intend to discuss or examine any of these complicated subjects in my critique of the foundations and methods of artificial intelligence, or in my discussion of actual intelligence. One might admit it hardly seems that someone with my apparent lack of intelligence should be arguing about intelligence at all.

That being said, if I may be permitted, I would like to explain that my intention is not to present a positive thesis or to prove anything new in these various subjects which are beyond me, but only to critique what I think is a mistake made within them at a meta-level, a philosophical level, for which my abilities are yet to be tested. In doing so, I appeal to the common belief that ordinary-language argument is suited to such an endeavor, in order to make my case about mistakes made by a technical discipline. In other words, I shall use the language of these schools as it has been passed down to me through a few articles, but mainly through common media and parlance, in order to make a simple argument in a non-technical way, with perhaps technical consequences. My readers will be the judge of whether my language games win anything useful or true, or whether my ignorance of technical terms and actual goals renders my argument null and void.

So I return to the methodological question: how shall I proceed, and what will I discover when I do? It is this question which is essential, for the question of method is what I believe is lacking in the study of AI; an examination of the name of that study, and of its underlying understanding of intelligence, reveals a possible methodological error within it. Ultimately, I will argue in this paper that another method and subject matter need to be examined before a computer-based intelligence could possibly be created; that this subject matter, our own mind, is the only example of intelligence we have, so any other models and methods are flawed; and that the only existing and demonstrable method for examining it can be found in antiquity and must be recovered if one wishes to acquire any knowledge here.


Artificial Intelligence vs. Intelligence?

The term Artificial Intelligence was coined in 1956, but it was inspired by Alan Turing, who in 1950 asked in "Computing Machinery and Intelligence": "Can machines think?" Turing proposed the famous Turing Test to judge whether an attempt to make a machine think has been successful. The test is purported to answer the question "Can you make a machine that is intellectually indistinguishable from a real person?" Turing predicted that by the year 2000 a computer would be sufficiently "intelligent" to fool a human interrogator 30% of the time over a five-minute test, its responses appearing indistinguishable in intelligence from those of another person. Some AI enthusiasts claim that this prediction has come true, and we can even argue with some of these AI "bots" in web-based chat applications online and see for ourselves the clever responses they give.
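
To make the mechanics of the test concrete, here is a minimal sketch of the setup as I understand it. It is entirely my own illustration: the canned machine_reply and the coin-flip judge_is_fooled are hypothetical stand-ins, not Turing's formulation or any real chat bot. Note what the loop actually measures: whether the judge was fooled, not whether anything was understood.

```python
import random

def machine_reply(question):
    # Hypothetical stand-in for the program under test: a canned response.
    return "That is an interesting question. What do you think?"

def judge_is_fooled(transcript):
    # Stand-in for a fallible human judge. Here it is just a weighted coin,
    # set at the 30% fool-rate Turing predicted for the year 2000.
    return random.random() < 0.30

def five_minute_test(questions):
    # One "conversation": the judge reads the transcript and guesses.
    transcript = [(q, machine_reply(q)) for q in questions]
    return judge_is_fooled(transcript)

trials = 1000
fooled = sum(five_minute_test(["Can you think?", "Do you understand me?"])
             for _ in range(trials))
print(f"judge fooled in {fooled} of {trials} conversations")
```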

As fascinating as this is, however, there is a problem here, present right from the start: I would like to argue that there has been an oversight on the part of Turing and his descendants. When Turing asked in 1950 "Can we make a machine think?", the test he proposed to determine whether it thinks was flawed, for it does not test whether a machine can actually think, but only whether it can fool a human into believing that it can. What's the difference? The difference is that a machine fooling us into thinking that it thinks does not establish that it actually thinks. To draw that conclusion is a fallacy.

The usual response is that this is merely a problem of perfecting the system: that with enough time and structure a computer which can fool any human into thinking it is intelligent 100% of the time could in theory be built, and that this would make it sufficiently comparable to intelligence. And this may be possible. However, it still would not make the computer intelligent, only able to mimic intelligence, which does not necessarily (or even probably) mean that it is actually intelligent. The assumption that these two cases are identical is a confusion of cause and correlation: just because two things may be observed to have the same effect (talking like a human) does not mean they necessarily (or even probably) have the same cause (a real understanding, like the average human's, of what they are saying).

Any other precondition for "artificially" intelligent, non-contradictory processing of information that could be, or even resemble, "natural" intelligence relies on the two intelligences being sufficiently (and actually) identical, not merely possibly identical as displayed by a test designed to trick fallible intelligences into believing that a thing which mimics human behavior authentically thinks. At the very least, the two would need to be fundamentally identical insofar as they correctly collect, process, and conceive of information both sub-consciously (involuntarily) and consciously (voluntarily). If we take intelligence to be merely the ability to react to certain stimuli (with perhaps no understanding of those stimuli), we strip the word of its meaning. Intelligence presumes, and is defined by, understanding. If you do not have the latter you do not have the former, although you may assume and claim that you do, because words may be redefined or misunderstood by an intelligence. The very fact that an intelligence can do this shows that understanding is what true intelligence requires.

What the scientists in the 1950s were actually looking for was not Artificial Intelligence, but intelligence itself. So, what do you need to know in order to know you have created real intelligence? In order actually to know that you have created intelligence, you first need to know what intelligence is in itself, on its own terms, using the correct method as defined by the essence of the subject at hand.

Where can we find this model of intelligence to study, and by what method should we study it? In the only place where we can know the subject of our study with certainty: our own mind. For we cannot know with certainty any mind other than our own, and why create a new model when one already exists to examine, together with a method for examining it that has existed for as long as mathematics has existed as a discipline, namely the Intelligible Method? Notice that I did not say the brain, for AI is not looking to recreate only the machinery that houses the interrelations and reactions of neurons, electrical impulses and chemical reactions, but those interrelations and reactions in and of themselves. In other words, we are generally looking for the software in addition to the hardware, and I intend to demonstrate (or at least make plausible) the necessary distinction between the two below.

To know how to use the Intelligible Method correctly, and what subject matter it may be used to examine, we must look back into the history of philosophy and reclaim the methods, in their necessary relation to one another, which we have lost. For the Intelligible Method is the only method that can examine with certainty the content and subsistence of one's incorporeal mind.

The Four Methods of Philosophy

Many philosophers have discussed the various methods and subject matters of philosophy. The arguments that we will look at here are chiefly the arguments of Thomas Aquinas and a few others. We will look at these authors in order to reclaim the lost methods of philosophy and to see which method or methods can help us know about intelligence on its own terms.

In his work The Division and Methods of the Sciences, Thomas Aquinas argues that the entire body of possible human knowledge can be divided into two categories: Speculative (or Theoretical) Science and Practical Science (or moral action). Speculative science may be further subdivided into Natural Science (or what we now simply call science), Mathematics, and Divine Science (sometimes also called Theology or First Philosophy). The arguments for the existence of these sciences he derives from many previous authors, including Boethius, Plotinus, Aristotle, Plato, Al-Farabi and others, and they cannot all be examined here. All that need be examined here are the methods used to study these proposed sciences, the necessary interrelations between them, and one method in particular, the Intelligible Method.

Very briefly put, the three sciences each have their corresponding method of discovery, and insofar as we can think and talk about these subjects, the thread of logic binds them all to human consciousness as well. We can know what method ought to be used for each subject based on the essence of that subject. This may appear circular, for it may be asked: by what method do we study a subject to learn its essence, in order then to know what the proper method for studying that subject was? That being said, at least two conditions conspire to make this a sensible transition: 1) all four methods are interrelated and rely upon one another for their indubitable or self-evident foundation, and as such 2) all four methods have a self-evident or indubitable foundation upon which they can build, either native to that method or within a sister method. The very fact that we ask the question means we start somewhere and may follow the chain of reasoning to what Aquinas claims is something certain.

What are these methods? What are these subject matters? How can we know they are self-evident or indubitable? As much (if not most) of the history of philosophy is concerned with answering these somewhat controversial metaphysical and epistemological questions I can only limit myself here to some of the arguments that Aquinas presented.

The four methods of philosophy, and the sciences to which they correspond according to Aquinas, are:

1) The Empirical Method, which allows us to observe and study things in motion without intellectual abstraction to concepts, but in which we may infer conclusions and hypotheses, using logic and cause-and-effect reasoning, about the laws or form that may govern and bind all organized matter (i.e. Physics, Chemistry, Biology, etc.).

2) Mathematical Reasoning, which allows us to consider and discover mathematical objects abstracted into applied concepts from the material world (length, breadth, depth, weight, measured in magnitude and number), but in which we may also consider pure theoretical mathematical concepts distinct from and not abstracted from the material world (such as perfect geometric shapes, or length in itself).

3) The Intelligible Method, in which our conscious thought may be directed to consider, catalogue and reflect upon Thought itself, all its necessary properties, and its possible or necessary causes, including form and organization in itself. The classical and medieval philosophers also argued that this method allows one to reason logically to that which is pure form or incorporeal subsistence, that which is Divine or God, hence the name Divine Science. Insofar as each science moves from material objects or content, to the immaterial form of content, to form itself, Aquinas argues that all of the sciences lead to the Divine Science and to the First Cause for their causal explanation; hence it is also called First Philosophy.

4) Finally, Logic, which is the method of correctly expressing discoveries in these sciences in linguistic form, and which hence runs through all of these sciences and corresponds and/or coheres to all these methods as much as reality and consciousness dictate.

Thus constituted, Aquinas argues, these four methods and three sciences comprise the entire body of speculative or theoretical knowledge. If this is correct, they constitute the whole of possible knowledge about the entirety of the universe.

But how does this relate to AI and intelligence? Now that we have a sketch of the medieval methods and how they may correlate let us resurrect a more modern example of the intelligible method to see what light it may shine upon our subject.

First Philosophy or Descartes’ Intelligible Method

Perhaps the best-known recent European author to use the intelligible method was René Descartes, in his Meditations on First Philosophy. In this modern philosophical work Descartes uses the intelligible method (hence the title: "meditations") to discover the nature of his own consciousness - a single piece of indubitable truth, the one thing which he cannot doubt, from which to discover other things that have the same indubitability. His other claims we will pass over for another time. All we are interested in here is his process of using the intelligible method, applied to our own consciousness.

Basically, then, he doubts the opinions and truths that he had been raised to habitually believe, going so far as to set aside even his own material existence, in order to separate it all from that which he cannot doubt, arguing that whatever he cannot doubt must be true and can be used as a base for all scientific enquiry. In doing so he strips away all sensory input and observational methods (for they do not bring deductive certainty but only probability, which can be doubted), and he realizes that when he reflects on his own consciousness he cannot doubt that he exists as a thinking thing. For the act of doubting is, he reasons, itself a thought. Hence the famous realization (sometimes called the Cogito argument): "I think; I am". But what kind of claim is this? By what method does he make this claim? Let us look at it more closely.

Often the Cogito argument is translated or quoted as: "I think, therefore I am." This rendering is usually interpreted to mean that Descartes may infer by logic that because he thinks, he therefore exists. The word "therefore" is a technical term in the method of logic meaning "because of the truth of this assertion, I may truthfully infer this conclusion". This may seem an insignificant change from the version above, but perhaps it is not. For if "I think, therefore I am" is true because logic dictates it is so, a problem arises: to what exactly does the "I" refer? This is a frequent rebuke from critics of Descartes: the "I" is embedded in language, and as such presupposes that it "exists"; to claim that the "I" exists is therefore an assumption, not something demonstrated by logic. Logic cannot tell you with certainty that the "I" exists, or what kind of thing it is.

If this is what Descartes meant by his Cogito argument, his critics would be right that he was wrong. However, I would argue that although analytical philosophy is correct on this point, the point is irrelevant: the Cogito cannot be proven with logic because it is not demonstrated using logic at all, but known to be indubitable by the intelligible method. Technically speaking, it is not even proven to be true, given that true and false are technical terms from logic; it is only a phenomenon which we cannot doubt, for the act of doubting is an instance of the phenomenon itself.

To put it another way, "I think; I am" is the indubitable foundation of the intelligible method because to doubt it is to prove it. And from this realization (or intuition) we may then infer a necessary property of thought: thought is active. Thoughts are issued by a thinker. This is not a linguistic convention but a necessary condition. Thoughts come and go, voluntarily and involuntarily; some return to us from another property our thought has, memory; some thoughts are seemingly unique and new.

We may perceive the presence of all these internal observations and infer their causes for ourselves, but they all rest on the indubitable knowledge that to doubt thinking is to think, which makes the doubt an impossibility. Not because logic dictates so (for there is no contradiction here), or physics (for we are not sure we are talking about a physical thing yet), but because it simply is so, as proven by the fact that we think whenever we try to doubt that we are thinking. To try to doubt that you are thinking (not in spoken sentences but in your own mind; try it for hours of fun) is closer in nature to a mathematical paradox, and is quite impossible.

This is the indubitable base of Descartes' Meditations on First Philosophy, and the indubitable base of the intelligible method. This short argument of course does not prove anything conclusively, but I hope I have shown at least the plausibility of the method for making knowledge claims about our own minds that have an indubitable base. If so, then let us turn back to our theme and see what this ancient leviathan we have resurrected may make of our ultra-contemporary mecha-science: AI.

The Computer: An Image for the Mind

Now that we are aware of the existence, and I hope the plausibility, of the Intelligible Method for our subject matter, it is time for us to make a small foray into consciousness as a proof of concept. However, instead of using the mind as a blueprint for the computer system, I will proceed like Plato, who many years ago used the city (an easier thing to observe) as an image to compare to a complicated and hidden thing (namely the soul). In my case, in keeping with the flavor of my particular time, place, and essay, I will use a typical computer system as the image for the mind, and see whether the parallels we may draw cohere with the Intelligible Method and its subject, namely me, for the intelligible method does not allow me certain access to anyone's mind but my own.

To start then, a question: On any modern Graphical User Interface, when one moves their mouse pointer to an icon in order to click it and activate a program, where exactly does the icon exist? What is the subsistence of that icon? What exactly are you clicking on? This is actually a much harder question than you think. Is the icon in the electricity traveling through the conductive pathways on the mainboard and CPU? Is it in the binary signals, the ones and zeroes, collected and interpreted by the CPU and translated into machine instructions? Or is the icon in the programming language that controls and describes the operating system that is compiled and saved on the hard drive and is rendered every time you power on the computer? Or is the icon somewhere else completely, like in the phosphorescent coating being illuminated by the cathode ray tube in your monitor (or the LEDs in your newer monitor if you are so lucky)? Or maybe the icon exists nowhere but in your own imagination, or consciousness, or in language. How can we discover which it is?

Well, what is the icon? It is a user-interface convention: a modifiable graphical representation on the screen, by whose manual selection a user can activate the program it represents. As such, the image we see on the monitor is not the icon, for we can take an exact copy of that image file and look at it in a graphics program such as Adobe Photoshop. There it looks exactly like the icon on the Desktop, but when clicked it doesn't activate the program. The monitor image of the icon is therefore not the icon itself, though it is still a part of it, without which the icon would not be.

Well, what about the electricity on the mainboard; the binary code interpreted from those charged particles of differing frequencies racing around the motherboard; the source code, in its native or compiled state, stored magnetically on the hard drive or rendered by the CPU at boot-up and during operation; and all the other conditions required to make that icon appear on the screen and behave as it does? To circumvent a longer discussion, I hope it is now apparent that all of those necessary conditions must be present for the icon to appear and act as it does. No electricity, no icon. No program, in its first native, then compiled, then running state, no icon, and so on. All of these material causes (electrons), formal causes (electrons of differing frequency arranged and rendered in a certain cascading, responding, organized sequence), and efficient causes (someone inventing and making the computer, you buying it and turning on the power switch, etc.) conspire to provide you with the final goal: the icon, by which you can activate a certain program in a computer system. Take away any of these causes of the icon, and you take away the icon.

Do you agree? This seems plausible, if perhaps pointless to discuss. Yet however interesting the sequence of Aristotle's causes may or may not be when applied to rendered programmatic conventions for the operation of a computer system, it serves to prove my next point: as we can see above, the icon is not simply a physical thing in itself. In fact, the icon is essentially not a physical thing at all, but an incorporeal thing, an immaterial subsistence. I don't mean simply that it is an Idea, a Form, a non-material concept that someone compiled from existing ideas and conventions when creating the Graphical User Interface (although it is essentially that). I mean that the icon, the icon you may click on, is rendered by a computing machine, and when it is rendered, the electricity, the program (i.e. the construction and organization of Ideas represented by electrons, and the organization of that electricity in a certain cascading, reacting sequence), and the act of supplying the hardware with that electricity together render that particular icon and give it incorporeal existence. The idea that the icon is electricity alone is simply impossible. Which electron is it? Which group of them? They pass through the mainboard faster than you can know, so which electrons were the icon in which millisecond?

If you agree with the argument above, that none of those conditions may be removed with the icon remaining an icon, then you must also accept that the icon you see on your computer screen (along with the sub-programs, algorithms, kernel, hardware device drivers, and operating and file systems that underlie it) is a representative image of an incorporeal instance of that Idea, together with material organized to represent and render the operation of that Idea, combined in a particular space and time; and that these things are simply not material alone, and not even primarily. For if you turn off the computer they are no longer being rendered and no longer exist. Similarly, if you plug in a computer with no operating system installed, it has all the same electrons running through it that it does with the operating system installed; it is the incorporeal organization (the Form itself), and the ability of that particularly organized matter to render information, that makes all the difference.
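
To make the layered argument above concrete, here is a minimal sketch using Python's standard Tkinter toolkit; this is my own illustration, not anything drawn from the essay's sources, and the file name icon.gif and the launch_program handler are hypothetical. The picture alone is not the icon: only the image, its binding to an action, and the running event loop together render a clickable icon, and stopping the loop (powering off) makes it cease to exist.

```python
import tkinter as tk

def launch_program(event):
    # The behaviour the icon stands for: activating "the program".
    print("program activated")

root = tk.Tk()
icon_image = tk.PhotoImage(file="icon.gif")   # hypothetical image file: not yet an icon
icon = tk.Label(root, image=icon_image)       # the rendered picture on the screen
icon.bind("<Button-1>", launch_program)       # the binding that makes the picture an icon
icon.pack()
root.mainloop()   # stop this loop (turn off the power) and the icon no longer exists
```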

Why is this important? It is both important in itself but perhaps more important for the next point I wish to make given my subject matter: This example above is exactly how I would argue the human mind or consciousness works.

We know we have the thinking subsistence: the thinking thing that issues thoughts at will, the thing that remembers, that feels, that is curious, that has learned to recognize complex patterns. We cannot deny it, for denial is itself a thinking act, an affirmation of thought caused by a thinker at will, a thought which may be reflected upon along with the impossibility of its denial, and which is remembered as something thought about before and hoped to be thought about further; and there are many other related properties and attributes, too many to list here. Not all the attributes are certain, and not all of them are always present - some come and go - but the thinking thing is indubitable.

And in this way we may infer that the incorporeal mind and the corporeal brain operate something like a computer operating system and computer hardware, respectively. The brain is a collection of organized particles which render a cascading, responding sequence of electrical impulses and chemical reactions into information, into true and false, into binary code (or something with more logical states than binary code, though it seems impossible to have fewer than the two states of "on" and "off"). The mind is organized like an operating system, yet is much more complex: we have a thinking thing which is self-aware (the kernel) and may issue thoughts at will in disparate formats (linguistic, musical, mathematical, logical, imaginary, physical commands to voluntary members, etc.) which we may internally perceive; a short- and long-term memory which can recall past thoughts; a pattern-recognition / logic / meaning interpreter which builds a database of meaning and logical relations, matrices upon matrices of meaning and reference, and can both unreflectively (intuitively) and reflectively map these matrices of meaning and detect contradictions within them; and more.
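
As a toy rendering of that analogy (my own illustration only; every class and method name here is invented for the purpose, and no claim is made that a mind actually reduces to this), the sketch below gives the "kernel" that issues thoughts, the memory that recalls them, and the meaning interpreter that flags contradictions:

```python
class Mind:
    """Toy sketch of the operating-system analogy: not a model of a real mind."""

    def __init__(self):
        self.short_term_memory = []   # recent thoughts, quickly displaced
        self.long_term_memory = {}    # past thoughts, recalled by key
        self.meanings = {}            # the growing "database of meaning"

    def issue_thought(self, content, fmt="linguistic"):
        # The self-aware "kernel" issuing a thought at will, in some format.
        thought = {"content": content, "format": fmt}
        self.short_term_memory.append(thought)
        return thought

    def commit(self, key, thought):
        # Move a thought into long-term memory so it can return to us later.
        self.long_term_memory[key] = thought

    def learn(self, concept, meaning):
        # Extend the matrix of meaning, flagging a contradiction if one appears.
        if concept in self.meanings and self.meanings[concept] != meaning:
            print(f"contradiction detected for '{concept}'")
        self.meanings[concept] = meaning

me = Mind()
t = me.issue_thought("I think; I am")
me.commit("cogito", t)
me.learn("icon", "a clickable representation of a program")
```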

Further, unlike a computer program, we also have the ability to learn and to adapt, both physically (the brain remaps its pathways as it needs to, bypassing faulty areas) and consciously: our consciousness bypasses (ignores, forgets, or causes us pain when we think upon) thoughts which violate its psychological integrity, as much as possible, so as to keep the incorporeal and corporeal existence alive, well, and functioning as best it can. We don't quite do this voluntarily, but we can voluntarily focus upon it and realize that we are avoiding thinking about something because it, or a consequence of its truth, is unpleasant. The list of possible discoveries about the inner machinations of one's mind goes on.

This is not mere speculation, for the foundation of the intelligible method gives us a basis on which to start organizing these properties, although much work would need to be done to map even one mind, never mind what we could then postulate to be common to all minds. And given that not all minds are exactly alike, or equally well educated or formed, there could be drastic differences between them. Still, it is plausible that such an endeavor could be carried out using the methods argued for above.

Further, as with the argument above, we can know that we are an incorporeal subsistence. We, whatever that "we" is, are a thinking thing which is rendered by the brain but is not simply the brain. Disrupt the organization of my brain matter (a nice way of saying do serious physical damage to my brain) and you do not alter the material world one bit (you have not blotted any atoms out of existence), but you do ruin its organization, the immaterial organization and sequence of events which render an incorporeal consciousness. And in disrupting my organization, the physical evidence seems to show, you disrupt the possibility of that consciousness continuing to exist, at least in that physical form. With the methods I have been discussing thus far I do not see any reason to believe that you will continue to exist, for what material will render that consciousness now?

I cannot prove this either way. Perhaps the soul is immortal, and a God does exist, but given the tiny sketch of the intelligible method that has been given here I do not think such a conclusion can be reached. Still, that is the method you would need to use (perhaps together with logic and a few others) in order to prove or disprove it, and previous arguments for the immortality of the soul and the existence of the Divine do exist.

If this sketch of consciousness is true and the analogy accurate, then it seems we have made an interesting discovery about artificial intelligence: it has simultaneously failed and succeeded! 1) The goal of making a computer think has already been achieved, for rendering information would seem to be, simply put, thought, and Turing wanted to make a computer think; yet 2) artificial intelligence has failed to create an artificial intelligence, because what it has produced is not an artificial way of thinking at all but the only way of thinking. It just doesn't think very well, in enough ways and with enough finesse, to be considered to understand very much, and as such to be truly intelligent. Is it any surprise that the evolution of operating systems with redundant memory and improved kernels has progressed unwittingly, based simply on what worked better, towards a thing that is more and more intelligent (and hence useful for us, the Creators), just as our ancestors must have slowly evolved to be more intelligent or simply died out?

All speculation aside, however, there is one final component of true intelligence that AI has missed and cannot succeed in bringing into existence using its current methods, because this thing defies all current methods of studying it: human free will. Volition is the essential difference that computer systems lack; they can only do what they are told. Kasparov was not beaten by IBM's Deep Blue but by the hundred or so programmers who told Deep Blue what to do under which conditions. Deep Blue itself did not decide; the programmers did, in advance. Human intelligences have the ability to choose, not entirely randomly but not entirely determinately, a course of action or a way of thinking about something - they can be inspired with a new connotation of some meaning - and can very simply will to not think, or to think something: a sentence, a piece of music, an image, even to move, all at will, in a sequence perceived in time.

The exact cause of this, the magical component that allows the determinate matter of our brain, no matter how we explain it away, to somehow render a consciousness that can will freely (albeit freely within a determined set of choices) in spite of its seemingly determined state, eludes all attempts to understand it conclusively, by any method. Without more knowledge of exactly how the voluntary intelligence works, how it starts, and how it learns and grows, AI can never succeed in creating an intelligence, artificial or otherwise. For intelligence requires voluntary understanding of meaning (you have to get it yourself), and then voluntary extrapolation and/or action based upon that understanding.

Perhaps that is what Turing's followers in 1956 meant all along by calling it artificial? That they wanted to replicate that probably impossible set of conditions, knowing full well that, because free will is a precondition of intelligence, true intelligence was probably impossible, but that a reasonable facsimile might be possible. If you could make it work, it might be truly artificial. Perhaps this is the true goal of AI, but I believe I have shown here that the methodological addition of the Intelligible Method will have to be made, and an in-depth study of the mind will have to be conducted, before such an artificial intelligence could be born.

Conclusion

To conclude, I hope I have shown in this paper the possible oversights of AI and made plausible the use of the Intelligible Method for further discovering the properties of intelligence. Whether or not I have, you will simply have to decide for yourself. At least, until we have computers that can do that for us.

Bibliography:

Artificial Intelligence – An Agent Approach; Eric McCreath; The Australian National University, 2003; http://cs.anu.edu.au/people/Eric.McCreath/ai/ai01ho4.pdf

The Emperor's New Mind; Roger Penrose; Oxford University Press, 1989

(Note: this was a 2nd year paper I submitted for a philosophy class - if you would like to see the paper please contact me)


4 Comments:

  • At 9:09 AM, Anonymous Anonymous said…

    Sorry, I just read the first few paragraphs of your paper. But it seems that you are arguing that since "real" intelligence is different from "artificial" intelligence, then we are looking in the wrong place (or something like that). In other words, we need to look into our own "minds" as the model of intelligence, and all computers can do is mimic intelligence. It seems right that computers are just mimicking our intelligence. But there is no concrete definition of intelligence. And the best info we have of other people having intelligence is that they act/behave like we do. If all that I can know of others' intelligence is how they act, I want to say that they are intelligent. So, what is the difference between a computer that responds to stimuli and other people that respond to stimuli? And remember, at the time that Turing proposed his test, behaviorism was very influential.

     
  • At 12:30 AM, Blogger JB said…

    Hi, thanks for your comment. You are the first to make one. Now I'm afraid I have to shoot you down :)

    My point is that there is a logical error in the scientific method, and that would include Turing's test. Science progresses as such:

    if conditions P occur (can respond for 5 minutes to my chat program like a human 30% of the time) then hypothesis Q is true (it has a certain property, ie: therefore it thinks).

    The problem is A) that's a logical error (you cannot infer that even if your test conditions support your hypothesis a million times that your hypothesis is certainly true, or even probably true, or will be in the future) because the million-and-first time it may not occur and there is no way of predicting for sure, and even worse B) you certainly cannot from that then state what the essence of something is (such as intelligence) and then claim that something you do not completely understand (a computer) has something else you do not completely understand (intelligence).

    That's quite like putting someone in a flight simulator the air force uses, blindfolding them, and saying that if you believe you are flying for a sufficient amount of time then we have sufficiently made you fly.

    They didn't actually fly even though they thought they did, and you can't know if they flew unless you know (completely and utterly) what flying is.

    Plus I assert that you CAN know what intelligence is. That was the point of my paper. To do so you would need to learn and use the intelligible method.

    Make sense?

     
  • At 9:26 PM, Anonymous Anonymous said…

    Jeez - please learn to write more concisely. You spent five paragraphs just telling us you were going to make an argument. Then, you buried your point so much that I never really found your point about the Intelligible Method. You know, newspapers are written to make the point at the beginning and then add more information for the few who want to see the details - not a bad approach.

    I did see your point about free will and would like to dispute that. You say that computers can't learn and therefore cannot possess free will. Astute of you to simply connect free will to learning. However it is not true that computers can't learn. Maybe most computer programs you encounter are like that, but not all.

    A couple of examples. There are speech recognition programs that are trainable. You practice with them and they get better at recognizing your speech. And then there is TD-gammon. TD-gammon is a backgammon program. As I understand it, its creator had programmed a number of backgammon games and had developed games that played really well. They were full of information that he placed there like you mentioned when you talked about Deep Blue. But, then he went back, ripped out all of the knowledge of how to play and put in a neural network board evaluation routine. He then set the computer into playing game-after-game against itself. Learning how to play backgammon. The end result was a world class backgammon playing system that surpassed his previous works and even taught the backgammon world an opening move that had not generally been used before.

    Does that machine have free will - to the extent it has intelligence at all? It made moves that its maker never did or would know how to do.

    Finally, to your point about the AI people not defining things correctly, I would say that it isn't like the AI people are just sitting around stuck on a philosophical dilemma. Many think that the Turing test is a sufficient condition, but not a necessary condition to having machine intelligence. But hey, you've got to start somewhere so Alan Turing did. The test is more of a mimicking human test than anything else. We may well have programs that many would consider quite intelligent that would never possibly be able to pass that test. Do we have them now? No, I wouldn't say so.

    There are many branches of AI now. Some focus on learning - considering that the essence of the problem. Others deal with reasoning. Some work on games. Others examine how people do it. Others just look for something they can implement which seems useful.

    -Michael
    tague@win.net

     
  • At 10:13 PM, Blogger JB said…

    Dear Michael,

    Thanks for your post. I have a few responses:

    "You say that computers can't learn and therefore cannot possess free will."

    Um, no I didn't. I said that software cannot understand concepts. Therefore, they cannot learn. Because learning requires comprehension.

    Software doesn't learn - yes it can process new instructions which are fed it by a human or by another piece of software, but it can only process instructions. An If loop does not constitute it "changing its mind" - it is still only processing instructions given to it by a human either directly or indirectly.

    That's not learning.

    josh

     
