Subject: AI: Can neural networks learn logic?
Category: Computers > Algorithms
Asked by: rpcxdr-ga
List Price: $10.00
Posted: 22 Dec 2003 12:03 PST
Expires: 21 Jan 2004 12:03 PST
Question ID: 289529

There is no answer at this time.
Subject: Re: AI: Can neural networks learn logic?
From: xarqi-ga on 22 Dec 2003 14:09 PST
Yes, but the rules of logic are well-defined and are more easily "built in" than learnt. Proof of the assertion "yes" is that a symbolic notation can be used to make statements, and a neural net can be given such statements as input and trained as to which are "true" or "false", or which are proper inferences and which are not.

As an example of a more definitive approach, consider the Prolog system (there may be more recent examples). Prolog is a logic programming language: axioms are defined, as are inference rules, and hypotheses can then be stated; the system will attempt to "prove" them by applying the inference rules to the axioms (a small sketch of this idea follows below). I did hear that the rules of Euclidean geometry were once given to such a system and it was allowed to "free-wheel", creating new theorems by randomly combining the axioms. Before long, it had recreated most of contemporary analytical geometry. The snag is that all of these truths were equally important as far as the system was concerned; it was unable to identify those which were in some way "fundamental", or which had practical application.
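To make the axioms-plus-inference-rules idea concrete, here is a minimal propositional forward-chaining sketch in Python. It is not Prolog, and the geometry-flavoured atoms and the single rule are invented purely for illustration.

# A minimal sketch (not Prolog itself) of the axioms-plus-inference-rules idea:
# forward chaining over propositional Horn clauses until the hypothesis is
# derived or no new facts can be added. Atom and rule names are illustrative.

def forward_chain(facts, rules, hypothesis):
    """facts: set of known atoms; rules: list of (premises, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)      # fire the rule
                changed = True
    return hypothesis in known

axioms = {"right_angle(ABC)", "right_angle(DEF)"}
rules = [({"right_angle(ABC)", "right_angle(DEF)"}, "congruent_angles(ABC,DEF)")]
print(forward_chain(axioms, rules, "congruent_angles(ABC,DEF)"))  # True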
Subject: Re: AI: Can neural networks learn logic?
From: mathtalk-ga on 22 Dec 2003 15:15 PST
My comments are similar to xarqi-ga's. In the simplest case, a neural network can easily be trained to "learn" to compute a Boolean function of its inputs (see the sketch below). At the other extreme, if "perform logic" entails viewing a picture of red and white cows and leaping to the conclusion "there are no blue cows", then no, I don't think a neural network can be trained to do inductive reasoning in any generality.

Presumably rpcxdr-ga's actual interest lies somewhere in this range; a more precise description is needed to bring it out. The notion of bridging "symbolic" and "grounded" intelligence was no doubt intended to point us in the right direction. For example, could a neural network be trained to do proof verification? Probably so, although I'm not sure whether published research exists in this area. Could a neural network be trained to do automatic proof generation? Theoretically yes, although it would surely be the wrong tool for the job.

regards, mathtalk-ga
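A minimal Python sketch of the first point: training a single threshold unit with the classic perceptron rule to compute two-input AND. The learning rate and number of passes are arbitrary choices for the example.

# Training a network to compute a Boolean function: a perceptron fitting AND.

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                       # truth table of AND

w = [0.0, 0.0]
b = 0.0
lr = 0.1

for _ in range(20):                          # a few passes over the truth table
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0   # threshold unit
        err = t - y
        w[0] += lr * err * x1                # perceptron update rule
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in inputs])
# expected output: [0, 0, 0, 1]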
Subject: Re: AI: Can neural networks learn logic?
From: rpcxdr-ga on 22 Dec 2003 15:16 PST
What I am looking for is an attempt to bridge the symbolic mechanisms of classic AI with the emergent intelligence of neural nets. Certainly computers have a very direct way to perform logic, for example with a language like LISP, and I agree that neural nets would not be a very efficient way to perform logic. Also, yes, in some way a pattern-matching truth assertion is a form of logic. But I have to imagine someone out there has tried to map one form of intelligence, neural nets, onto the other, logic. If there is no such mapping, then I imagine there is some very well thought out research explaining why, fundamentally, neural nets cannot be used to perform logic.
Subject: Re: AI: Can neural networks learn logic?
From: yosarian-ga on 23 Dec 2003 06:50 PST
Hi rpcxdr-ga. As far as I understood your question, you may find some answers in the field called "hybrid AI", which bridges connectionist and symbolic models. See http://www.cs.iastate.edu/~honavar/hybrid-ai.html as a starting point.

Neural networks learn some Boolean functions easily, while others (the famous example is the parity function, sketched below) are quite hard to learn. What researchers in hybrid AI try to do is combine the strong parts of both approaches. You may also look at Kearns and Vazirani's book on computational learning theory to get a feel for the research problems involved.

regards, yosarian-ga
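For concreteness, here is a tiny Python sketch of the parity function mentioned above (XOR generalised to n inputs); it just enumerates the truth table and is not tied to any particular paper or library.

# The "hard" example: n-bit parity, i.e. XOR generalised to n inputs.

from itertools import product

def parity(bits):
    """1 if an odd number of the input bits are 1, else 0."""
    return sum(bits) % 2

n = 3
for bits in product([0, 1], repeat=n):
    print(bits, "->", parity(bits))

# For n >= 2 no single linear threshold unit can compute parity, which is one
# reason it is the standard "hard" Boolean function for simple networks.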
Subject: Re: AI: Can neural networks learn logic?
From: nostromo-ga on 29 Dec 2003 11:08 PST
The answer should be yes. If the human brain, being a neural network itself, can do logic (although logic might not be its normal internal representation), then an artificial neural network should also be amenable to being taught logic. The brain (and therefore artificial neural networks) seems to be better with continuous data, while logic is discrete. Just as computers can do math better than humans (and therefore better than neural networks) in the old-fashioned way, by employing a register or Turing machine model, it is questionable whether it pays off to make a neural network learn discrete logic: while possible, it might not be the best implementation architecture!
Subject: Re: AI: Can neural networks learn logic?
From: nostromo-ga on 30 Dec 2003 13:41 PST
KBANN (Knowledge-Based Artificial Neural Network) and EBNN (Explanation-Based Artificial Neural Network) are methods that combine inductive and analytical (deductive) machine learning. A logic domain theory (deductive) is compiled into an artificial neural network, which is then readjusted with backpropagation to better fit the data and possibly improve the (deductive) domain theory. This is an interplay of inductive and deductive learning.

When you have lots of data and no background knowledge, you need to induce a theory. This is normally done with various data mining methods, among them fitting regression functions, building decision trees, and training artificial neural networks. When you have a (logic) domain theory and scarce data, you apply deductive logic reasoning. The question is what to do when you have some examples and some part of a domain theory: you have to do both, generalize (induction) and specialize (deduction). KBANN and EBNN let you do exactly this (a rough sketch of the KBANN idea follows below). This is also one of the hybrid approaches mentioned before.

Look at the online slides for Tom Mitchell's seminal book Machine Learning, chapter 12, which deals specifically with combining inductive and analytical learning with KBANN and EBNN (the book itself is of course even better):
http://ailab.snu.ac.kr/courses/ml_03/ml_03_ch12.ppt
http://www-2.cs.cmu.edu/~tom/mlbook-chapter-slides.html

You can find the original paper on KBANN by Towell and Shavlik (1994) here:
http://citeseer.nj.nec.com/towell94knowledgebased.html

Also look at the similar REGENT algorithm by Opitz and Shavlik (1997):
http://www.cs.umt.edu/CS/FAC/OPITZ/JAIR97/main.html

Or just type KBANN and EBNN into Google.
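Very roughly, the KBANN idea of compiling a rule into initial network weights can be sketched in Python as follows. The constants, the example rule and the helper names are my own illustration, not taken from Towell and Shavlik's paper; see the paper for the actual method.

# Rough illustration of the KBANN-style step "compile a rule into a unit":
# each Horn clause becomes a sigmoid unit whose initial weights encode the
# rule, and backpropagation would then refine those weights against data.

import math

W = 4.0  # "large" weight used so the unit initially behaves like the rule

def rule_to_unit(positive_antecedents, negated_antecedents):
    """Initial weights/bias for a unit encoding: head :- pos..., not neg..."""
    weights = {a: W for a in positive_antecedents}
    weights.update({a: -W for a in negated_antecedents})
    bias = -(len(positive_antecedents) - 0.5) * W   # fires only if all pos hold
    return weights, bias

def unit_output(weights, bias, truth_assignment):
    net = bias + sum(w * truth_assignment[a] for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))             # sigmoid activation

# Illustrative rule: flies :- bird, not penguin.
weights, bias = rule_to_unit(["bird"], ["penguin"])
print(round(unit_output(weights, bias, {"bird": 1, "penguin": 0}), 3))  # high, rule fires
print(round(unit_output(weights, bias, {"bird": 1, "penguin": 1}), 3))  # low
print(round(unit_output(weights, bias, {"bird": 0, "penguin": 0}), 3))  # low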
Subject: Re: AI: Can neural networks learn logic?
From: nostromo-ga on 30 Dec 2003 16:41 PST
Actually, this is very interesting. So interesting that I am now studying it myself :) Anyway, on the representation of Boolean functions with ANNs:

It is a well-known fact that any linearly separable Boolean function can be represented by a single perceptron (one input layer, one output layer, no hidden layers). A two-input Boolean function is linearly separable if its true and false points can be separated by a single straight line when drawn in the 2D plane (in general, by a single hyperplane). Linearly separable Boolean functions include OR, AND, NOR, NAND and IMPLICATION. Functions that are not linearly separable are those whose points cannot be separated by a single straight line in the 2D plane; for two inputs these are XOR and EQUIVALENCE.

However, even these can be represented by an ANN if you introduce a hidden layer with at least two hidden units (one for each line separating (1,0) and (0,1) from (0,0) and (1,1) in the XOR case); a hand-built example is sketched below. Modelling more complicated functions requires adding additional hidden units (or layers). For example, to express x1 XOR x2 XOR x3, you need three hidden units, and so on.

Have a look at the slides for Russell and Norvig's Artificial Intelligence: A Modern Approach, chapter on neural networks:
http://aima.eecs.berkeley.edu/slides-pdf/chapter19.pdf

One question which is not yet so clear is how to represent not just propositional logic (variables with true and false values) but full first-order predicate logic with neural nets.
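Here is a minimal hand-built Python illustration of the claim that XOR becomes representable once two hidden units are added; the particular weights and thresholds are just one valid choice, not taken from the cited slides.

# XOR as a two-hidden-unit network of threshold units, weights set by hand.

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # fires for (0,1), (1,0), (1,1)  -> OR
    h2 = step(x1 + x2 - 1.5)        # fires only for (1,1)           -> AND
    return step(h1 - h2 - 0.5)      # OR and not AND                 -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0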
Subject: Re: AI: Can neural networks learn logic?
From: scubapup-ga on 18 Jan 2004 20:15 PST
Back in college, one of our assignments for neural networks was to use the backprop algorithm to train a neural net to do XOR operations. It worked, and if XOR works it should be even easier to train it on the basic AND and OR operations (a training sketch along those lines follows below). But of course I'm talking about hard-and-fast Boolean math.
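For anyone curious, a small sketch of that kind of exercise: training a tiny sigmoid network on XOR with plain backpropagation in Python/numpy. The layer sizes, learning rate and epoch count are arbitrary choices, and convergence can depend on the random initialization.

# Backpropagation on a 2-4-1 sigmoid network fitting the XOR truth table.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1 = rng.normal(scale=1.0, size=(2, 4))                    # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))                    # hidden -> output
b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
# should be close to [0, 1, 1, 0] once training has converged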