The Dual Nature of Intelligence: Are Brains Like Light?

The book that really got me into A.I., back in 2000, was Doug Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid.  If you haven't read that book, you should.  It won a Pulitzer Prize.  The book focuses on Hofstadter's view that intelligence is largely about analogy making, and that self-awareness is tied to symbolic recursion.  So, a human brain may be a symbolic logic processing system that contains a symbol representing the entire system itself.

Symbolic approaches to A.I. are out of fashion at the moment, with the rise of neural networks and the connectionist movement, but the history of A.I. is one of reversals, with out-of-favor technologies suddenly surging back onto the scene after a major breakthrough.  Somewhere in academia the symbolicists are hoping to make a comeback, waiting patiently for neural networks to hit their limits.

I have never taken a stance either way because I see both sides.  Clearly the power of neural networks, particularly the recent advances in recurrent neural networks, is real.  We are on to something.  But in the back of my mind I've always thought about Tabletop, a program written by Robert French for his PhD thesis.  (French later published a book about it, The Subtlety of Sameness.)

Tabletop worked like this: French set up a table with items on each side.  Sometimes the items were the same, and sometimes they weren't.  Sometimes the items were in the same place on each side, and other times they weren't.  French then touched an object on the table, and the program was supposed to touch the object that best matched what French touched.  So, if both sides of the table contained a plate, a fork, a spoon, and a cup, arranged the same way, and French touched the spoon, then the program would touch the spoon as well.  But it got interesting when there was no spoon.  In that case, Tabletop might touch the piece of silverware that was in the same location, even if it was a fork or a knife.  Sometimes, French would set up the table so that there was no silverware on the other side at all, in which case it might touch a napkin or whatever was in the same spot.
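
To make that behavior concrete, here is a minimal sketch of that style of matching in Python.  This is not Tabletop's actual architecture (French built on Hofstadter's fluid-concepts ideas, which are far subtler); the item names, categories, and scoring weights below are hypothetical, made up purely to illustrate the "same object, then same kind, then same spot" preference described above.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str        # e.g. "spoon"
    category: str    # e.g. "silverware"
    position: tuple  # (row, col) slot on that side of the table

def best_match(touched: Item, other_side: list[Item]) -> Item:
    """Pick the item on the other side that best matches the touched item.

    Hypothetical scoring rule: an identical object beats a same-category
    object, which beats an object that merely occupies the same spot.
    """
    def score(candidate: Item) -> int:
        s = 0
        if candidate.name == touched.name:
            s += 4   # exact match: spoon -> spoon
        elif candidate.category == touched.category:
            s += 2   # same kind of thing: spoon -> fork
        if candidate.position == touched.position:
            s += 1   # same place on the table
        return s

    return max(other_side, key=score)

# Example: no spoon on the other side, so the fork in the same spot wins.
touched = Item("spoon", "silverware", (0, 2))
other_side = [
    Item("plate", "dish", (0, 0)),
    Item("cup", "dish", (0, 1)),
    Item("fork", "silverware", (0, 2)),
    Item("napkin", "linen", (1, 0)),
]
print(best_match(touched, other_side).name)  # -> "fork"
```

As I understand it, in Tabletop proper these tradeoffs emerge from competing pressures inside the architecture rather than from a fixed scoring function like the one above, which is exactly what makes French's analysis interesting.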

The program, and French's analysis of it, is interesting.  And while it's a microdomain problem of a sort that current A.I. approaches don't even try to solve, it does seem to shed light on some portion of the human mind.

I came across French's book recently when I moved, and it got me thinking about how symbolic approaches to intelligence might fit into current connectionist models.  It occurred to me that perhaps this gets solved the way Physics solved the question of the nature of light - by declaring it to have two natures.

You've probably studied the debate in your high school or college Physics course, but to refresh your memory, there are experiments that show light behaves as a wave, and experiments that show light behaves as a stream of particles.  As a result of this conflicting evidence, we have come to accept this strange dual nature.  I believe in the long term, our view of A.I. may turn out the same way.  The connectionists are right.  And so are the symbolicists.  

We are still quite far from fully understanding this problem, but I would love to hear your comments on this issue.  And in particular, if you are aware of dual nature approaches to A.I., please send them my way.  I would love to read up on them.