I’ve been interested in artificial intelligence (AI) for many years. Back in the mid-1980s one of my spare-time projects was playing around with computer languages such as Logo and Lisp on my Mac, and studying books about AI and “expert systems,” which were a big thing in those days. At the time, there were special workstations called “Lisp machines” that were used for AI research and to develop expert systems. I was so into this AI stuff that I nearly went to work for a Lisp machine company called Symbolics. The job offer luckily fell through because it turned out the company was in trouble (the guy who had offered me a job had suddenly lost his).
AI wasn’t ready to be a business, and Lisp machines would soon be replaced by general-purpose workstations (Sun, VAX, etc.) and by more powerful PCs and Macs. So I dodged the bullet on that one. It seems that whatever we can’t figure out how to make computers do gets called AI – but once we figure it out, it becomes just another software engineering tool. So we have limited speech recognition even in our mobile phones, and there are web pages that translate text between various languages, but we don’t consider them to be especially intelligent.
But intelligence, natural or artificial, is still fascinating stuff. How do our brains do what they do? AI researchers have been “on the verge of a breakthrough” since the fifties, but intelligence is a lot more complicated than physics (it’s more like engineering). Sure, there are impressive programs for playing chess, for simulating human behavior in sports or military strategy games, for financial trading, etc. But these specialized programs don’t have the common sense of a three-year-old human (or probably even of a fly).
I’m currently reading The Emotion Machine, a 2006 book by Marvin Minsky, one of the pioneers of AI since the fifties. It’s interesting in its concepts but frankly a bit tough in the actual reading. It’s very structured (though less so than his earlier book The Society of Mind, which I never managed to finish). He is still concerned with the problem of how a system made up of many unintelligent sub-systems (artificial, or natural such as a brain) can be “intelligent.” The most useful thing I’ve gotten from the book so far is the idea of “suitcase words” – words such as “consciousness” which seem at first glance to be well defined (we know it when we feel it!) but on closer examination are more like labels for a wide range of phenomena and thus don’t explain very much. As clearly as we may feel we can distinguish consciousness from non-consciousness (at least our own), it may not be the most essential thing about our intelligence. It may “simply” emerge when all the parts of our brains work together in “bottom up” fashion.
For a more readable introduction to the possibilities of creating artificial life forms on a bottom-up basis (which of course must include some level of AI), I suggest finding a copy of Creation: Life and How to Make It by Steve Grand. I wrote about it in 2006, and I still think his idea that you have to consider the whole organism (not just the brain) is quite sensible.
And speaking of bottom up, there’s a fascinating story on the Blue Brain Project in a recent SEED magazine article that is now on-line (it triggered this unexpectedly long post). This supercomputer project in Switzerland has managed to construct an accurate working software model of a neocortical column, the basic functional unit of the brain. I won’t try to summarize it here, and they say they are really doing neuroscience research and not “trying to build a brain.” But dude, they are building a brain! I think the future is going to be as weird as many SF writers have suggested.