I have built them in Java, Visual Basic and C# .NET and have given them fairly interesting things to do. I once had a website using them to predict the Dow Jones average (until September 11 killed it). I now give them mental problems to do.
The most amazing thing about artificial intelligence is the first time you realize that a machine has made an independent decision that was never explicitly programmed in.
I was musing on AI, and MLPs especially, and hearkened back to the days when I used to draw out FPLAs, or Field Programmable Logic Arrays. Essentially these were silicon chips full of logic elements (ANDs, NANDs, NORs, exclusive ORs, flip-flops, and all the rest of the Boolean building blocks) all connected by internal fuses. You wired up the connections you wanted internally and blew out the rest, in much the same way that you would burn a PROM.
So, the big Eureka moment is this: I want to put multilayer perceptrons into silicon. It should be child's play to do with a combination of an FPLA and R/W PLAs. The weights and thresholds of each node would hold the intelligence of the neural net, so that the result of the training epochs would be held in memory.
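To make the idea concrete, here is a minimal sketch in Java of a single threshold node. The class name, weight values, and the AND-gate example are all my own illustration, not anything from a real chip design; the point is just that the learned weights and threshold are the *only* state the node carries, and that is exactly what the R/W cells on the chip would have to store.

```java
// A minimal sketch of one perceptron node with a hard threshold.
// Assumption: a simple step activation; real designs might use other
// activation functions, but the stored state is the same idea.
public class Perceptron {
    private final double[] weights;
    private final double threshold;

    public Perceptron(double[] weights, double threshold) {
        this.weights = weights;
        this.threshold = threshold;
    }

    // Fires (returns 1) when the weighted sum of inputs reaches the threshold.
    public int activate(double[] inputs) {
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * inputs[i];
        }
        return sum >= threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        // Hypothetical trained values that make the node behave as a
        // two-input AND gate, echoing the gates an FPLA already provides.
        Perceptron and = new Perceptron(new double[]{1.0, 1.0}, 1.5);
        System.out.println(and.activate(new double[]{1, 1})); // 1
        System.out.println(and.activate(new double[]{1, 0})); // 0
    }
}
```

After training, everything the node "knows" lives in those few numbers, which is why burning them into on-chip read/write cells would preserve the trained behavior across power cycles.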
Imagine a single chip that would learn to recognize the driver of a car and react by setting up the car's parameters for that driver. Then, when the car was sold, the chip would go back into learning mode to recognize the new driver. This would work well with fly-by-wire systems of all kinds.
This could be a big thing.