Well, after an incredibly long review process, one of my papers has been accepted for publication in the “Journal of Experimental and Theoretical Artificial Intelligence.” The paper is basically an experimental investigation into some of the cognitive philosophies underlying artificial intelligence. At present there is a whole host of thought experiments considering various fundamental questions: To what extent are our concepts and beliefs innate? How do aspects of the environment acquire meaning? Can human reasoning be captured by mere symbol shuffling?

Instead of adding my own speculations, I decided to put these questions to the test by building agents whose cognitive architectures were based on one of three competing approaches: symbol grounding, symbol attachment, and enactivism. I then compared their performance on various tasks in a complex environment. The results of this little competition certainly don’t resolve these deep philosophical issues, but I like to think they highlight a different way into this thorny debate.