Wednesday 11 May 2011

Unthinking Machines - MIT identifies where AI has gone wrong

A few days ago MIT's Technology Review, under the title Unthinking Machines, reported a panel discussion in which some of the founders and leading lights in the fields of artificial intelligence and cognitive science gave a harsh assessment of the lack of progress in AI over the last few decades. The panellists called for "a return to the style of research that marked the early years of the field, one driven more by curiosity rather than narrow applications". Marvin Minsky said "The answer is that there was a lot of progress in the 1960s and 1970s. Then something went wrong." Barbara Partee said "Really knowing semantics is a prerequisite for anything to be called intelligence", while Noam Chomsky derided researchers in machine learning who use purely statistical methods to produce behaviour that mimics something in the world, but who don't try to understand the meaning of that behaviour. Sydney Brenner agreed that researchers in both artificial intelligence and neuroscience might be getting overwhelmed with surface details rather than seeking the bigger questions underneath.

I find the final observation interesting because the danger of being overwhelmed with surface detail was the reason why the CODIL language started in 1966. I was faced with a major sales accounting system (250,000 customers, 5,000 products, etc.) and many problems, because the existing system worked in a way that the sales staff could not understand, often did not do exactly what was intended, and was slow to change to meet novel sales opportunities and threats.

I could easily have got bogged down with the complexities of individual customers and products – but instead decided to stand back and look at the "wood". What was really needed was a system that allowed the marketing division to be in control of selling any goods to any customers. My proposed solution was a symmetrical invoicing language where the sales staff could tell the computer what they wanted it to do, and where the computer could use the same language to tell the sales staff what it was doing for them. The language needed to be simple to learn, flexible, and efficient to implement, and not overwhelm the sales staff with irrelevant detail. Having previously worked with very complex manual information processing systems, and being new to computing, I never thought that anyone would think that this was difficult – so I went ahead and came up with a language which, in retrospect, was a model of how the sales staff thought about the way sales contracts worked.

Only a couple of months later I changed job and found myself as the ideas man in a small team assessing the future large commercial system market. I found that there were many large computer users, in business, industry and universities, with many different applications which had similar problems. Rather than become overwhelmed by the detail of all these additional information processing tasks, I decided to back off even further to get a wider view of the whole forest.

The result was CODIL – a Context Dependent Information Language designed for a white box processor, allowing human operators and a specially designed "decision making unit" to work in symbiosis in areas where the inherent need to predefine an algorithm for the application limited the conventional stored program computer approach. The interesting thing was that the further I retreated from detailed applications the clearer the way forward became – and perhaps the closer I came to modelling human thought processes.

The starting point was semantics. The basic building block was the "item" and in an "ideal system" every item was self-defining (a vital feature of any white box system) and was either a set, or a partition of a set. The definition was fully recursive (sets could be nested in any way, to any depth the user wanted) and any item was meaningful as passive data, as a conditional test, or as a "command". The meaning of an item depended on the context of the other items linked to it. Processing centred on the Facts – a model of human short-term memory – and all addressing was associative (i.e. by the names of sets). The basic processor is a very simple and highly recursive algorithm, which could almost certainly be remapped as a network of even simpler (single cell?) processors, and any intelligence shown by the working system results from the way the user uses CODIL to build the knowledge base.
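For readers who think in code, here is a minimal sketch, in Python, of how the ideas in the previous paragraph might be modelled: self-describing items that name a set (optionally partitioned by a value), a Facts store addressed associatively by set name, and a small recursive evaluator in which the same item can act as passive data, a conditional test, or a command. The names (Item, Facts, evaluate) and the matching rule are my own illustration, not CODIL's actual syntax or implementation.

from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Item:
    set_name: str                 # e.g. "CUSTOMER" - the set the item belongs to
    value: Optional[str] = None   # e.g. "SMITH" - a partition of that set

class Facts:
    """A crude stand-in for the Facts (short-term memory): items are
    addressed associatively, by set name, rather than by location."""
    def __init__(self) -> None:
        self._items = {}

    def assert_item(self, item: Item) -> None:
        # The item used as passive data: it simply becomes part of the context.
        self._items[item.set_name] = item

    def test(self, item: Item) -> bool:
        # The same item used as a conditional test against the current context.
        current = self._items.get(item.set_name)
        return current is not None and (item.value is None or current.value == item.value)

def evaluate(statement: List[Item], facts: Facts) -> bool:
    """Work through a statement recursively: leading items act as tests,
    and the final item acts as a command if every test succeeds."""
    if not statement:
        return True
    head, rest = statement[0], statement[1:]
    if rest:
        return facts.test(head) and evaluate(rest, facts)
    facts.assert_item(head)
    return True

# Roughly: "CUSTOMER = SMITH, PRODUCT = WIDGET, DISCOUNT = 10%"
facts = Facts()
facts.assert_item(Item("CUSTOMER", "SMITH"))
facts.assert_item(Item("PRODUCT", "WIDGET"))
evaluate([Item("CUSTOMER", "SMITH"), Item("PRODUCT", "WIDGET"), Item("DISCOUNT", "10%")], facts)
print(facts.test(Item("DISCOUNT", "10%")))   # prints True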

The basic approach was shown to work on a wide range of test applications, including many of the artificial intelligence applications being considered by others in the 1970s. Unfortunately the research was axed over 20 years ago because it was too unconventional to get funding through the conventional computer science peer review routes. (See elsewhere on this blog for details of how CODIL worked, the applications tested, publications, and history.)

In the light of the MIT comments quoted above, perhaps it is time for the research to be restarted, ideally by someone much younger than me. It might be best to re-examine the approach from the starting point of linguistics and the evolution of language and intelligence. My own experience of trying to get blue sky research funding while working in a Computer Science department which had no prior record of successful research is too painful to be repeated.
