A personal Story of Artificial Intelligence (2)
The Lisp language is closely linked to the first attempts to implement Artificial Intelligence programs. It is worth explaining why Lisp is a fascinating language that really fits the task, and I'm quite surprised that it barely survived and did not become one of today's mainstream programming languages.
The reason is probably an attempt by the American school to create a gigantic environment, mainly for academic usage, called Common Lisp, which proved too complex, too slow and too resource-intensive to be adopted by the industry, meaning people who wanted to develop commercial applications.
Around the same time, the French school produced Le_Lisp, a variant of Lisp that was really small and was effectively used to build mission-critical applications. Le_Lisp could have been widely adopted to build mainstream applications (the page https://en.wikipedia.org/wiki/Le_Lisp gives a timeline of the different Lisp dialects). Le_Lisp was the foundation of several products developed from 1987 by the company ILOG, and I will come back to that later.
Unfortunately, in most domains of IT, success comes from the US, and the reverse is also true…
Nevertheless, Lisp is a wonderful language per se, both for implementing other languages, particularly object-oriented ones, and for modeling Domain Specific Languages. Racket is an interesting Lisp/Scheme implementation today that is worth looking at for those who might be interested in programming. I recently (a fun challenge) developed a data transformation program with it and, because of the recursive nature of the problem, I could write a one-page version that runs 10 times faster than a Python-based implementation and 5 times faster than a C# implementation.
It is not the purpose of this article to discuss the qualities of Lisp (refer to http://www.catalysoft.com/articles/goodaboutlisp.html for such a discussion). Let's focus on just one of its characteristics that is particularly interesting for AI: there is no difference between data and programs, meaning that you can build a program from data and execute it. That is an open door to creating new behaviors!
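To give a feel for this "programs are data" idea, here is a minimal sketch, written in Python for readability rather than in Lisp: an expression is an ordinary nested list that can be built, inspected, or rewritten like any other data, and a tiny evaluator then executes it. The operators and helper names are illustrative, not from any of the systems discussed here.

```python
import operator

# Map operator symbols (data) to executable functions.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Evaluate a program represented as plain data (nested lists)."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

# The "program" below is ordinary list data until we choose to run it.
program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))  # 7
```

In Lisp this is not an analogy but the actual representation: source code is stored as exactly this kind of list structure, which is what makes generating and executing new code so natural.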
APSIS, the program I wrote in support of my Ph.D. on generating plans of action for robots, was developed in InterLisp. APSIS was a system falling into the category of production systems, more commonly called rule-based programming systems. The belief at that time was that Expert Systems (mimicking human reasoning) could be built using rule-based systems, simply by expressing knowledge as "if situation then implication" clauses and chaining them for forward reasoning (from facts to conclusions), backward reasoning (from goals back to the facts that support them), or both. Applying this to robotics was, of course, interesting, as robots assess situations through sensors and elaborate new actions by coding behaviors like: "if there is this event then produce this action".
However, having rules in a single bag of candidates to apply to a situation rapidly turns out to be neither manageable nor scalable. The approach works for small toy problems, but as the number of rules increases, the complexity of controlling the system to obtain a reasonable outcome becomes very high.
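The "bag of rules" approach above can be sketched as a minimal forward-chaining loop: facts are strings in a working memory, and every rule whose conditions hold fires and adds its conclusion, until nothing new can be derived. The facts and rules are invented for illustration, not taken from APSIS.

```python
def forward_chain(facts, rules):
    """Derive new facts by repeatedly firing all applicable rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all its conditions are in working memory.
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["obstacle ahead", "moving"], "stop"),
    (["stop"], "analyze situation"),
]
print(sorted(forward_chain(["moving", "obstacle ahead"], rules)))
```

Note that every rule is a candidate on every pass: with a handful of rules this is fine, but the undifferentiated matching loop is exactly what stops scaling as the rule base grows.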
From 1982 to 1985, APSIS was an attempt to cope with this problem by partitioning the rules into subsets dedicated to specific domains and by building a higher level of control (itself implemented with rules) to manage the relevancy of those sets of rules according to the context. Later, at ILOG, the same idea was implemented in SMECI and subsequently inspired ILOG Rules, now a commercial product from IBM renamed IBM Business Rules.
I was also surprised to learn recently, in a discussion with people programming autonomous robots, that a similar approach is used. The robot actually moves from context to context and applies a particular behavior in each one. As an example, the robot may be in the "move toward direction" context, where a set of behaviors applies until some event provokes a switch to the "stop and analyze situation" context, and so on.
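The partitioning idea can be sketched as follows: rules are grouped into per-context rule sets, and a simple control loop decides which set is active, so each decision only considers a small subset of the total rules. The contexts and events below are invented for illustration, loosely following the robot example.

```python
# One rule set per context: event -> next context.
# Only the active context's rules are ever consulted.
RULESETS = {
    "move toward direction": {"obstacle detected": "stop and analyze situation"},
    "stop and analyze situation": {"path clear": "move toward direction"},
}

def run(events, context="move toward direction"):
    """Control level: switch contexts as events arrive, recording the path."""
    trace = [context]
    for event in events:
        # Lookup stays small regardless of how many rules exist overall.
        context = RULESETS[context].get(event, context)
        trace.append(context)
    return trace

print(run(["obstacle detected", "path clear"]))
```

The point of the design is that adding a new context (with its own rule set) does not make any existing context slower or harder to reason about, which is the scalability property the flat bag of rules lacks.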
This architecture is actually quite similar to the way the human mind works: one level comprises a number of behavioral patterns to execute specific tasks, and an upper level monitors which task has to be done at any given time. At the lower levels, we find tasks that are not even conscious for humans, like sensing and pattern recognition.
In 1982, in the same research lab, other people were working on Computer Vision and Natural Language Processing using a variety of AI techniques. Being exposed to their work created a very stimulating environment.
The period between 1982 and 1985 was a time of big changes in the computing world. We were considering new machines on which Le_Lisp was able to run thanks to its low footprint, and our implementation platforms moved from the mainframe to lighter machines. Here are a few of those we were among the first to have access to:
The IBM PC: August 12, 1981
The SM90 Unix machine from Inria: 1983
The Apple Cube (Macintosh): January 24, 1984
In 1984, I moved to the research center of the CGE Group (a conglomerate grouping many industrial activities, such as Alcatel; think of it as the French equivalent of Siemens or GE) to apply AI techniques to real industrial problems (to be continued…).