Writing and communication provide a huge evolutionary advantage to people, freeing us from needing to discover everything by ourselves. What is the role of knowledge representations for computers when they learn rather than being programmed?
Chapter 1 of Introduction to Knowledge Systems distinguishes two kinds of representations (symbol systems) for computer systems: symbols for computation and symbols for communication. Symbols for computation are programs, information, and data structures designed for particular kinds of operations. For example, binary representations of numbers support arithmetic operations. These representations are internal to computers.
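The idea that an internal encoding is chosen to make particular operations cheap can be sketched briefly. This is a minimal, illustrative example (the helper name `to_bits` is invented here): a fixed-width two's-complement bit string is a symbol for computation, and addition operates directly on that encoding.

```python
def to_bits(n, width=8):
    """Render an integer as a fixed-width two's-complement bit string."""
    return format(n & (2**width - 1), f"0{width}b")

a, b = 5, 3
print(to_bits(a))      # 00000101
print(to_bits(b))      # 00000011
print(to_bits(a + b))  # 00001000 -- arithmetic works on the internal encoding
```

The bit string is meaningful to the machine's adder but opaque to most people, which is exactly the sense in which such symbols are internal rather than communicative.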
In contrast, symbols for communication are used externally to computation. For example, in languages for knowledge exchange, the meanings of the communication language must be understood by both parties, whether they are computers or people. Today the “symbols” of communication with computers routinely include all sorts of symbol structures, from graphs and drawings to videos and animations. This includes the whole area of human-computer interaction and a repertoire of devices, gestures, and voice. Such work, including the design of visual analytics and effective interfaces for interaction, is part of a principled design practice supporting external cognition. Visual analytics and interaction design are part of the work in my research group.
The 2012 French film L’artiste et son modèle (The artist and the model) has a discussion about the famous sketch on the left by Rembrandt. In the film, a sculptor teaches his model about seeing and paying attention. What meaning does the sketch convey? Can we see that it depicts a baby’s first steps? Is that the baby’s sister teaching him to walk? Notice where she is looking and how she is poised. How can we tell that she is being very careful? Did the mother and sister wait for the father to come home so that he could see this important event and call the baby to him? Has the mother in her coarse clothing lived through such events before? How do we know that the pail carried by the passing woman is heavy? What does she represent?
My interest in representation was inspired by Dan Bobrow’s paper Dimensions of Representation from his book with Allan Collins. Before that, I thought that symbolic representations were simply programs and data structures. My first paper on representation was An Examination of a Frame-Structured Representation System, published in the 1979 IJCAI proceedings. UNITS supported reasoning in MOLGEN and was inspired by Bobrow and Winograd’s KRL system. It was in the same category as KL-ONE and other frame languages of the time. Later it became the basis of the KEE knowledge representation system sold by IntelliGenetics.
What was the relationship between frame languages and object-oriented languages? These lines of language development were reported separately in programming language conferences and AI conferences. Dan Bobrow, Sanjay Mittal, and I teamed up at PARC to develop Loops, an object-oriented programming extension to Lisp that we used as a knowledge representation foundation for our projects. In Loops we explored boundaries between several competing object-oriented languages. We wrote an overview paper, Object-Oriented Programming: Themes and Variations, for AI Magazine comparing object languages and also an article in Science on Perspectives on Artificial Intelligence Programming.
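One boundary frame languages and object-oriented languages shared was inheritance: slot values looked up through a hierarchy of more general frames. A minimal sketch of that shared mechanism, with frame names and slots invented here for illustration (this is not the actual UNITS or Loops implementation):

```python
class Frame:
    """A toy frame: named slots plus inheritance from a parent frame."""
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = dict(slots)

    def get(self, slot):
        """Look up a slot locally, else inherit from the parent frame."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

# Illustrative hierarchy (names invented):
organism = Frame("Organism", replicates=True)
bacterium = Frame("Bacterium", parent=organism, shape="rod")

print(bacterium.get("shape"))       # rod  (local slot)
print(bacterium.get("replicates"))  # True (inherited default)
```

In frame languages this default-and-inheritance machinery was framed as knowledge representation; in object-oriented languages the same machinery was framed as code organization, which is part of what made the comparison interesting.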
Bobrow, Mittal, and I developed an instructional course for Loops, including a video game called Truckin’. The Truckin’ game was played by programs that students developed in the course. Our 1983 paper, Knowledge Programming in Loops, was featured on the cover of AI Magazine.
Each student programmed an automated truck that had to navigate along a road buying and selling goods. It had to slow down on rough roads, avoid bandits, make as much money as possible, and try to end up at Alice’s Restaurant. The highlight of each course was the knowledge competition, in which all of the programs competed.
Looking over many courses and the computational ecologies of Truckin’ fleets in the competitions gave us concrete examples of automatic agents competing in a simulated world. The rules used to drive some of the trucks were tangible expressions of memes that could ensure or doom a truck’s success in the game.
As we built knowledge systems for various purposes, especially for VLSI design, our thinking about representation shifted to the knowledge level. An example of this was a paper we wrote for a VLSI design conference at MIT, The Partitioning of Concerns in Digital System Design. This paper proposes multiple layers of representation for making different kinds of design trade-offs in designing chips. Lynn Conway and I later published a more general and formal account in Towards the Principled Engineering of Knowledge in AI Magazine. This paper joined together several threads of our interests — extending the engineering discipline of representation from computing over data structures to reasoning about meaning-carrying representations of knowledge in problem solving. It envisioned clans of experts populating an economy / ecology of knowledge roles.
This early work on representation languages was done before machine learning became an important part of building intelligent systems. Do we need to craft representations in systems that learn in unsupervised or supervised modes? How can such systems explain what they have learned?
Big data problems often require fusing databases from different sources, employing ETL (Extract, Transform, Load) systems to align the meanings of tables. Both ETL and machine learning require representations for computation and for communication.
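The “align meanings” step of ETL can be sketched as a mapping from source columns to a canonical schema, with unit conversions applied along the way. The column names, mapping, and conversions below are invented for illustration; real ETL pipelines carry far more metadata about what each field means.

```python
# Source column -> (canonical column, conversion); invented for illustration.
SCHEMA_MAP = {
    "cust_nm":   ("customer_name", lambda v: v.strip().title()),
    "amt_cents": ("amount_usd",    lambda v: v / 100),
}

def transform(row):
    """Align one extracted row to the canonical schema before loading."""
    out = {}
    for src, (dst, convert) in SCHEMA_MAP.items():
        if src in row:
            out[dst] = convert(row[src])
    return out

print(transform({"cust_nm": "  ada lovelace ", "amt_cents": 1999}))
# {'customer_name': 'Ada Lovelace', 'amount_usd': 19.99}
```

The mapping itself is a symbol for computation, but building it forces the two data sources to agree on meanings, which is a problem of symbols for communication.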
Science fiction movies sometimes depict artificially intelligent robots that are raised and not built. How could machine learning systems learn to explain what they have discovered? How could they collaborate with people and other systems in creating knowledge and jointly solving problems?