Artificial intelligence systems are performing increasingly complex activities; for example, they do most of the driving in self-driving cars. The control logic for driving is not created by conventional programming techniques, because writing programs that correctly handle every situation would be a daunting if not impossible task. Instead, machine learning is the preferred method for acquiring the detailed behavioral rules of driving. Systems built by machine learning offer tremendous benefits, but their effectiveness is limited by the computer’s inability to explain its decisions and actions to people. Under what circumstances can we trust that computers will do the right thing? Machine-learned systems generally cannot tell us why they do what they do, and they do not know whether they have learned everything they need to do their jobs properly. COGLE is a new project at PARC, done in collaboration with Carnegie Mellon University, West Point, the University of Michigan, and the University of Edinburgh. It is part of a government-sponsored research program on “Explainable Artificial Intelligence.”
CAP is an early-stage Xerox-funded project to explore the potential and technology of human plus computer teams. Much work in the future will be automated. In some cases, autonomous systems will replace human labor in today’s jobs; in others, they will take on new jobs that are not practical to do today. We believe, however, that in many cases the machines will become part of human plus computer teams. Viewed this way, the future is not just about automation. It may be more about helping us with what we do and making us better in many ways. The creation of human plus computer teams may redefine much of how we work and play.
This is a gallery of past projects. They range from sensemaking and digital rights management to knowledge systems and the design of languages for programming and knowledge representation.