Expert Systems

2015
Cindy Mason. 2015. “Engineering Kindness: Building a Machine with Compassionate Intelligence.” International Journal of Synthetic Emotions.
The author provides first steps toward building a software agent/robot with compassionate intelligence. She approaches this goal with an example software agent, EM-2, and gives a generalized software requirements guide for anyone wishing to pursue other means of building compassionate intelligence into an AI system. The purpose of EM-2 is not to build an agent whose state of mind mimics empathy or consciousness, but rather to create practical AI systems whose knowledge and reasoning methods take positive account of the feelings and states of self and others during decision making, action, and problem solving. To program EM-2 the author re-purposes code and architectural ideas from collaborative multi-agent systems and affective common sense reasoning, together with concepts and philosophies from the human arts and sciences relating to compassion. EM-2's predicates and agent architecture are based on a meta-cognitive mental practice, Vipassana or mindfulness, that has been used to cultivate compassion for others even among India's worst prisoners. She describes, and presents code snippets for, common-sense-based affective inference and the I-TMS, an Irrational Truth Maintenance System that maintains consistency in agent memory as feelings change over time, and she gives a machine-theoretic description of the consistency issues that arise when combining affect and logic. The author closes by summarizing the growing body of biological, cognitive, and immune-system discoveries about compassion, and their consequences for programmers working on human-level AI and hybrid human-robot systems.
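The paper's own I-TMS code is not reproduced here, but the core idea the abstract describes, beliefs whose justifications include the agent's affective state so that a mood change forces re-labeling much as premise retraction does in a classical TMS, can be sketched. The Python toy below is our illustration only; the class names and the mood-tag convention are invented, not Mason's.

```python
# Hypothetical sketch (not Mason's actual I-TMS): a belief whose support
# includes the agent's current affective state, so a mood change triggers
# re-labeling, analogous to premise retraction in a classical TMS.

class AffectiveBelief:
    def __init__(self, content, supports):
        self.content = content      # proposition held by the agent
        self.supports = supports    # facts and feelings justifying it

class MoodAwareStore:
    """Toy store that retracts beliefs whose affective support lapses."""
    def __init__(self):
        self.mood = "calm"
        self.beliefs = []

    def assert_belief(self, content, supports):
        self.beliefs.append(AffectiveBelief(content, supports))

    def set_mood(self, mood):
        # Re-label: keep only beliefs whose supports survive the mood shift.
        self.mood = mood
        self.beliefs = [b for b in self.beliefs
                        if ("mood:" + mood) in b.supports
                        or not any(s.startswith("mood:") for s in b.supports)]

store = MoodAwareStore()
store.assert_belief("colleague meant well", ["mood:calm", "shared history"])
store.set_mood("irritated")                 # affect-dependent belief retracted
print([b.content for b in store.beliefs])   # -> []
```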
2011
Judea Pearl. 2011. “The algorithmization of counterfactuals.” Annals of Mathematics and Artificial Intelligence.
Recent advances in causal reasoning have given rise to a computational model that emulates the process by which humans generate, evaluate, and distinguish counterfactual sentences. Though compatible with the “possible worlds” account, this model enjoys the advantages of representational economy, algorithmic simplicity, and conceptual clarity. Using this model, the paper demonstrates the processing of counterfactual sentences on a classical example due to Ernest Adams. It then gives a panoramic view of several applications where counterfactual reasoning has benefited problem areas in the empirical sciences.
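The computational model the abstract refers to evaluates a counterfactual in three steps: abduction (infer the exogenous variables from the observed world), action (override the mechanism of the counterfactual antecedent), and prediction (recompute the outcome with the exogenous variables held fixed). A minimal sketch on a toy structural model of our own invention:

```python
# Illustrative three-step counterfactual computation (abduction, action,
# prediction) in a toy structural causal model; the variables and the
# structural equation are ours, not from the paper.
# Model: U is exogenous noise, X is a binary treatment, Y = X XOR U.

def model(x, u):
    return x ^ u  # structural equation for Y

# Observed world: X = 1, Y = 1.
x_obs, y_obs = 1, 1

# 1. Abduction: infer the exogenous U consistent with the observation.
candidates = [u for u in (0, 1) if model(x_obs, u) == y_obs]
u_inferred = candidates[0]            # here U must be 0

# 2. Action: replace X's mechanism with the counterfactual value X = 0.
x_cf = 0

# 3. Prediction: recompute Y under the modified model, keeping U fixed.
y_cf = model(x_cf, u_inferred)

print(f"Had X been {x_cf} (instead of {x_obs}), Y would have been {y_cf}.")
```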
1994
Marco Ramoni, Alberto Riva, and Vimla L. Patel. 1994. “Probabilistic Reasoning under Ignorance.” Cognitive Science Society.
The representation of ignorance is a long-standing challenge for researchers in probability and decision theory. During the past decade, Artificial Intelligence researchers have developed a class of reasoning systems, called Truth Maintenance Systems, which are able to reason on the basis of incomplete information. In this paper we describe a new method for dealing with partially specified probabilistic models, extending a logic-based truth maintenance method from Boolean truth-values to probability intervals. We then illustrate how this method can be used to represent Bayesian Belief Networks, one of the best-known formalisms for reasoning under uncertainty, producing a new class of Bayesian Belief Networks, called Ignorant Belief Networks, able to reason on the basis of partially specified prior and conditional probabilities. Finally, we discuss how this new method relates to theoretical intuitions and empirical findings in decision theory and cognitive science.
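The key move, replacing Boolean truth-values with probability intervals, can be illustrated with elementary interval arithmetic. The sketch below uses the standard Fréchet bounds for conjunction and disjunction under unknown dependence; the class is our illustration, not the paper's Ignorant Belief Network machinery.

```python
# A minimal sketch of reasoning with probability intervals rather than
# point values: total ignorance is the vacuous interval [0, 1], and
# partial knowledge narrows it. Frechet bounds; names are ours.

class Interval:
    def __init__(self, lo, hi):
        assert 0.0 <= lo <= hi <= 1.0
        self.lo, self.hi = lo, hi

    def neg(self):
        # P(not A) = 1 - P(A), reversing the bounds.
        return Interval(1 - self.hi, 1 - self.lo)

    def conj(self, other):
        # Frechet bounds for P(A and B) with unknown dependence.
        return Interval(max(0.0, self.lo + other.lo - 1),
                        min(self.hi, other.hi))

    def disj(self, other):
        # Frechet bounds for P(A or B) with unknown dependence.
        return Interval(max(self.lo, other.lo),
                        min(1.0, self.hi + other.hi))

    def __repr__(self):
        return f"[{self.lo:.2f}, {self.hi:.2f}]"

a = Interval(0.7, 0.9)   # P(A) known to lie in [0.7, 0.9]
b = Interval(0.0, 1.0)   # nothing known about B
print(a.conj(b), a.disj(b), a.neg())   # [0.00, 0.90] [0.70, 1.00] [0.10, 0.30]
```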
1993
Kenneth D. Forbus and Johan de Kleer. 1993. Building Problem Solvers. MIT Press.
Marco Ramoni and Alberto Riva. 1993. “Belief Maintenance with Probabilistic Logic.” AAAI.
Belief maintenance systems are natural extensions of truth maintenance systems that use probabilities rather than Boolean truth-values. This paper introduces a general method for belief maintenance, based on (the propositional fragment of) probabilistic logic, that extends the Boolean Constraint Propagation method used by logic-based truth maintenance systems. From the concept of probabilistic entailment, we derive a set of constraints on the (probabilistic) truth-values of propositions and prove their soundness. These constraints are complete with respect to a well-defined set of clauses, and their partial incompleteness is compensated by a gain in computational efficiency.
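One concrete instance of the constraint-propagation idea is a probabilistic modus ponens: lower bounds on a clause and on its antecedent entail a lower bound on the consequent. The function below is a hedged sketch (the name is ours, not the paper's); note how it collapses to classical unit propagation when all bounds are 1.

```python
# Sketch of generalizing Boolean Constraint Propagation to probability
# bounds via probabilistic modus ponens. Given P(not A or B) >= p_rule_lo
# and P(A) >= p_a_lo, the tightest derivable lower bound on B is
# p_rule_lo + p_a_lo - 1 (clipped at 0), since
# P(B) >= P(A and B) = P(A) - P(A and not B) >= p_a_lo - (1 - p_rule_lo).

def propagate_modus_ponens(p_rule_lo, p_a_lo):
    return max(0.0, p_rule_lo + p_a_lo - 1)

# Certain rule and certain antecedent: classical unit propagation (B true).
print(propagate_modus_ponens(1.0, 1.0))   # 1.0
# Under uncertainty, belief in B weakens accordingly.
print(propagate_modus_ponens(0.9, 0.8))   # 0.7 (up to floating-point rounding)
```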
1986
Brian Falkenhainer. 1986. “Towards a General-Purpose Belief Maintenance System.” UAI.
There currently exists a gap between the theories proposed by probability and uncertainty research and the needs of Artificial Intelligence research. These theories primarily address the needs of expert systems, using knowledge structures which must be pre-compiled and remain static in structure during runtime. Many AI systems require the ability to dynamically add and remove parts of the current knowledge structure (e.g., in order to examine what the world would be like under different causal theories). This requires more flexibility than existing uncertainty systems display. In addition, many AI researchers are only interested in using "probabilities" as a means of obtaining an ordering, rather than attempting to derive an accurate probabilistic account of a situation. This indicates the need for systems which stress ease of use and do not require extensive probability information when one cannot (or does not wish to) provide such information. This paper attempts to help reconcile the gap between approaches to uncertainty and the needs of many AI systems by examining the control issues which arise, independent of a particular uncertainty calculus, when one tries to satisfy these needs.

Truth Maintenance Systems have been used extensively in problem-solving tasks to help organize a set of facts and detect inconsistencies in the believed state of the world. These systems maintain a set of true/false propositions and their associated dependencies. However, situations often arise in which we are unsure of certain facts or in which the conclusions we can draw from available information are somewhat uncertain. The non-monotonic TMS [12] was an attempt at reasoning when all the facts are not known, but it fails to take into account degrees of belief and how available evidence can combine to strengthen a particular belief. This paper addresses the problem of probabilistic reasoning as it applies to Truth Maintenance Systems. It describes a Belief Maintenance System that manages a current set of beliefs in much the same way that a TMS manages a set of true/false propositions. If the system knows that belief in fact1 is dependent in some way upon belief in fact2, then it automatically modifies its belief in fact1 when new information causes a change in belief of fact2. It models the behavior of a TMS, replacing its 3-valued logic (true, false, unknown) with an infinite-valued logic, in such a way as to reduce to a standard TMS if all statements are given in absolute true/false terms. Belief Maintenance Systems can, therefore, be thought of as a generalization of Truth Maintenance Systems, whose possible reasoning tasks are a superset of those for a TMS.
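A toy rendering of the reduction the abstract describes, a TMS-like node carrying a degree of belief that collapses to classical behavior at the extremes, might look as follows. The product combination rule is a placeholder of ours; as the paper stresses, the propagation machinery is meant to be independent of the particular uncertainty calculus.

```python
# Toy Belief Maintenance sketch: like a TMS node, but labeled with a degree
# of belief in [0, 1] instead of true/false/unknown. Not Falkenhainer's
# calculus; the product rule below merely stands in for one.

class Node:
    def __init__(self, name, belief=0.0):
        self.name, self.belief = name, belief
        self.dependents = []          # (target, strength) rules: self -> target

    def justify(self, target, strength):
        self.dependents.append((target, strength))
        self._propagate()

    def set_belief(self, value):
        self.belief = value
        self._propagate()             # dependency-directed update, as in a TMS

    def _propagate(self):
        for target, strength in self.dependents:
            target.set_belief(self.belief * strength)

rain = Node("rain")
wet = Node("wet-grass")
rain.justify(wet, strength=0.9)
rain.set_belief(1.0)   # absolute belief: behaves like a standard TMS
print(wet.belief)      # 0.9
rain.set_belief(0.0)   # retraction drives the dependent belief back to 0
print(wet.belief)      # 0.0
```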
1980
Randall Davis. 1980. “Meta-Rules: Reasoning About Control.” Artificial Intelligence.
How can we ensure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem-solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful for a certain variety of problem, but they all share a common weakness: they become ineffective in the face of a sufficiently large knowledge base. How, then, can we make it possible for a system to continue to function in the face of a very large number of plausibly useful chunks of knowledge? In response to this question we propose a framework for viewing issues of knowledge indexing and retrieval, one that includes what appears to be a useful perspective on the concept of a strategy. We view strategies as a means of controlling invocation in situations where traditional selection mechanisms become ineffective. We examine ways to effect such control and describe meta-rules, a means of specifying strategies which offers a number of advantages. We consider at some length how and when it is useful to reason about control, and explore the advantages meta-rules offer for doing this.
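A skeletal illustration of the meta-rule idea: object-level rules propose actions, while meta-rules prune and order the candidate set before one fires. All rule content below is invented for the sketch.

```python
# Meta-level control sketch: meta-rules reason about which object-level
# rules to invoke, not about the domain itself. Rule content is invented.

rules = [
    {"name": "broad-spectrum", "class": "general",
     "applies": lambda facts: "infection" in facts},
    {"name": "targeted-drug", "class": "specific",
     "applies": lambda facts: "organism-known" in facts},
]

def meta_prefer_specific(candidates):
    # Meta-rule: when any specific rule is applicable, suppress general ones.
    specific = [r for r in candidates if r["class"] == "specific"]
    return specific if specific else candidates

def select_rule(facts, meta_rules):
    candidates = [r for r in rules if r["applies"](facts)]
    for meta in meta_rules:            # meta-rules filter/order candidates
        candidates = meta(candidates)
    return candidates[0]["name"] if candidates else None

print(select_rule({"infection"}, [meta_prefer_specific]))
# -> broad-spectrum (no specific rule applies, so the general one fires)
print(select_rule({"infection", "organism-known"}, [meta_prefer_specific]))
# -> targeted-drug (the meta-rule suppressed the general rule)
```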
1979
Jon Doyle. 1979. “A Truth Maintenance System.” MIT AI Memo 521.
To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their beliefs when discoveries contradict these assumptions. The Truth Maintenance System (TMS) is a problem-solver subsystem for performing these functions by recording and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of program actions and in guiding the course of action of a problem solver. This paper describes (1) the representations and structure of the TMS, (2) the mechanisms used to revise the current set of beliefs, (3) how dependency-directed backtracking changes the current set of assumptions, (4) techniques for summarizing explanations of beliefs, (5) how to organize problem solvers into "dialectically arguing" modules, (6) how to revise models of the belief systems of others, and (7) methods for embedding control structures in patterns of assumptions. We stress the need for problem solvers to choose between alternative systems of beliefs, and outline a mechanism by which a problem solver can employ rules guiding choices of what to believe, what to want, and what to do.
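The core labeling behavior of such a system, justifications with IN-lists and OUT-lists, and assumptions that are retracted when contradicting evidence arrives, can be sketched compactly. This follows the generic justification-based TMS pattern, not Doyle's original implementation; identifiers are ours.

```python
# Compact justification-based TMS sketch: nodes are labeled IN (believed)
# or OUT, and a naive fixed-point relabeling (adequate for this acyclic
# example) recomputes the labels after premises change.

class JTMS:
    def __init__(self):
        self.premises = set()
        self.justifications = []   # (consequent, in_list, out_list)

    def add_premise(self, node):
        self.premises.add(node)

    def justify(self, consequent, in_list=(), out_list=()):
        self.justifications.append((consequent, tuple(in_list), tuple(out_list)))

    def labels(self):
        """A node is IN if it is a premise or has a justification whose
        in_list nodes are all IN and whose out_list nodes are all OUT."""
        labeled_in = set(self.premises)
        changed = True
        while changed:
            changed = False
            for node, in_list, out_list in self.justifications:
                if (node not in labeled_in
                        and all(n in labeled_in for n in in_list)
                        and all(n not in labeled_in for n in out_list)):
                    labeled_in.add(node)
                    changed = True
        return labeled_in

tms = JTMS()
tms.justify("fly", in_list=["bird"], out_list=["penguin"])  # default rule
tms.add_premise("bird")
print("fly" in tms.labels())   # True: assumed, absent evidence of penguin
tms.add_premise("penguin")
print("fly" in tms.labels())   # False: new evidence defeats the assumption
```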
1978
Richard W. Weyhrauch. 1978. Prolegomena to a theory of formal reasoning. Stanford.
This is an informal description of my ideas about using formal logic as a tool for computer-based reasoning systems. The theoretical ideas are illustrated by the features of FOL. All of the examples presented have actually been run using the FOL system.