Papers

2017
Gaëtan Hadjeres, François Pachet, and Frank Nielsen. 2017. “DeepBach: a Steerable Model for Bach Chorales Generation.” PMLR.
This paper introduces DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces. We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach. DeepBach's strength comes from the use of pseudo-Gibbs sampling coupled with an adapted representation of musical data. This is in contrast with many automatic music composition approaches which tend to compose music sequentially. Our model is also steerable in the sense that a user can constrain the generation by imposing positional constraints such as notes, rhythms or cadences in the generated score. We also provide a plugin on top of the MuseScore music editor making the interaction with DeepBach easy to use.
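To make the sampling idea concrete, here is a minimal, non-neural sketch of pseudo-Gibbs generation over a four-voice grid with user-pinned positions held fixed. The pitch range, grid size, and the uniform conditional_sample placeholder are illustrative stand-ins for DeepBach's learned conditional distributions, not the paper's model.

import random

VOICES, STEPS = 4, 32          # four-voice chorale grid with 32 time steps
PITCHES = list(range(60, 73))  # toy MIDI pitch range; a stand-in for DeepBach's note vocabulary

def conditional_sample(score, voice, step):
    # Placeholder for the learned distribution over the note at (voice, step)
    # given the rest of the score; here it is simply a uniform draw.
    return random.choice(PITCHES)

def pseudo_gibbs(constraints, sweeps=500):
    # Start from a random score, then repeatedly resample one unconstrained cell at a time.
    score = [[random.choice(PITCHES) for _ in range(STEPS)] for _ in range(VOICES)]
    for (v, t), pitch in constraints.items():     # user-imposed positional constraints stay fixed
        score[v][t] = pitch
    for _ in range(sweeps):
        v, t = random.randrange(VOICES), random.randrange(STEPS)
        if (v, t) not in constraints:
            score[v][t] = conditional_sample(score, v, t)
    return score

# Example: pin the soprano's first and last notes and let sampling fill in the rest.
chorale = pseudo_gibbs({(0, 0): 67, (0, STEPS - 1): 67})
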
Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan. 2017. “Modeling Agents with Probabilistic Programs.”

This book describes and implements models of rational agents for (PO)MDPs and Reinforcement Learning. One motivation is to create richer models of human planning, which capture human biases and bounded rationality.

Agents are implemented as differentiable functional programs in a probabilistic programming language based on Javascript. Agents plan by recursively simulating their future selves or by simulating their opponents in multi-agent games. Our agents and environments run directly in the browser and are easy to modify and extend.

The book assumes basic programming experience but is otherwise self-contained. It includes short introductions to “planning as inference”, MDPs, POMDPs, inverse reinforcement learning, hyperbolic discounting, myopic planning, and multi-agent planning.
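
The book's agents are written in WebPPL; as a rough plain-Python analogue (not the book's code), the sketch below shows an agent that chooses actions in a tiny deterministic MDP by recursively simulating its own future choices. The line-world environment and horizon are made up for illustration.

from functools import lru_cache

# A tiny deterministic line world: states 0..4, actions step left or right,
# and a reward of 1 for being in state 4.
ACTIONS = (-1, +1)

def transition(state, action):
    return max(0, min(4, state + action))

def reward(state):
    return 1.0 if state == 4 else 0.0

@lru_cache(maxsize=None)
def value(state, steps_left):
    # The agent evaluates a state by simulating its own future (optimal) choices.
    if steps_left == 0:
        return 0.0
    return max(reward(transition(state, a)) + value(transition(state, a), steps_left - 1)
               for a in ACTIONS)

def act(state, steps_left):
    return max(ACTIONS,
               key=lambda a: reward(transition(state, a)) + value(transition(state, a), steps_left - 1))

print(act(0, steps_left=6))   # +1: head right, toward the rewarding state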

Matko Bosnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel. 2017. “Programming with a Differentiable Forth Interpreter.” ICML.
Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. In this paper, we consider the case of prior procedural knowledge for neural networks, such as knowing how a program should traverse a sequence, but not what local actions should be performed at each step. To this end, we present an end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. When connected to outputs of an LSTM and trained jointly, our interpreter achieves state-of-the-art accuracy for end-to-end reasoning about quantities expressed in natural language stories.      
2016
Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, and Pushmeet Kohli. 11/6/2016. “Neuro-Symbolic Program Synthesis.” arXiv:1611.01855.
Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.
Alex Graves et al. 10/12/2016. “Hybrid computing using a neural network with dynamic external memory.” Nature.
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.
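As a rough illustration of the external-memory primitive only (there is no learned controller, usage-based allocation, or temporal linkage here), the NumPy sketch below implements content-based addressing with an erase-then-add write in the style of NTM/DNC memories. All names and sizes are illustrative.

import numpy as np

def content_weights(memory, key, beta):
    # Content-based addressing: softmax over cosine similarities between the key and each memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    exp = np.exp(beta * sims)
    return exp / exp.sum()

def read(memory, key, beta=5.0):
    return content_weights(memory, key, beta) @ memory      # weighted average of memory rows

def write(memory, key, erase, add, beta=5.0):
    w = content_weights(memory, key, beta)[:, None]
    return memory * (1 - w * erase) + w * add                # erase-then-add update

memory = np.random.randn(16, 8)                              # 16 slots, each of width 8
memory = write(memory, key=np.ones(8), erase=np.ones(8), add=np.full(8, 0.5))
print(read(memory, key=np.ones(8)))
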
Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. 8/15/2016. “TerpreT: A Probabilistic Programming Language for Program Induction.” arXiv:1608.04428.
We study machine learning formulations of inductive program synthesis; given input-output examples, we try to synthesize source code that maps inputs to corresponding outputs. Our aims are to develop new machine learning approaches based on neural networks and graphical models, and to understand the capabilities of machine learning techniques relative to traditional alternatives, such as those based on constraint solving from the programming languages community.
Our key contribution is the proposal of TerpreT, a domain-specific language for expressing program synthesis problems. TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations). The inference task is to observe a set of input-output examples and infer the underlying program. TerpreT has two main benefits. First, it enables rapid exploration of a range of domains, program representations, and interpreter models. Second, it separates the model specification from the inference algorithm, allowing like-to-like comparisons between different approaches to inference. From a single TerpreT specification we automatically perform inference using four different back-ends. These are based on gradient descent, linear program (LP) relaxations for graphical models, discrete satisfiability solving, and the Sketch program synthesis system.
We illustrate the value of TerpreT by developing several interpreter models and performing an empirical comparison between alternative inference algorithms. Our key empirical finding is that constraint solvers dominate the gradient descent and LP-based formulations. We conclude with suggestions for the machine learning community to make progress on program synthesis.
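The modelling idea is easy to caricature in a few lines: unknowns describe the program, an interpreter maps inputs to outputs given those unknowns, and inference recovers assignments consistent with the examples. The sketch below uses a made-up three-opcode language and exhaustive search in place of TerpreT's gradient, LP, SAT, and Sketch back-ends.

from itertools import product

# Unknowns: a fixed-length sequence of opcodes.  Interpreter: apply them in order.
OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "double": lambda x: 2 * x}
PROGRAM_LENGTH = 2

def interpret(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def infer(examples):
    # "Inference" by brute force: keep every opcode assignment consistent with all examples.
    return [p for p in product(OPS, repeat=PROGRAM_LENGTH)
            if all(interpret(p, x) == y for x, y in examples)]

print(infer([(1, 4), (3, 8)]))   # e.g. [('inc', 'double')]
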
Nadia Polikarpova, Ivan Kuraj, and Armando Solar-Lezama. 2016. “Program Synthesis from Polymorphic Refinement Types.” In PLDI.
We present a method for synthesizing recursive functions that provably satisfy a given specification in the form of a polymorphic refinement type. We observe that such specifications are particularly suitable for program synthesis for two reasons. First, they offer a unique combination of expressive power and decidability, which enables automatic verification—and hence synthesis—of nontrivial programs. Second, a type-based specification for a program can often be effectively decomposed into independent specifications for its components, causing the synthesizer to consider fewer component combinations and leading to a combinatorial reduction in the size of the search space. At the core of our synthesis procedure is a new algorithm for refinement type checking, which supports specification decomposition. We have evaluated our prototype implementation on a large set of synthesis problems and found that it exceeds the state of the art in terms of both scalability and usability. The tool was able to synthesize more complex programs than those reported in prior work (several sorting algorithms and operations on balanced search trees), as well as most of the benchmarks tackled by existing synthesizers, often starting from a more concise and intuitive user input.
Sam Staton, Frank Wood, Hongseok Yang, Chris Heunen, and Ohad Kammar. 2016. “Semantics for probabilistic programming: higher-order functions, continuous distributions, and soft constraints.” Logic in Computer Science (LICS).
We study the semantic foundation of expressive probabilistic programming languages, that support higher-order functions, continuous distributions, and soft constraints (such as Anglican, Church, and Venture). We define a metalanguage (an idealised version of Anglican) for probabilistic computation with the above features, develop both operational and denotational semantics, and prove soundness, adequacy, and termination. This involves measure theory, stochastic labelled transition systems, and functor categories, but admits intuitive computational readings, one of which views sampled random variables as dynamically allocated read-only variables. We apply our semantics to validate nontrivial equations underlying the correctness of certain compiler optimisations and inference algorithms such as sequential Monte Carlo simulation. The language enables defining probability distributions on higher-order functions, and we study their properties.
2015
Peter-Michael Osera and Steve Zdancewic. 11/2015. “Type-and-example-directed program synthesis.” In PLDI.
This paper presents an algorithm for synthesizing recursive functions that process algebraic datatypes. It is founded on proof-theoretic techniques that exploit both type information and input–output examples to prune the search space. The algorithm uses refinement trees, a data structure that succinctly represents constraints on the shape of generated code. We evaluate the algorithm by using a prototype implementation to synthesize more than 40 benchmarks and several non-trivial larger examples. Our results demonstrate that the approach meets or outperforms the state-of-the-art for this domain, in terms of synthesis time or attainable size of the generated programs.
Cindy Mason. 1/2015. “Engineering Kindness: Building a Machine with Compassionate Intelligence.” International Journal of Synthetic Emotions.
The author provides first steps toward building a software agent/robot with compassionate intelligence. She approaches this goal with an example software agent, EM-2. She also gives a generalized software requirements guide for anyone wishing to pursue other means of building compassionate intelligence into an AI system. The purpose of EM-2 is not to build an agent with a state of mind that mimics empathy or consciousness, but rather to create practical applications of AI systems with knowledge and reasoning methods that positively take into account the feelings and state of self and others during decision making, action, or problem solving. To program EM-2 the author re-purposes code and architectural ideas from collaborative multi-agent systems and affective common sense reasoning with new concepts and philosophies from the human arts and sciences relating to compassion. EM-2 has predicates and an agent architecture based on a meta-cognition mental process that was used on India's worst prisoners to cultivate compassion for others, Vipassana or mindfulness. She describes and presents code snippets for common sense based affective inference and the I-TMS, an Irrational Truth Maintenance System, that maintains consistency in agent memory as feelings change over time, and provides a machine theoretic description of the consistency issues of combining affect and logic. The author summarizes the growing body of new biological, cognitive and immune discoveries about compassion and the consequences of these discoveries for programmers working with human-level AI and hybrid human-robot systems.
Stephen Muggleton and Dianhuan Lin. 2015. “Meta-Interpretive Learning of Higher-Order Dyadic Datalog: Predicate invention revisited.” IJCAI.
In recent years Predicate Invention has been underexplored within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular and context-free grammars, by way of abduction with respect to a meta-interpreter. New predicate symbols are introduced as constants representing existentially quantified higher-order variables. In this paper we generalise the approach of Meta-Interpretive Learning (MIL) to that of learning higher-order dyadic datalog programs. We show that with an infinite signature the higher-order dyadic datalog class H²₂ has universal Turing expressivity, though H²₂ is decidable given a finite signature. Additionally we show that Knuth-Bendix ordering of the hypothesis space together with logarithmic clause bounding allows our Dyadic MIL implementation MetagolD to PAC-learn minimal cardinality H²₂ definitions. This result is consistent with our experiments which indicate that MetagolD efficiently learns compact H²₂ definitions involving predicate invention for robotic strategies and higher-order concepts in the NELL language learning domain.
2014
Shai Shalev-Shwartz and Shai Ben-David. 7/2014. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.
Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides a theoretical account of the fundamentals underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics, the book covers a wide array of central topics unaddressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for advanced undergraduates or beginning graduates, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics and engineering.
2013
Jonathan Malmaud, Ryan P. Adams, and Joshua B. Tenenbaum. 2013. “Bootstrap Learning Via Modular Concept Discovery.” In IJCAI.
Suppose a learner is faced with a domain of problems about which it knows nearly nothing. It does not know the distribution of problems, the space of solutions is not smooth, and the reward signal is uninformative, providing perhaps a few bits of information but not enough to steer the learner effectively. How can such a learner ever get off the ground? A common intuition is that if the solutions to these problems share a common structure, and the learner can solve some simple problems by brute force, it should be able to extract useful components from these solutions and, by composing them, explore the solution space more efficiently. Here, we formalize this intuition, where the solution space is that of typed functional programs and the gained information is stored as a stochastic grammar over programs. We propose an iterative procedure for exploring such spaces: in the first step of each iteration, the learner explores a finite subset of the domain, guided by a stochastic grammar; in the second step, the learner compresses the successful solutions from the first step to estimate a new stochastic grammar. We test this procedure on symbolic regression and Boolean circuit learning and show that the learner discovers modular concepts for these domains. Whereas the learner is able to solve almost none of the posed problems in the procedure’s first iteration, it rapidly becomes able to solve a large number by gaining abstract knowledge of the structure of the solution space.
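A bare-bones version of the explore/compress loop, assuming a made-up arithmetic domain: sample candidate programs from a weighted grammar of primitives, keep the ones that hit a target, then refit the grammar's weights from the successes. The paper's actual compression step also invents reusable subprograms, which this sketch omits.

import random
from collections import Counter

PRIMITIVES = {"+1": lambda x: x + 1, "+3": lambda x: x + 3, "*2": lambda x: 2 * x}
TASKS = [7, 8, 14, 16]                        # target values to reach starting from 1

def run(program, x=1):
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def sample_program(weights, max_len=5):
    names, probs = zip(*weights.items())
    return [random.choices(names, probs)[0] for _ in range(random.randint(1, max_len))]

weights = {name: 1.0 for name in PRIMITIVES}  # start from a uniform grammar
for iteration in range(5):
    # Explore: sample programs and keep the ones that solve some task.
    solved = [p for p in (sample_program(weights) for _ in range(2000)) if run(p) in TASKS]
    # "Compress" (simplified): re-estimate primitive weights from the successful programs.
    counts = Counter(op for p in solved for op in p)
    total = sum(counts.values()) or 1
    weights = {name: (counts[name] + 1) / (total + len(PRIMITIVES)) for name in PRIMITIVES}
    print(iteration, len(solved), weights)
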
Andreas Stuhlmüller and Noah D. Goodman. 2013. “Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs.” Journal of Cognitive Systems Research.
A wide range of human reasoning patterns can be explained as conditioning in probabilistic models; however, conditioning has traditionally been viewed as an operation applied to such models, not represented in such models. We describe how probabilistic programs can explicitly represent conditioning as part of a model. This enables us to describe reasoning about others’ reasoning using nested conditioning. Much of human reasoning is about the beliefs, desires, and intentions of other people; we use probabilistic programs to formalize these inferences in a way that captures the flexibility and inherent uncertainty of reasoning about other agents. We express examples from game theory, artificial intelligence, and linguistics as recursive probabilistic programs and illustrate how this representation language makes it easy to explore new directions in each of these fields. We discuss the algorithmic challenges posed by these kinds of models and describe how Dynamic Programming techniques can help address these challenges.
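The recursion is easy to see in a toy coordination example (invented here, in plain Python rather than the Church/WebPPL programs the paper uses): each agent chooses a bar conditioned on the other agent, as they model them, choosing the same one, with the nesting cut off at a fixed depth. Conditioning is implemented by rejection sampling.

import random

def prior():
    # Both agents share a slight preference for the popular bar.
    return "popular" if random.random() < 0.6 else "quiet"

def alice(depth):
    # Alice conditions her choice on (her model of) Bob choosing the same bar.
    while True:
        mine, bobs = prior(), bob(depth)
        if mine == bobs:
            return mine

def bob(depth):
    if depth == 0:
        return prior()
    while True:
        mine, alices = prior(), alice(depth - 1)
        if mine == alices:
            return mine

samples = [alice(depth=2) for _ in range(2000)]
print(samples.count("popular") / len(samples))   # well above 0.6: nesting amplifies the shared preference
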
2011
Judea Pearl. 2011. “The algorithmization of counterfactuals.” Annals of Mathematics and Artificial Intelligence.
Recent advances in causal reasoning have given rise to a computation model that emulates the process by which humans generate, evaluate and distinguish counterfactual sentences. Though compatible with the “possible worlds” account, this model enjoys the advantages of representational economy, algorithmic simplicity and conceptual clarity. Using this model, the paper demonstrates the processing of counterfactual sentences on a classical example due to Ernest Adams. It then gives a panoramic view of several applications where counterfactual reasoning has benefited problem areas in the empirical sciences.
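The computation the paper describes follows the standard three-step recipe on a structural causal model: abduction (infer the exogenous terms from the evidence), action (replace the intervened variable's equation), and prediction (recompute). The two-equation linear model below is invented purely to show the mechanics.

def model(u_x, u_y, do_x=None):
    x = u_x if do_x is None else do_x   # structural equation for X, or an intervention on it
    y = 2 * x + u_y                     # structural equation for Y
    return x, y

# Observed (factual) world: X = 1, Y = 2.5.
observed_x, observed_y = 1.0, 2.5

# 1. Abduction: recover the exogenous terms consistent with the evidence.
u_x = observed_x
u_y = observed_y - 2 * observed_x       # here u_y = 0.5

# 2-3. Action and prediction: "had X been 0", holding the exogenous terms fixed.
_, counterfactual_y = model(u_x, u_y, do_x=0.0)
print(counterfactual_y)                 # 0.5: what Y would have been had X been 0
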
2008
Yarden Katz, Noah D. Goodman, Kristian Kersting, Charles Kemp, and Joshua B. Tenenbaum. 2008. “Modeling Semantic Cognition as Logical Dimensionality Reduction.” CogSci.
Semantic knowledge is often expressed in the form of intuitive theories, which organize, predict and explain our observations of the world. How are these powerful knowledge structures represented and acquired? We present a framework, logical dimensionality reduction, that treats theories as compressive probabilistic models, attempting to express observed data as a sample from the logical consequences of the theory’s underlying laws and a small number of core facts. By performing Bayesian learning and inference on these models we combine important features of more familiar connectionist and symbolic approaches to semantic cognition: an ability to handle graded, uncertain inferences, together with systematicity and compositionality that support appropriate inferences from sparse observations in novel contexts.
2007
Eleni Stroulia and Ashok K. Goel. 5/2007. “Functional Representation and Reasoning for Reflective Systems.” Applied Artificial Intelligence.
Functional models have been extensively investigated in the context of several problem-solving tasks such as device diagnosis and design. In this paper, we view problem solvers themselves as devices, and use structure-behavior-function models to represent how they work. The model representing the functioning of a problem solver explicitly specifies how the knowledge and reasoning of the problem solver result in the achievement of its goals. Then, we employ these models for performance-driven reflective learning. We view performance-driven learning as the task of redesigning the knowledge and reasoning of the problem solver to improve its performance. We use the model of the problem solver to monitor its reasoning, assign blame when it fails, and appropriately redesign its knowledge and reasoning. This paper focuses on the model-based redesign of a path planner's task structure. It illustrates the model-based reflection using examples from an operational system called the Autognostic system.
1994
Marco Ramoni, Alberto Riva, and Vimla L. Patel. 1/2/1994. “Probabilistic Reasoning under Ignorance.” Cognitive Science Society.
The representation of ignorance is a long-standing challenge for researchers in probability and decision theory. During the past decade, Artificial Intelligence researchers have developed a class of reasoning systems, called Truth Maintenance Systems, which are able to reason on the basis of incomplete information. In this paper we will describe a new method for dealing with partially specified probabilistic models, by extending a logic-based truth maintenance method from Boolean truth-values to probability intervals. Then we will illustrate how this method can be used to represent Bayesian Belief Networks --- one of the best-known formalisms to reason under uncertainty --- thus producing a new class of Bayesian Belief Networks, called Ignorant Belief Networks, able to reason on the basis of partially specified prior and conditional probabilities. Finally, we will discuss how this new method relates to some theoretical intuitions and empirical findings in decision theory and cognitive science.
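For flavour (this is not the paper's propagation algorithm), the sketch below works with probability intervals rather than point values, combining them with the standard Fréchet bounds, which make no assumption about how the propositions depend on each other.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

def conj(a, b):
    # Fréchet bounds on P(A and B) when the dependence between A and B is unknown.
    return Interval(max(0.0, a.lo + b.lo - 1.0), min(a.hi, b.hi))

def disj(a, b):
    # Fréchet bounds on P(A or B) under the same ignorance about dependence.
    return Interval(max(a.lo, b.lo), min(1.0, a.hi + b.hi))

# A partially specified model: only interval constraints on P(A) and P(B) are given.
a, b = Interval(0.75, 0.9), Interval(0.5, 0.6)
print(conj(a, b))   # Interval(lo=0.25, hi=0.6)
print(disj(a, b))   # Interval(lo=0.75, hi=1.0)
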
1993
Kenneth D. Forbus and Johan De Kleer. 11/1993. Building Problem Solvers. MIT Press.
Marco Ramoni and Alberto Riva. 1/1/1993. “Belief Maintenance with Probabilistic Logic.” AAAI.
Belief maintenance systems are natural extensions of truth maintenance systems that use probabilities rather than Boolean truth-values. This paper introduces a general method for belief maintenance, based on (the propositional fragment of) probabilistic logic, that extends the Boolean Constraint Propagation method used by the logic-based truth maintenance systems. From the concept of probabilistic entailment, we derive a set of constraints on the (probabilistic) truth-values of propositions and we prove their soundness. These constraints are complete with respect to a well-defined set of clauses, and their partial incompleteness is compensated by a gain in computational efficiency.
