Representation

2020
Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 6/11/2020. “Semantic Robustness of Models of Source Code.” arXiv:2002.03043.
Deep neural networks are vulnerable to adversarial examples: small input perturbations that result in incorrect predictions. We study this problem for models of source code, where we want the network to be robust to source-code modifications that preserve code functionality. (1) We define a powerful adversary that can employ sequences of parametric, semantics-preserving program transformations; (2) we show how to perform adversarial training to learn models robust to such adversaries; (3) we conduct an evaluation on different languages and architectures, demonstrating significant quantitative gains in robustness.
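As an illustrative sketch (not the paper's actual transformation suite), variable renaming is one classic semantics-preserving transformation of the kind such an adversary can apply; a minimal version using Python's `ast` module:

```python
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename one local variable everywhere it appears.

    The transformed program computes the same function as the original,
    so a robust model should give it the same prediction.
    """
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node):
        if node.arg == self.old:
            node.arg = self.new
        return node

source = "def area(w, h):\n    return w * h"
tree = RenameVariable("w", "width").visit(ast.parse(source))
transformed = ast.unparse(tree)  # requires Python 3.9+
```

Composing sequences of such parameterized transformations (renamings, dead-code insertion, statement reordering) yields the adversary's search space.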
2019
Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, and Ankit B. Patel. 1/2019. “Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks.” ICLR.
We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an abstraction obtained by clustering small sets of MDFA states into "superstates". A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure.
2018
Noah D. Goodman and Joshua B. Tenenbaum. 11/13/2018. Probabilistic Models of Cognition. 2nd ed.
This book explores the probabilistic approach to cognitive science, which models learning and reasoning as inference in complex probabilistic models. We examine how a broad range of empirical phenomena, including intuitive physics, concept learning, causal reasoning, social cognition, and language understanding, can be modeled using probabilistic programs (using the WebPPL language).
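The book's models are written in WebPPL; as a rough, language-agnostic sketch of its core idea (learning as inference in a generative model), the following Python example infers a coin's bias by exact enumeration over a small, hypothetical discrete hypothesis grid:

```python
# Posterior over a coin's bias given observed flips, by exact enumeration.
# The five-point hypothesis grid and uniform prior are illustrative choices.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def posterior(flips):
    # Bayes' rule: weight each hypothesis by the likelihood of the data,
    # then normalize so the weights sum to one.
    unnorm = {}
    for h in hypotheses:
        like = 1.0
        for heads in flips:
            like *= h if heads else 1 - h
        unnorm[h] = prior[h] * like
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

post = posterior([True, True, True, False])  # three heads, one tail
```

Probabilistic programming languages such as WebPPL generalize this pattern: arbitrary programs serve as the generative model, and inference algorithms (enumeration, MCMC, variational methods) replace the hand-written loop.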
2017
Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan. 2017. “Modeling Agents with Probabilistic Programs.”

This book describes and implements models of rational agents for (PO)MDPs and Reinforcement Learning. One motivation is to create richer models of human planning, which capture human biases and bounded rationality.

Agents are implemented as differentiable functional programs in a probabilistic programming language based on JavaScript. Agents plan by recursively simulating their future selves or by simulating their opponents in multi-agent games. Our agents and environments run directly in the browser and are easy to modify and extend.

The book assumes basic programming experience but is otherwise self-contained. It includes short introductions to “planning as inference,” MDPs, POMDPs, inverse reinforcement learning, hyperbolic discounting, myopic planning, and multi-agent planning.

2013
Andreas Stuhlmüller and Noah D. Goodman. 2013. “Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs.” Journal of Cognitive Systems Research.
A wide range of human reasoning patterns can be explained as conditioning in probabilistic models; however, conditioning has traditionally been viewed as an operation applied to such models, not represented in such models. We describe how probabilistic programs can explicitly represent conditioning as part of a model. This enables us to describe reasoning about others’ reasoning using nested conditioning. Much of human reasoning is about the beliefs, desires, and intentions of other people; we use probabilistic programs to formalize these inferences in a way that captures the flexibility and inherent uncertainty of reasoning about other agents. We express examples from game theory, artificial intelligence, and linguistics as recursive probabilistic programs and illustrate how this representation language makes it easy to explore new directions in each of these fields. We discuss the algorithmic challenges posed by these kinds of models and describe how Dynamic Programming techniques can help address these challenges.
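As an illustrative sketch of nested conditioning (not code from the paper), the following Python program implements an observer who infers an agent's latent preference by simulating, inside its own inference loop, the agent's noisy choice process; the 0.9/0.1 choice-noise parameter and the uniform prior are assumed for the example:

```python
import random

def agent_chooses(prefers_a):
    # The agent's decision process: it picks its preferred option
    # with probability 0.9 (an assumed noise level).
    return "A" if random.random() < (0.9 if prefers_a else 0.1) else "B"

def observer(observed_choice, samples=10_000):
    # Nested conditioning via rejection sampling: sample the agent's
    # latent preference from the prior, simulate the agent's choice,
    # and keep only samples consistent with what was observed.
    kept = []
    for _ in range(samples):
        prefers_a = random.random() < 0.5   # uniform prior over preference
        if agent_chooses(prefers_a) == observed_choice:
            kept.append(prefers_a)
    return sum(kept) / len(kept)            # posterior P(prefers A | choice)

random.seed(0)
posterior = observer("A")  # close to 0.9 by Bayes' rule
```

Deeper nesting (the agent itself reasoning about the observer, and so on) follows the same pattern, which is what makes recursive theory-of-mind models easy to express as probabilistic programs.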
2008
Yarden Katz, Noah D. Goodman, Kristian Kersting, Charles Kemp, and Joshua B. Tenenbaum. 2008. “Modeling Semantic Cognition as Logical Dimensionality Reduction.” CogSci.
Semantic knowledge is often expressed in the form of intuitive theories, which organize, predict and explain our observations of the world. How are these powerful knowledge structures represented and acquired? We present a framework, logical dimensionality reduction, that treats theories as compressive probabilistic models, attempting to express observed data as a sample from the logical consequences of the theory’s underlying laws and a small number of core facts. By performing Bayesian learning and inference on these models we combine important features of more familiar connectionist and symbolic approaches to semantic cognition: an ability to handle graded, uncertain inferences, together with systematicity and compositionality that support appropriate inferences from sparse observations in novel contexts.
2007
Eleni Stroulia and Ashok K. Goel. 5/2007. “Functional Representation and Reasoning for Reflective Systems.” Applied Artificial Intelligence.
Functional models have been extensively investigated in the context of several problem-solving tasks such as device diagnosis and design. In this paper, we view problem solvers themselves as devices, and use structure-behavior-function models to represent how they work. The model representing the functioning of a problem solver explicitly specifies how the knowledge and reasoning of the problem solver result in the achievement of its goals. Then, we employ these models for performance-driven reflective learning. We view performance-driven learning as the task of redesigning the knowledge and reasoning of the problem solver to improve its performance. We use the model of the problem solver to monitor its reasoning, assign blame when it fails, and appropriately redesign its knowledge and reasoning. This paper focuses on the model-based redesign of a path planner's task structure. It illustrates the model-based reflection using examples from an operational system called the Autognostic system.
1966
John von Neumann. 1/1/1966. Theory of Self-Reproducing Automata. University of Illinois Press.