NESCAI07: The Second North East Student Colloquium on Artificial Intelligence

13-15 April 2007, Ithaca, NY

Tutorials

We will be offering three tutorials.

Partially Observable Markov Decision Processes: An Introduction
Craig Boutilier (University of Toronto)

Upson B17

Partially observable Markov decision processes, or POMDPs, are a very general model for sequential planning and decision making under uncertainty. They allow one to model uncertainty in the effects of actions, restricted ability to sense the state of the world, and very general reward (or objective) functions. In this tutorial, I describe the basic ingredients of (discrete) POMDPs, the conversion of POMDPs to fully observable "belief state" MDPs, and several standard solution techniques developed largely in the operations research community (as well as variants developed within the AI community). I also briefly survey the application of AI representation and approximation techniques to the solution of large-scale POMDPs.
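The belief-state view mentioned above has a compact form: after taking an action and receiving an observation, the belief over states is updated by Bayes' rule using the transition and observation models. The following is a minimal Python sketch of that update, offered only as an illustration; the dictionaries T and O and the list of states are hypothetical inputs, not material from the tutorial.

```python
# Minimal sketch of the POMDP belief-state update (Bayes filter).
# T[s][a][s2] : P(s2 | s, a)  -- transition model (assumed given)
# O[s2][a][o] : P(o | s2, a)  -- observation model (assumed given)
# belief[s]   : current probability of being in state s

def belief_update(belief, action, obs, states, T, O):
    """Return the updated belief after taking `action` and observing `obs`."""
    new_belief = {}
    for s2 in states:
        # P(o | s2, a) * sum_s P(s2 | s, a) * b(s)
        new_belief[s2] = O[s2][action][obs] * sum(
            T[s][action][s2] * belief[s] for s in states
        )
    norm = sum(new_belief.values())
    if norm == 0:
        raise ValueError("Observation has zero probability under this belief.")
    return {s: p / norm for s, p in new_belief.items()}
```

Planning can then be viewed as solving a fully observable MDP whose states are these belief vectors, which is the "belief state MDP" conversion described in the abstract.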

Craig Boutilier is the chair of the Department of Computer Science at the University of Toronto and a member of its Artificial Intelligence Group. His current research efforts focus on various aspects of decision making under uncertainty: decision-theoretic planning and Markov decision processes; game theory and multiagent decision processes; economic models; reinforcement learning; probabilistic inference; and preference elicitation. In a past research life, he spent quite a bit of time working in the areas of knowledge representation, belief revision, default reasoning, and philosophical logic.

Mining, modeling and learning with graphs
Jure Leskovec (Carnegie Mellon University)

Upson B17

How can we mine the Internet graph over time? How do we model social and biological networks to help us understand or predict the behavior of these systems? What properties do they have? How do we find communities in networks? How is machine learning with graphs done? What kinds of dynamical processes are taking place on these networks?

In this talk I will give an overview of the field that brings together physicists, sociologists, and computer scientists. I will review the state of the art in four related areas: (a) graph mining, (b) graph theory, (c) machine learning with graphs, and (d) social networks. I will present theoretical results and algorithms as well as case studies on several real applications. The emphasis is on the intuition behind each method and on guidelines for how to use it.
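To make a couple of the questions above concrete (degree distributions and community detection), here is a small Python sketch using the networkx library; the library and the toy dataset are my own choices for illustration, not part of the tutorial materials.

```python
# Hedged illustration of basic graph mining with networkx (assumed installed).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy social network: Zachary's karate club (34 nodes, 78 edges).
G = nx.karate_club_graph()

# Degree distribution: how many nodes have each degree?
degree_counts = {}
for _, d in G.degree():
    degree_counts[d] = degree_counts.get(d, 0) + 1
print(sorted(degree_counts.items()))

# Community detection via greedy modularity maximization.
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```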

Jure Leskovec is a PhD candidate in machine learning at Carnegie Mellon University. He received the ACM KDD 2005 best research paper award, won the ACM KDD Cup competition in 2003, holds two patents, and is a Microsoft Research Graduate Fellow. His research interests include machine learning and data mining on large real-world graphs and networks.

Slides from the tutorial

Sparsity recovery and structure learning
Pradeep Ravikumar (Carnegie Mellon University)

Phillips 101

The sparsity pattern of a vector, the number and location of its non-zeros, is of importance in settings as varied as structure learning in graphical models, subset selection in regression, signal denoising, and compressed sensing. In particular, a key task in these settings is to estimate the sparsity pattern of an unknown "parameter" vector from a set of n noisy observations.

The task can be understood simply as identifying signals from noise. Suppose a set of "signals" (edges in graphical models, covariates in regression) enters into linear combinations, and that we observe noisy functions of these linear combinations. The task is to identify the set of relevant signals from n such noisy observations. In graphical models, for instance, the structure-learning task is to identify the set of edges from the noisy samples.
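For concreteness, the regression instance of this setup can be written in the standard form below; the notation is mine, chosen for illustration, rather than the tutorial's.

```latex
% Sparse linear observation model: y collects the n noisy observations,
% the columns of X are the "signals" (covariates), beta* is the unknown
% sparse parameter vector, and w is noise.
\[
  y = X\beta^{*} + w, \qquad
  y \in \mathbb{R}^{n}, \; X \in \mathbb{R}^{n \times p}, \; w \in \mathbb{R}^{n},
\]
\[
  S(\beta^{*}) = \{\, i \in \{1, \dots, p\} : \beta^{*}_{i} \neq 0 \,\}
  \quad \text{(the sparsity pattern to be recovered from } (y, X)\text{)}.
\]
```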

In general, this is an intractable problem, but there has been a flurry of recent work on using convex relaxations and L1 penalties to recover the ideal sparse signal. In particular, fairly general conditions have been proposed under which computationally tractable methods can recover the true sparsity pattern of the signal.
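As a small illustration of the L1-penalty approach, the following Python sketch uses scikit-learn's Lasso, one standard tractable method chosen here for convenience, to estimate the support of a sparse vector from noisy linear observations; the dimensions, noise level, and regularization parameter are illustrative assumptions, not values from the tutorial.

```python
# Sketch: recovering a sparsity pattern with an L1-penalized estimator (Lasso).
# Assumes numpy and scikit-learn are available; all parameters are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 100, 200, 5          # observations, dimension, number of non-zeros

beta_true = np.zeros(p)
support_true = rng.choice(p, size=k, replace=False)
beta_true[support_true] = rng.normal(0, 5, size=k)

X = rng.normal(size=(n, p))
y = X @ beta_true + 0.1 * rng.normal(size=n)   # noisy linear observations

# L1-penalized least squares; alpha controls the sparsity of the estimate.
model = Lasso(alpha=0.1).fit(X, y)
support_est = np.flatnonzero(np.abs(model.coef_) > 1e-3)

print("true support:     ", sorted(support_true.tolist()))
print("estimated support:", sorted(support_est.tolist()))
```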

The tutorial will cover the basic problem abstraction, applications in various settings, and some general conditions under which the tractable methods "work." It will focus in particular on the application setting of structure learning in graphical models.

Pradeep Ravikumar is a PhD candidate at Carnegie Mellon University, advised by John Lafferty. He received his Bachelor of Technology in Computer Science and Engineering from the Indian Institute of Technology, Bombay. His research interests include machine learning, statistical learning theory, optimization, and information integration.