Seminar Abstracts (Fall 2012)

09/13/2012

Gilbert Strang (MIT)
Exponentials of Difference Matrices are Toeplitz plus Hankel

Abstract: The second difference matrix K is tridiagonal and Toeplitz,
with entries 1, -2, 1 down the diagonals.  We can approximate the heat equation u_t = u_xx
on an interval by u_t = Ku / h^2.  The solution involves E = exp(K),
and we take a closer look at this matrix (and related matrices).

The eigenvectors of K are sine vectors.  So E_ij becomes a combination
of sines times sines, weighted by eigenvalues of E.  Rewriting in terms of
sin((i-j)kh) and sin((i+j)kh), this matrix E is T + H (Toeplitz plus Hankel).
The Toeplitz part reflects shift-invariance: shift the initial function and the
solution shifts.  The Hankel part shifts the other way!  We hope to explain this.

A second step is to find a close approximation to these sums -- by integrals.
With smooth periodic functions the accuracy is superexponential.  The integrals
produce Bessel functions and we can also study the exact solution -- to the
heat equation and the wave equation. There are applications to graphs and networks. 


This is joint work with Shev Macnamara (MIT).


09/20/2012
 

Cristian Lalescu (Johns Hopkins University)
Synchronization of Chaos and the Smallest Turbulent Eddy

Abstract: Although chaotic dynamics are famously unpredictable, their evolution can --- somewhat surprisingly --- be recovered in full detail from only partial observations. This is possible due to the phenomenon of Chaos Synchronization (CS), a process wherein two or more chaotic systems adjust a given property of their motion to a common behavior through a coupling or a forcing. Turbulence is a specific type of chaotic motion of a fluid, and it is not yet fully understood; a defining characteristic is that it involves fluctuations on a wide range of scales. One of the outstanding problems in the field is determining the length scale of the smallest turbulent eddies. The Kolmogorov 1941 theory makes a specific prediction, but accumulated experimental, numerical and theoretical evidence suggests that fluid turbulence develops scales considerably smaller than the Kolmogorov length.  We numerically demonstrate CS in simulations of fluid turbulence, and we discuss how this phenomenon can be exploited in a space-time database of hydrodynamic turbulence to study the problem of the smallest turbulent eddy. This has some relevance to the Clay Millennium Prize Problem about the regularity of solutions of the incompressible Navier-Stokes equations.
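
As a minimal illustration of chaos synchronization (a textbook Pecora-Carroll example with two Lorenz systems, not the turbulence simulations discussed in the talk), a response system driven only by the x coordinate of a chaotic "drive" system recovers the drive's remaining state variables:

    import numpy as np

    # Pecora-Carroll synchronization: the response system receives only x(t)
    # from the drive, yet its (y, z) variables converge to the drive's (y, z).
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    dt, steps = 1e-3, 40000

    x, y, z = 1.0, 1.0, 1.0          # drive state
    yr, zr = -5.0, 30.0              # response state, deliberately far away

    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        dyr = x * (rho - zr) - yr    # response driven by the drive's x
        dzr = x * yr - beta * zr
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        yr, zr = yr + dt * dyr, zr + dt * dzr

    print(abs(y - yr), abs(z - zr))  # both errors decay to ~0: synchronization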


09/27/2012
 

Josh Lospinoso (RedOwl Analytics)
Statistical Modeling of Social Network Dynamics in Relational Event Histories with Multiple Recipients

Abstract: A stochastic model is proposed for inference about the evolving social network dynamics embedded in relational event histories with multiple recipients. The model builds on the seminal work of Butts (2008) and Brandes, Lerner, and Snijders (2009) by generalizing their models to the case of multiple recipients. Relational events are assumed to arise from a stochastic process in which the sender of each event makes a series of sequential decisions. An initial recipient is chosen, according to some preference, from a valid set of candidates. The sender may then decide to add another recipient, again according to some preference. Recipients are added until the sender is satiated. The ordering of the recipient choices is unobserved. Inference in this latent variable problem is treated with a Markov chain Monte Carlo maximum likelihood estimation approach based on expectation maximization. Models are fit to two exemplar datasets: the Enron email corpus and the Eisenhower Leadership Development Program dataset collected by Dr. Kate Coronges at the US Military Academy at West Point.
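
A toy sketch of the generative process just described (the preference scores and stopping rule below are hypothetical placeholders, purely illustrative; the talk's model parameterizes these choices via statistics of the event history):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_recipients(sender, candidates, pref, stop_prob=0.5):
        """Sequentially choose recipients for one relational event.

        pref[s, r] is a (hypothetical) preference score of sender s for
        recipient r; the chosen set is returned unordered, mirroring the
        fact that the order of the recipient choices is unobserved.
        """
        remaining = [c for c in candidates if c != sender]
        chosen = []
        while remaining:
            scores = np.array([pref[sender, r] for r in remaining])
            probs = np.exp(scores) / np.exp(scores).sum()   # multinomial logit choice
            pick = rng.choice(len(remaining), p=probs)
            chosen.append(remaining.pop(pick))
            if rng.random() < stop_prob:                    # sender is satiated
                break
        return set(chosen)

    pref = rng.normal(size=(5, 5))                          # 5 actors, toy scores
    print(sample_recipients(0, range(5), pref))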

C. T. Butts. A relational event framework for social action. Sociological Methodology,
38(1):155–200, 2008.

U. Brandes, J. Lerner, and T. A. B. Snijders. Networks evolving step by step: Statistical analysis of dyadic event data. In ASONAM '09, pages 200–205, 2009.


10/04/2012
 

Wotao Yin (Rice University)
The Alternating Direction Method of Multipliers

Abstract: Through examples, we argue that the alternating direction
method of multipliers (ADMM) is suitable for problems arising in image
processing, conic programming, machine learning, compressive sensing,
as well as distributed data processing, and the method is able to
solve very large scale problem instances. The development of this
method dates back to the 1950s and has close relationships with the
Douglas-Rachford splitting, the augmented Lagrangian method, Bregman
iterative algorithms, proximal-point algorithms, etc. This "old"
method has recently become popular among researchers in image/signal
processing, machine learning, and distributed/decentralized
computation. After a brief overview, we explain its convergence
behavior and then demonstrate how to solve very large scale conic
programming problems and machine learning problems such as LASSO by
ADMM.
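
A minimal ADMM sketch for the LASSO (a standard textbook instance of the method, included only for illustration; the problem data and parameter choices here are arbitrary):

    import numpy as np

    def lasso_admm(A, b, lam, rho=1.0, iters=200):
        """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM.

        Splitting: x carries the quadratic term, z carries the l1 term,
        with the constraint x = z enforced via the scaled dual variable u.
        """
        n = A.shape[1]
        x = z = u = np.zeros(n)
        Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor reused by every x-update
        Atb = A.T @ b
        for _ in range(iters):
            x = Q @ (Atb + rho * (z - u))                                    # quadratic step
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
            u = u + x - z                                                    # dual update
        return z

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 100))
    x_true = np.zeros(100); x_true[:5] = 3.0
    b = A @ x_true + 0.1 * rng.normal(size=50)
    print(np.nonzero(lasso_admm(A, b, lam=1.0))[0][:10])   # mostly the first 5 indices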


10/11/2012
 

Yipeng Yang (University of Missouri-Columbia)
Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form

Abstract: The traditional difficulty in stochastic singular control is to characterize the existence and regularity of the value function as the solution to the associated Hamilton-Jacobi-Bellman (HJB) equation. In this work, a multi-dimensional singular control problem is considered. We obtain the optimal value function and the optimal control policy of this problem via a Dynkin game (a zero-sum game), whose solution is given by the saddle point of the cost function. The existence and uniqueness of the solution to this Dynkin game are proved through an associated variational inequality problem involving a Dirichlet form, and the non-existence of an exceptional set in higher-dimensional space is proved using the absolute continuity of the probability transition function of the underlying process. We also show that the integrated form of the value function of the Dynkin game gives the value function of the multi-dimensional singular control problem. As a result, the properties of the value function of this Dynkin game imply the smoothness of the value function of the singular control problem. In this way, we are able to show the existence of a classical solution to the associated HJB equation, which traditionally could only be solved in the sense of viscosity solutions.
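
For orientation, here is a schematic one-dimensional version of the classical link between singular control and Dynkin games (a heuristic sketch in the spirit of Karatzas and Shreve, not the multi-dimensional Dirichlet-form argument of the talk). For the bounded-variation control problem

    V(x) = \inf_{\xi^+,\xi^-} \mathbb{E}\Big[\int_0^\infty e^{-\alpha t} h(X_t)\,dt
           + \int_0^\infty e^{-\alpha t}\big(\gamma^+\,d\xi^+_t + \gamma^-\,d\xi^-_t\big)\Big],
    \qquad dX_t = dW_t + d\xi^+_t - d\xi^-_t,

the HJB equation takes the variational-inequality form

    \max\Big\{\alpha V - \tfrac12 V'' - h,\; -V' - \gamma^+,\; V' - \gamma^-\Big\} = 0,

and, formally, W := V' solves the two-obstacle problem

    -\gamma^+ \le W \le \gamma^-, \qquad \alpha W - \tfrac12 W'' - h' = 0 \ \text{on}\ \{-\gamma^+ < W < \gamma^-\},

which is the variational inequality of a Dynkin game; integrating its value function recovers the value of the control problem, as in the abstract above.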


10/18/2012
 

Carey Priebe (Johns Hopkins University)
Statistical Inference on Errorfully Observed Graphs

(Abstract available only as an image.)


10/25/2012
 

Alan Izenman (Temple University)
Spatial Biclustering and Nonlinear Modeling of a Complex Data Set

Abstract: Using a novel database, ProDES, developed by the Crime and Justice Research Center at Temple University, this article investigates the relationship between spatial characteristics and juvenile delinquency and recidivism --- the proportion of delinquents who commit crimes following completion of a court-ordered program --- in Philadelphia, Pennsylvania. ProDES was originally a case-based sample, where the cases had been adjudicated in family court between 1994 and 2004.  For our analysis, we focused attention on 6,768 juvenile males from the data set.  To address the difficult issue of nonstationarity in the data, we apply the plaid biclustering algorithm, in which a sequence of subsets ("layers") of both juveniles and variables is extracted from the data one layer at a time, with the layers allowed to overlap with each other.  This type of "biclustering" is a new way of studying juvenile offense data.  We show that the juveniles within each layer can be viewed as spatially clustered.  Statistical relationships of the variables and juveniles within each layer are then studied using neural network models.  Results show substantial improvements in predicting juvenile recidivism using the methods of this paper.
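
A minimal sketch of how a single layer can be extracted (a stripped-down, constant-effect version of plaid-style alternating updates on synthetic data, purely illustrative; the analysis in the talk uses the full plaid model with row and column effects and multiple overlapping layers):

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic data: background noise plus one elevated bicluster (rows 0-19, cols 0-9).
    Z = rng.normal(size=(100, 40))
    Z[:20, :10] += 3.0

    rows = rng.random(100) < 0.5          # random initial memberships
    cols = rng.random(40) < 0.5

    for _ in range(20):
        mu = Z[np.ix_(rows, cols)].mean()              # layer effect, a constant here
        # Re-admit a row/column iff including it lowers the layer's squared error
        # sum_{ij} (Z_ij - mu)^2; for mu > 0 this reduces to a mean threshold.
        rows = Z[:, cols].mean(axis=1) > mu / 2.0
        cols = Z[rows, :].mean(axis=0) > mu / 2.0

    print(np.where(rows)[0], np.where(cols)[0])        # ~ rows 0-19, cols 0-9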


11/01/2012
 

Sauleh Siddiqui (Johns Hopkins University)
Decomposition Methods for Two-Level Optimization Problems with Applications to Robust Engineering Design and Natural Gas Markets

Abstract: This presentation will provide efficient techniques to solve two-level optimization problems. The first problem, robust optimization, directly applies to engineering design by modeling worst-case uncertainty. Traditionally, robust optimization problems have been solved using an inner-outer structure, which can be computationally expensive. This presentation explains a method to decompose and solve this two-level structure using a modified Benders decomposition. This gradient-based technique is applicable to robust optimization problems with quasiconvex constraints and provides approximate solutions to problems with nonlinear constraints.
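
To fix ideas, here is a minimal cutting-plane sketch for a tiny robust linear program (plain scenario generation rather than the modified Benders decomposition developed in the talk; the problem data are invented for illustration):

    import numpy as np
    from scipy.optimize import linprog

    # Robust LP:  min x1 + x2  s.t.  u.x >= 1  for all u in the box [0.8, 1.2]^2,  x >= 0.
    c = np.array([1.0, 1.0])
    lo, hi = 0.8, 1.2
    scenarios = [np.array([1.0, 1.0])]          # start with the nominal scenario

    for _ in range(20):
        # Outer (master) problem: enforce the constraint only at known scenarios.
        A_ub = [-u for u in scenarios]          # u.x >= 1  ->  -u.x <= -1
        b_ub = [-1.0] * len(scenarios)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        x = res.x
        # Inner (adversarial) problem: the worst u minimizes u.x over the box;
        # since x >= 0, it is attained at the lower corner, coordinate by coordinate.
        u_worst = np.where(x >= 0, lo, hi)
        if u_worst @ x >= 1.0 - 1e-9:
            break                               # no violated scenario: robustly feasible
        scenarios.append(u_worst)

    print(x, c @ x)                             # worst-case feasible, total cost ~1.25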

The second type of two-level problem considered is a mathematical program with equilibrium constraints. Its two-level structure is simplified using Schur’s decomposition and reformulation schemes for
absolute value functions. The resulting formulations are applicable to game theory problems in operations research and modeling large-scale systems.

The techniques for both problems simplify the two-level structure into a single level, which yields both numerical and application insights. The computational effort for solving these problems is greatly reduced using the presented techniques. Diverse applications to engineering design and operations research demonstrate the relevance of the novel methods presented.


11/08/2012
 

Jinho Baik (University of Michigan)
Sample covariance matrix and tandem queues

Abstract: The sample covariance matrix is an estimator, computed from samples, of the true covariance matrix. We consider high-dimensional data with a large number of samples, under the assumption that the true covariance matrix is the identity matrix plus a finite-rank matrix. Such models are often called spiked models. We discuss the dependence of the sample eigenvalues on the spiked true eigenvalues. There is a fundamental limit on the possibility of detecting the spikes by looking at the sample eigenvalues. We also discuss a similar question for the exit times of customers from a tandem of queues. A connection between these two problems will be discussed. If time permits, I will also discuss connections to directed last passage percolation and to random growth models.
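
A small simulation of the phase transition behind this fundamental limit (using the standard BBP-type prediction for Gaussian data with aspect ratio gamma = p/n; the precise setting of the talk may differ):

    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 4000, 1000
    gamma = p / n

    def top_sample_eig(spike):
        """Largest eigenvalue of the sample covariance in a rank-one spiked model."""
        cov_sqrt = np.ones(p); cov_sqrt[0] = np.sqrt(spike)   # Sigma = I + (spike-1) e1 e1^T
        X = rng.normal(size=(n, p)) * cov_sqrt
        S = X.T @ X / n
        return np.linalg.eigvalsh(S)[-1]

    for spike in (1.2, 1.5, 4.0):
        if spike > 1 + np.sqrt(gamma):
            pred = spike * (1 + gamma / (spike - 1))          # spike is detectable
        else:
            pred = (1 + np.sqrt(gamma)) ** 2                  # buried at the bulk edge
        print(spike, top_sample_eig(spike), pred)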


11/15/2012
 

Karl Rohe (University of Wisconsin)
Preconditioning for consistency in sparse inference

Abstract: Preconditioning is a technique from numerical linear algebra that can accelerate algorithms to solve systems of equations.  This talk will discuss how preconditioning can also improve the statistical estimation performance in sparse high dimensional regression (aka compressed sensing).  

Specifically, the talk will demonstrate how preconditioning can circumvent three stringent assumptions for various types of consistency in sparse linear regression.  Given X \in R^{n x p} and Y \in R^n that satisfy the standard linear regression equation Y = X beta + epsilon, this paper demonstrates that even if the design matrix X does not satisfy the irrepresentable condition, the restricted eigenvalue condition, or the restricted isometry property, the design matrix FX often does, where F \in R^{n x n} is a specific preconditioning matrix that will be defined in the talk.  Computing the Lasso on (FX, FY) instead of on (X, Y) therefore requires much less stringent assumptions on X.  Crucially, left multiplying the regression equation by F does not change beta, the vector of unknown coefficients.

Our preconditioner F ensures that the singular values of the design matrix are either zero or one.  When n>p, the columns of FX are orthogonal and the preconditioner always circumvents the stringent assumptions.  When p > n, F projects the design matrix onto the Stiefel manifold; the rows of FX are orthogonal.  The Stiefel manifold is a bounded set and we show that most matrices in this set satisfy the stringent assumptions.  Simulation results are particularly promising. 
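
A minimal sketch of this kind of preconditioning (assuming, for illustration, the natural choice F = U S^{-1} U^T from the thin SVD X = U S V^T, so that FX = U V^T has singular values zero or one; the specific F of the talk is defined there):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(4)
    n, p = 50, 200
    X = rng.normal(size=(n, p)) @ np.diag(np.linspace(0.1, 3.0, p))   # ill-conditioned design
    beta = np.zeros(p); beta[:3] = 5.0
    Y = X @ beta + 0.1 * rng.normal(size=n)

    # Precondition: with X = U S V^T (thin SVD), take F = U S^{-1} U^T, so FX = U V^T.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    F = U @ np.diag(1.0 / S) @ U.T
    FX, FY = F @ X, F @ Y                     # here p > n, so the rows of FX are orthonormal

    fit = Lasso(alpha=0.01, max_iter=50000).fit(FX, FY)
    print(np.nonzero(fit.coef_)[0][:10])      # support estimate after preconditioning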

This is joint work with Jinzhu Jia at Peking University.


11/29/2012
 

Kasso Okoudjou (University of Maryland)
Scalable Frames

Abstract: Frames provide a mathematical framework for stably representing signals as linear combinations of basic building blocks that constitute an overcomplete collection. Finite frames are frames for finite-dimensional spaces, and are especially suited for many applications in signal processing. The inherent redundancy of frames can be exploited to build compression and transmission algorithms that are resilient not only to loss of information but also to noise. For instance, tight frames constitute a particular class of frames that play important roles in many applications.

After giving an overview of finite frame theory, I will consider the question of modifying a general frame to generate a tight frame by rescaling its frame vectors. A frame that positively answers this question will be called scalable. I will give various characterizations of the set of scalable frames, and present some topological descriptions of this set. (This talk is based on joint work with G. Kutyniok, F. Philipp and E. Tuley.)
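
A minimal numerical sketch of the scalability question (checking, via nonnegative least squares, whether weights w_i = c_i^2 >= 0 exist with sum_i w_i f_i f_i^T = I; the example frame is the Mercedes-Benz frame in R^2, which is already tight and hence scalable -- the definitions in the talk may additionally distinguish strictly positive weights):

    import numpy as np
    from scipy.optimize import nnls

    # Frame vectors as columns: the Mercedes-Benz frame in R^2.
    angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
    Phi = np.vstack([np.cos(angles), np.sin(angles)])          # shape (2, 3)

    d, m = Phi.shape
    # Scalability check: find w >= 0 with sum_i w_i f_i f_i^T = I.  Vectorize each
    # rank-one matrix f_i f_i^T and solve the resulting linear system by NNLS.
    A = np.column_stack([np.outer(Phi[:, i], Phi[:, i]).ravel() for i in range(m)])
    b = np.eye(d).ravel()
    w, residual = nnls(A, b)

    print(w, residual)     # weights ~ 2/3 each, residual ~ 0: the frame is scalable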

 


12/06/2012
 

Richard Smith (University of North Carolina/SAMSI)

Richard L. Smith is Mark L. Reed III Distinguished Professor of Statistics and Professor of Biostatistics in the University of North Carolina, Chapel Hill. He is also Director of the Statistical and Applied Mathematical Sciences Institute, a Mathematical Sciences Institute supported by the National Science Foundation. He obtained his PhD from Cornell University and previously held academic positions at Imperial College (London), the University of Surrey (Guildford, England) and Cambridge University. His main research interest is environmental statistics and associated areas of methodological research such as spatial statistics, time series analysis and extreme value theory. He is particularly interested in statistical aspects of climate change research, and in air pollution including its health effects. He is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics, an Elected Member of the International Statistical Institute, and has won the Guy Medal in Silver of the Royal Statistical Society, and the Distinguished Achievement Medal of the Section on Statistics and the Environment, American Statistical Association. In 2004 he was the J. Stuart Hunter Lecturer of The International Environmetrics Society (TIES). He is also a Chartered Statistician of the Royal Statistical Society.

THE WIERMAN LECTURE - Hodson 210
Attribution of Extreme Climatic Events

Abstract: Superstorm Sandy is merely the most recent high-impact weather event to raise concerns about extreme weather events becoming more frequent or more severe. Previous examples include the western European heatwave of 2003, the Russian heatwave and the Pakistan floods of 2010, and the Texas heatwave of 2011. However, it remains an open question to what extent such events may be "attributed" to human influences such as increasing greenhouse gases. One way to answer this question is to run climate models under two scenarios: one includes all the anthropogenic forcing factors (in particular, greenhouse gases), while the other includes only the natural forcings (e.g., solar fluctuations), or no forcings at all (control runs). Based on the climate model runs, probabilities of the extreme event of interest may be computed under both scenarios, followed by the risk ratio or the "fraction of attributable risk", which has become popular in the climatology community as a measure of the human influence on extreme events. This talk will discuss statistical approaches to these quantities, including the use of extreme value theory as a method of quantifying the risk of extreme events, and Bayesian hierarchical models for combining the results of different climate models. This is joint work with Xuan Li (UNC) and Michael Wehner (Lawrence Berkeley Lab).
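
For reference, the two attribution quantities mentioned above are usually defined as follows (standard definitions, with p_1 the probability of the event under anthropogenic-plus-natural forcings and p_0 its probability under natural forcings only):

    \mathrm{RR} = \frac{p_1}{p_0}, \qquad \mathrm{FAR} = 1 - \frac{p_0}{p_1} = 1 - \frac{1}{\mathrm{RR}},

where the tail probabilities p_0 and p_1 are typically estimated by fitting an extreme value distribution (e.g. generalized extreme value or generalized Pareto) to the relevant climate-model output.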