**Seminar Abstracts (Spring 2013)**

**Zhenyu Cui** (University of Waterloo)

**Debiased Monte Carlo method and nearly unbiased simulation from characteristic function**

**Abstract: **In this talk a fairly general method to debias a Monte Carlo estimator is proposed. The method is based on a novel randomization and leads to unbiased estimators when applied to asymptotically unbiased estimators. Applications include the unbiased simulation of prices of European options under stochastic volatility models and of arithmetic Asian options, which are challenging problems in mathematical finance. Another application is a new approach to performing nearly unbiased simulation via inversion of the characteristic function. The pricing of the continuously monitored Parisian option under the Black-Scholes framework is considered.
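The randomization idea behind such debiasing can be illustrated with a small sketch (my own illustration, not code from the talk): given biased approximations a(n) with a(n) → θ, draw a random truncation level N and reweight the telescoping increments by the survival probabilities P(N ≥ n); the resulting estimator has expectation exactly θ whenever the increments decay fast enough for the relevant sums to converge. The function names and the toy target are illustrative.

```python
import math, random

def debiased_estimate(a, alpha=1.5, rng=random):
    """One draw of a randomized telescoping estimator: with
    P(N >= n) = n^(-alpha), Z = sum_{n<=N} (a(n) - a(n-1)) / P(N >= n)
    satisfies E[Z] = lim_n a(n), under summability conditions."""
    u = 1.0 - rng.random()                    # u in (0, 1]
    N = int(u ** (-1.0 / alpha))              # sampled so P(N >= n) = n^-alpha
    z, prev = 0.0, 0.0                        # a(0) taken to be 0
    for n in range(1, N + 1):
        cur = a(n)
        z += (cur - prev) * n ** alpha        # divide by P(N >= n) = n^-alpha
        prev = cur
    return z

# toy target: a(n) = (1 + 1/n)^n is biased for every finite n; its limit is e
rng = random.Random(0)
est = sum(debiased_estimate(lambda n: (1 + 1 / n) ** n, rng=rng)
          for _ in range(20000)) / 20000
```

In an actual pricing application, a(n) would itself be a random n-step approximation (e.g., a discretized stochastic volatility path) simulated with common random numbers across levels; the deterministic toy sequence just makes the debiasing of the limit visible.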

**Agostino Capponi** (Purdue University)

**Counterparty Risk Pricing for Large Derivatives Portfolios: a Law of Large Numbers Approximation**

**Abstract:**
Counterparty credit risk has been identified as a key driver of financial distress during the credit crisis. The modeling apparatus used in the industry did not properly account for correlation among defaults, volatility of credit spreads, and typically assumed that only one counterparty in the trade was subject to default risk. Such limitations were responsible for roughly two-thirds of the losses experienced by financial institutions during the crisis.

In this talk we introduce a general framework for pricing counterparty risk of over-the-counter derivatives portfolios exchanged between two defaultable counterparties. This advances current literature which typically assumes that one of the trading counterparties is default-free. Moreover, it is consistent with credit events during the crisis indicating that it is no longer realistic to take the credit quality of any financial institution for granted. Our valuation formula is model independent and expresses the price of counterparty risk as the difference of two options written on the residual value of the portfolio at the time when either counterparty defaults.

We specialize our formula to price counterparty risk of large portfolios of credit derivatives under a doubly stochastic default intensity framework allowing for simultaneous defaults. Our model is designed to capture clustering of defaults, such as the credit events that occurred within a short period in September 2008 involving Fannie Mae, Freddie Mac, Lehman Brothers, Washington Mutual, Landsbanki, Glitnir, and Kaupthing. We obtain a closed-form expression for the price of counterparty risk by means of an explicit characterization of the portfolio market value. The latter is obtained as the weak limit of measure-valued processes associated with the survival indicators of the names composing the portfolio.

We use the formula to perform an economic analysis, and find that the price of counterparty risk is monotonically increasing in credit risk volatility and highly sensitive to default correlation. Our findings support empirical evidence from the financial crisis, demonstrating that the volatility of credit spreads has been responsible for major credit quality deteriorations and default events.

**Maxim Bichuch** (Princeton University)

**Price of a Contingent Claim Liability in a Market with Small Transaction Costs**

**Abstract: **I price a contingent claim liability using the utility indifference argument. I consider an agent with exponential utility who invests in a stock and a money market account with the goal of maximizing the utility of his investment at the final time in the presence of a positive transaction cost, in two cases: with and without a contingent claim liability. I provide a rigorous derivation of the asymptotic expansion of the value function in the transaction cost parameter around the known value function for the case of zero transaction cost, in both cases. Additionally, using the utility indifference method, I derive an asymptotic expansion of the price of the contingent claim liability. In both cases, I also obtain a "nearly optimal" strategy whose utility asymptotically matches the leading terms of the value function.

**Frederi Viens** (Purdue University)

**Dependent stochastic models: new tools from analysis on Wiener space, with applications to parameter estimation, quantitative finance, and climatology**

**Abstract: **Finite and infinite-dimensional stochastic processes in continuous time provide a wealth of modeling tools for random systems with complex interactions. In the classical theory, which is still in wide use today, models rely heavily on processes with independent increments such as Brownian motion (the Wiener process), as building blocks, for instance via stochastic differential equations. This results in models with short-range dependence, and related features like the martingale or Markov properties, which are convenient for dynamic methodologies based on current observations, such as feedback control. However, applications ranging from communications networks, to polymer studies, to financial and geophysical time series, are increasingly calling for models with longer-range correlations, such as stochastic long memory for time-evolution systems.

To propose mathematically sound models in this direction, the most widely used building block in continuous time is the so-called fractional Brownian motion, but it comes with its own set of mathematical and practical challenges. We will present a broad overview of recent developments in Analysis on Wiener space which help overcome some of these challenges, including new Malliavin calculus tools for estimating densities, tails, and convergences in law. Among the applications, we will discuss asymptotics for statistical estimators in long-memory stochastic processes, stochastic volatility calibration and tracking in financial options markets in periods of turmoil, and proxy-based paleo-temperature Bayesian reconstruction with memory and external forcings.
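For readers unfamiliar with the building block above: fractional Brownian motion with Hurst index H is the centered Gaussian process with covariance R(s, t) = (s^{2H} + t^{2H} - |s - t|^{2H})/2; H = 1/2 recovers standard Brownian motion, while H > 1/2 gives long-range dependence. A minimal exact-simulation sketch via Cholesky factorization of the covariance (my own illustration; the function names are not from the talk):

```python
import math, random

def fbm_cov(H, times):
    # fBm covariance R(s, t) = 0.5 * (s^2H + t^2H - |s - t|^2H)
    return [[0.5 * (s ** (2 * H) + t ** (2 * H) - abs(s - t) ** (2 * H))
             for t in times] for s in times]

def cholesky(R):
    # lower-triangular L with R = L L^T (R must be positive definite)
    n = len(R)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = R[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def fbm_path(H, n, T=1.0, rng=random):
    # exact simulation of fBm on an equispaced grid of n points
    times = [T * (i + 1) / n for i in range(n)]
    L = cholesky(fbm_cov(H, times))
    z = [rng.gauss(0, 1) for _ in range(n)]
    return times, [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
```

Cholesky simulation is exact but costs O(n^3); for long paths one would switch to circulant-embedding methods.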

**Tom Bielecki** (Illinois Institute of Technology)

**Dependence between components of multivariate Markov chains: Markov consistency and Markov Copulae**

**Abstract:** Modeling the evolution of dependence between processes occurring in financial markets is important. Typically, one can identify the marginal statistical properties of the individual processes, and one is then confronted with the task of modeling the dependence between them so that the marginal properties are preserved. We have been advocating, for some time now, addressing this modeling problem via the theory of Markov consistency and Markov copulae.

In this talk we shall examine the problem of existence and construction of a multivariate Markov chain with components that are given Markov chains. In this regard we shall give necessary and sufficient conditions, in terms of relevant conditional expectations, for a component of a multivariate Markov chain to be a Markov chain in its own filtration - a property called weak Markov consistency. This characterization result is proved via analysis of the semimartingale structure of the chain.

We shall also discuss the issue of dependence between the components of a multivariate Markov chain in the context of weak Markovian consistency. Accordingly, we shall introduce and discuss the concept of weak Markov copulae. In addition, we shall examine the relationship between the concepts of weak Markov consistency and weak Markov copulae and the concepts of strong Markov consistency and strong Markov copulae that were introduced in our earlier works for Feller processes. Weak and strong Markovian copulae provide, in a sense, dynamic counterparts of the classical concept of copulae in probability. Interestingly, strong Markovian consistency can be fully characterized analytically in terms of the generator of the multivariate Markov chain. This appears not to be the case for weak Markovian consistency; here, the analytical condition needs to be supplemented with conditions imposed on the initial distribution of the chain.

We shall give an example of a finite Markov chain that satisfies strong Markovian consistency, an example of a chain that satisfies weak but not strong Markovian consistency, and, finally, an example of a chain that does not satisfy even weak Markovian consistency.

**THE ALAN GOLDMAN LECTURE**

**David Shmoys** (Cornell University)

David Shmoys is a Professor of Operations Research and Information Engineering as well as of Computer Science at Cornell University. He obtained his Ph.D. in Computer Science from the University of California at Berkeley in 1984, and held postdoctoral positions at MSRI in Berkeley and Harvard University, and a faculty position at MIT before joining the Cornell faculty. Shmoys's research has focused on the design and analysis of efficient algorithms for discrete optimization problems, with applications including scheduling, inventory theory, computational biology, and most recently, computational sustainability. He recently published (jointly with David Williamson) a graduate-level text, The Design of Approximation Algorithms. He is a Fellow of the ACM, a SIAM Fellow, and was an NSF Presidential Young Investigator; he has served on numerous editorial boards, is currently on the Scientific Advisory Board of Mathematics of the Planet Earth 2013, and is Chair of the IEEE Technical Committee on Mathematical Foundations of Computing.

**Improving Christofides' Algorithm for the s-t Path Traveling Salesman Problem**

**Abstract:** In 1976, Christofides gave an approximation algorithm for the traveling salesman problem (TSP) with metric costs that is guaranteed to find a tour of length at most 3/2 times that of the shortest tour visiting a given set of n cities; it remains an open problem to give a polynomial-time algorithm with a better performance guarantee. There have been a number of recent results that yield improvements for significant special cases and for related problems. In this talk, we shall present an approximation algorithm for the s-t path TSP with metric costs, which is guaranteed to find, in polynomial time, a solution of cost within a factor of the golden ratio of optimal; in this variant, in addition to the pairwise distances among n points, there are two prespecified endpoints s and t, and the problem is to find a shortest Hamiltonian path between s and t. Hoogeveen showed that the natural variant of Christofides' algorithm for this problem is a 5/3-approximation algorithm, and this asymptotically tight bound has been the best approximation ratio known until now. We modify this algorithm so that it chooses the initial spanning tree based on an optimal solution to a natural linear programming relaxation, rather than a minimum spanning tree; we prove that this simple but crucial modification leads to an improved approximation ratio, surpassing the 20-year-old barrier set by the natural Christofides' algorithm variant.

This is joint work with Hyung-Chan An and Robert Kleinberg.
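For concreteness, here is a small sketch of the baseline the talk improves on - Hoogeveen's Christofides-style 5/3-approximation for the metric s-t path TSP: build a minimum spanning tree, fix the parities of the wrong-degree vertices with a minimum-weight matching (s and t should have odd degree, all other vertices even), take an Eulerian s-t walk, and shortcut. This is my illustrative implementation, with brute-force matching for tiny instances, not the authors' LP-based algorithm.

```python
import itertools, math

def christofides_st_path(points, s, t):
    """Hoogeveen's variant of Christofides for the metric s-t path TSP."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    # 1. minimum spanning tree (Prim)
    in_tree, tree = {0}, []
    while len(in_tree) < n:
        a, b = min(((a, b) for a in in_tree for b in range(n)
                    if b not in in_tree), key=lambda e: d(*e))
        in_tree.add(b); tree.append((a, b))
    deg = [0] * n
    for a, b in tree:
        deg[a] += 1; deg[b] += 1
    # 2. wrong-parity vertices: s, t should have odd degree, the rest even
    T = [v for v in range(n) if (deg[v] % 2 == 0) == (v in (s, t))]
    # 3. minimum-weight perfect matching on T (brute force; tiny inputs only)
    def match(rest):
        if not rest:
            return 0.0, []
        a, rest = rest[0], rest[1:]
        best = (math.inf, [])
        for k, b in enumerate(rest):
            c, m = match(rest[:k] + rest[k + 1:])
            if c + d(a, b) < best[0]:
                best = (c + d(a, b), m + [(a, b)])
        return best
    matching = match(T)[1]
    # 4. Eulerian s-t walk on tree + matching (Hierholzer), then shortcut
    adj = {v: [] for v in range(n)}
    for a, b in tree + matching:
        adj[a].append(b); adj[b].append(a)
    stack, walk = [s], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop(); adj[w].remove(v); stack.append(w)
        else:
            walk.append(stack.pop())
    trail = walk[::-1]                     # Eulerian walk from s to t
    seen, path = {t}, []
    for v in trail:                        # shortcut repeats, keeping t last
        if v not in seen:
            seen.add(v); path.append(v)
    return path + [t]
```

The modification presented in the talk replaces the initial minimum spanning tree with a tree chosen from an optimal solution to a natural LP relaxation, which is what pushes the ratio below 5/3.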

**Hans Lindblad** (Johns Hopkins University)

**Free Boundary problems for Fluids**

**Abstract:** I will talk about the well-posedness of the free boundary problem describing the motion of the surface of a fluid in vacuum. The boundary of the fluid moves with the velocity of the fluid at the boundary, and the acceleration of the fluid at the boundary is determined through Euler's equations by the pressure, which vanishes at the boundary since there is no fluid in the exterior. This leads to a free boundary problem. An example is the surface of the ocean.

**Susan Margulies** (Pennsylvania State University)

**Hilbert's Nullstellensatz and Linear Algebra: An Algorithm for Determining Combinatorial Infeasibility**

**Abstract:** Unlike systems of linear equations, systems of multivariate polynomial equations over the complex numbers or finite fields can be used to compactly model combinatorial problems. In this way, a problem is feasible (e.g., a graph is 3-colorable, Hamiltonian, etc.) if and only if a given system of polynomial equations has a solution. Via Hilbert's Nullstellensatz, we generate a sequence of large-scale, sparse linear algebra computations from these non-linear models to obtain an algorithm for solving the underlying combinatorial problem. As a byproduct of this algorithm, we produce algebraic certificates of the non-existence of a solution (i.e., non-3-colorability, non-Hamiltonicity, or non-existence of an independent set of size k).

In this talk, we present theoretical and experimental results on the size of these sequences and on the complexity of the Hilbert Nullstellensatz algebraic certificates. For non-3-colorability over a finite field, we use this method to successfully solve graph instances having thousands of nodes and tens of thousands of edges. We also describe ways of optimizing this approach, such as finding alternative forms of the Nullstellensatz, adding carefully constructed polynomials to the system, branching, and exploiting symmetry.
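A toy version of the polynomial encoding (my illustration, checking feasibility by brute force rather than by the Nullstellensatz linear algebra of the talk): a graph is 3-colorable if and only if the system x_i^3 - 1 = 0 (one equation per vertex) together with x_i^2 + x_i x_j + x_j^2 = 0 (one per edge) has a common solution, since the vertex equations force each x_i to be a cube root of unity and, for cube roots of unity, the edge polynomial vanishes exactly when x_i ≠ x_j.

```python
import cmath, itertools

# the three colors are the cube roots of unity
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def is_3colorable(n, edges):
    """Brute-force check that the polynomial system has a common root."""
    for x in itertools.product(ROOTS, repeat=n):
        # vertex equations x_i^3 = 1 hold by construction; check the edges
        if all(abs(x[i] ** 2 + x[i] * x[j] + x[j] ** 2) < 1e-9
               for i, j in edges):
            return True
    return False
```

A Nullstellensatz certificate of infeasibility would instead exhibit polynomial multipliers combining with this system to produce 1, found by solving a large sparse linear system; the brute-force check above only illustrates what such certificates rule out.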

**David Bergman** (Carnegie Mellon University)

**Decision Diagrams for Discrete Optimization**

**Abstract:** In recent years, decision diagrams (DDs) have been applied to a variety of problems in operations research, including sequential pattern data mining, cut generation, product configuration, and post-optimality analysis in integer programming, to name a few. Through these applications and others, DDs have proven to be a useful tool for a variety of tasks. Unfortunately, DDs may grow exponentially large, prohibiting their application.

To overcome this difficulty, we introduce the notion of limited-width approximate decision diagrams. By limiting the width of a decision diagram, the size of the data structure can be controlled to the level of accuracy desired.

In this talk, we will discuss the application of approximate DDs to discrete optimization problems, where we investigate their use as problem relaxations and restrictions of the feasible set. Approximate DDs can then be used to generate both upper and lower bounds on the objective function value for any separable objective function.

We then discuss how relaxed and restricted DDs can be used together to create a DD-based branch-and-bound algorithm. The algorithm differs substantially from traditional branch-and-bound algorithms on this class of problems in several important ways. First, relaxed DDs provide a discrete relaxation as opposed to a continuous relaxation (for example a linear programming relaxation), which is typically employed. In addition, subproblems are generated by branching on several partial solutions taken at once, thereby eliminating certain symmetry from the search. We discuss the application of the algorithm to the classical maximum independent set problem. Computational results show that the algorithm is competitive with state-of-the-art integer programming technology.
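The relaxation mechanism can be sketched for the maximum independent set problem mentioned in the abstract (my illustrative code, not the authors' implementation): layers process vertices one at a time, a node's state is the set of still-eligible vertices, and whenever a layer exceeds the width limit, states are merged by taking unions; merging only enlarges the set of completions, so the result is a valid upper bound.

```python
def mis_dd_bound(n, edges, max_width):
    """Upper bound on the max independent set from a width-limited relaxed DD."""
    nbr = {v: set() for v in range(n)}
    for a, b in edges:
        nbr[a].add(b); nbr[b].add(a)
    layer = {frozenset(range(n)): 0}   # eligible-vertex state -> best value
    for v in range(n):
        nxt = {}
        for state, val in layer.items():
            for take in (0, 1):        # branch: exclude v / include v
                if take and v not in state:
                    continue
                new = set(state) - {v} - (nbr[v] if take else set())
                new = frozenset(u for u in new if u > v)
                if nxt.get(new, -1) < val + take:
                    nxt[new] = val + take
        # relaxation: merge lowest-value states (by union) to respect the width
        while len(nxt) > max_width:
            (s1, v1), (s2, v2) = sorted(nxt.items(), key=lambda kv: kv[1])[:2]
            del nxt[s1], nxt[s2]
            merged = s1 | s2
            nxt[merged] = max(nxt.get(merged, -1), v1, v2)
        layer = nxt
    return max(layer.values())
```

With a large enough width the diagram is exact; restricting instead of relaxing (dropping states rather than merging them) yields the corresponding lower bound used on the other side of the branch-and-bound.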

**Gautam Iyer** (Carnegie Mellon University)

**Cellular flows: Homogenization, Averaging and Anomalous Diffusion**

**Abstract: **I will talk about two model problems concerning cellular flows (a.k.a. arrays of opposing vortices). The first concerns the behaviour as the flow amplitude $A$ (or, more precisely, the Peclet number) goes to infinity and the cell size (or vortex separation) $\epsilon$ approaches $0$ simultaneously. When one of the parameters is fixed, the problem has been extensively studied, and the limiting behaviour is that of an effective "homogenized" or "averaged" problem. When both vary simultaneously one sees an interesting transition at $A \approx \epsilon^{-4}$. While the behaviour in the averaged regime ($A \gg \epsilon^{-4}$) is well understood, the behaviour in the homogenized regime ($A \ll \epsilon^{-4}$) is poorly understood, and the critical transition regime is not understood at all.

The second problem concerns an anomalous diffusive behaviour observed on "intermediate" time scales. It is well known that a passive tracer diffusing in the presence of a strong cellular flow "homogenizes" and behaves like an effective Brownian motion on large time scales. On intermediate time scales, however, an anomalous diffusive behaviour was recently observed numerically. I will show a few preliminary results rigorously indicating stable "anomalous" behaviour at intermediate time scales, and show that this can be used to recover the homogenized behaviour on long time scales.
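The homogenized regime can be probed numerically. The sketch below (my own illustration, with arbitrary parameter choices) simulates passive tracers dX = v(X) dt + sqrt(2 kappa) dW in the standard cellular flow with stream function psi(x, y) = A sin x sin y, and estimates the effective diffusivity from the mean squared displacement; advection by the cell structure enhances it above the molecular value kappa.

```python
import math, random

def effective_diffusivity(A=4.0, kappa=0.2, T=20.0, dt=0.01,
                          particles=300, seed=0):
    """MSD estimate of the effective diffusivity for the cellular flow
    psi(x, y) = A sin x sin y, i.e. v = (A sin x cos y, -A cos x sin y)."""
    rng = random.Random(seed)
    noise = math.sqrt(2 * kappa * dt)
    msd = 0.0
    for _ in range(particles):
        x0, y0 = rng.uniform(0, math.pi), rng.uniform(0, math.pi)
        x, y = x0, y0
        for _ in range(int(T / dt)):      # Euler-Maruyama step
            vx = A * math.sin(x) * math.cos(y)
            vy = -A * math.cos(x) * math.sin(y)
            x += vx * dt + noise * rng.gauss(0, 1)
            y += vy * dt + noise * rng.gauss(0, 1)
        msd += (x - x0) ** 2 + (y - y0) ** 2
    return msd / particles / (4 * T)      # D_eff from <|X_T - X_0|^2> = 4 D T
```

On shorter runs one would instead probe the intermediate-time regime the talk describes, before the Brownian scaling sets in.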

**THE DUNCAN LECTURES**

**David Siegmund** (Stanford University)

**The Intersection of Operations Research, Kinetic Theory, and Genetics**

**Abstract:** A number of related, and in some cases identical, mathematical models were developed in the early decades of the 20th century to explain the apparent paradox between the second law of thermodynamics and its meaning in statistical mechanics, to model the number of calls in a telephone exchange, and to verify experimentally the theory of Brownian motion. I will give a historical review of these models and then jump ahead to the end of the century, when at least some of the same models and essentially the same questions reappear in the statistical foundations of gene mapping in experimental and in human genetics.

**David Siegmund** (Stanford University)

**Detection of Local Signals in Genomics**

**Abstract: **A large class of problems in genomic analysis involves the detection of local genomic signals. Examples are (i) linkage and/or association analysis, (ii) detection of intervals of copy number variation based on SNP arrays, (iii) detection of DNA structural variations from paired-end reads, and (iv) detection of protein-DNA interactions by ChIP-Seq. Theoretical approaches to some of these problems based on change-point methods are described, with particular attention to the problem of multiple comparisons. The theoretical results are illustrated on simulated and on real data.

This is joint research with (among others) Nancy Zhang (Penn) and Benny Yakir (Hebrew University of Jerusalem).
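A minimal example of the change-point viewpoint (my illustration; the talk's methods and multiple-comparison corrections are far more refined): scan all intervals of a noisy sequence with a standardized interval-sum statistic and report the maximizer. With a mean shift planted in a segment, the scan recovers an overlapping interval; calibrating the maximum over all intervals is exactly where the multiple-comparisons issue enters.

```python
import math, random

def scan(x):
    """Max over intervals [i, j) of |interval sum| / sqrt(interval length)."""
    n, prefix = len(x), [0.0]
    for v in x:
        prefix.append(prefix[-1] + v)
    best, best_iv = -1.0, (0, 1)
    for i in range(n):
        for j in range(i + 1, n + 1):
            stat = abs(prefix[j] - prefix[i]) / math.sqrt(j - i)
            if stat > best:
                best, best_iv = stat, (i, j)
    return best, best_iv

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(300)]
for k in range(120, 160):          # plant a copy-number-like mean shift
    x[k] += 1.5
z, (i, j) = scan(x)                # detected interval overlaps [120, 160)
```

The O(n^2) scan is fine at this scale; in genome-scale applications one restricts the interval lengths and relies on the approximations for the tail of the maximum that the talk discusses.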

**Laurent Younes** (Johns Hopkins University)

**Constrained Optimal Control for Diffeomorphic Registration**

**Abstract: **Diffeomorphic shape analysis has emerged as a powerful approach for understanding nonrigid shape variation, designing deterministic and stochastic shape evolution models, and performing segmentation, tracking, or registration tasks while guaranteeing that topology remains unchanged. The approach, which is rooted in Grenander's deformable template theory, consists in considering the whole space as an elastic or viscous medium to which shapes of interest are attached. Global deformations of this space induce shape deformation by restriction. The appeal of this approach is that the modeling effort can be made only once, by considering the space of global deformations, in order to induce a variety of shape spaces, one for each type of deformed object (landmarks, curves, surfaces, images, ...). This space of global deformations is well defined and well studied, since it can be represented as a group of diffeomorphisms in a Euclidean space, on which a Lie group structure, or invariant Riemannian metrics, can be defined.

In this setting, the registration problem (mapping a shape onto another) can be recast as an optimal control problem in which the control is a vector field, namely the Eulerian velocity of an underlying time-dependent diffeomorphism. The control cost is generally defined a priori (i.e., it does not depend on the object that is being deformed) and controls the smoothness of the velocity. There are, however, several situations, both in medical imaging and computer vision, in which additional constraints on the deformation arise in a natural way. The talk will describe several such examples and the associated computational methods. It will review, in particular, numerical procedures that restrict the velocity field to finite-dimensional distributions, which can be useful when designing multi-scale registration schemes. We will also describe multi-phase registration problems, in which objects move non-rigidly within a deformable background that is stitched to them to induce a homeomorphism of the ambient space.

Joint work with Sylvain Arguillere, Emmanuel Trelat and Alain Trouve.

Partially Supported by the ONR, NSF and NIH.
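As a toy illustration of the control-as-velocity-field idea (my sketch, not the constrained algorithms of the talk): in landmark registration, a smooth velocity field can be expanded over a reproducing kernel centered at the landmarks, and the expansion coefficients (momenta) solved from a small linear system so that the field carries each source landmark onto its target. Full diffeomorphic registration integrates such fields over time along an optimal control trajectory; the one-step version below only conveys the parametrization.

```python
import math

def gauss_kernel(p, q, s=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(p, q)) / (2 * s * s))

def solve(A, b):
    # naive Gauss-Jordan elimination with partial pivoting (tiny systems only)
    n = len(A)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def landmark_velocity(sources, targets, s=1.0):
    """Velocity field v(x) = sum_i K(x, x_i) a_i with momenta a_i chosen so
    that v moves each source landmark exactly onto its target (a one-step
    caricature of flow-based registration)."""
    dim = len(sources[0])
    K = [[gauss_kernel(p, q, s) for q in sources] for p in sources]
    a = [solve(K, [t[d] - p[d] for p, t in zip(sources, targets)])
         for d in range(dim)]
    def v(x):
        w = [gauss_kernel(x, p, s) for p in sources]
        return tuple(sum(wi * a[d][i] for i, wi in enumerate(w))
                     for d in range(dim))
    return v
```

The Gaussian kernel matrix is positive definite for distinct landmarks, so the momenta are well defined; in the optimal control formulation the same kernel defines the a priori smoothness cost on the velocity.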

**Lucy Robinson** (Drexel University)

**Detecting Time-dependent Subpopulations in Network Data**

**Abstract:** We introduce a new class of latent process models for dynamic relational network data with the goal of detecting time-dependent structure. Network data are often observed over time, and static network models for such data may fail to capture relevant dynamic features. We present a new technique for identifying the emergence or disappearance of distinct subpopulations of vertices. In this formulation, a network is observed over time, with attributed edges appearing at random times. At unknown time points, subgroups of vertices may exhibit a change in behavior. Such changes may take the form of a change in the overall probability of connection within or between subgroups, or a change in the distribution of edge attributes. A mixture distribution for latent vertex positions is used to detect heterogeneities in connectivity behavior over time and over vertices. The probability of edges with various attributes at a given time is modeled using a latent-space stochastic process associated with each vertex. A random dot product model is used to describe the dependency structure of the graph. As an application we analyze the Enron email corpus.
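A minimal sketch of the random dot product ingredient (my illustration; the model in the talk adds time dependence, edge attributes, and latent mixtures): each vertex carries a latent position, edges appear independently with probability equal to the dot product of the endpoint positions, and a change in a subgroup's latent positions shows up directly as a change in within- versus between-group connectivity.

```python
import itertools, random

def sample_rdpg(positions, seed=0):
    """Random dot product graph: edge {i, j} appears independently
    with probability <x_i, x_j>."""
    rng = random.Random(seed)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return {frozenset((i, j))
            for i, j in itertools.combinations(range(len(positions)), 2)
            if rng.random() < dot(positions[i], positions[j])}

# two latent subgroups: dense within each group, sparse across groups
pos = [(0.8, 0.1)] * 100 + [(0.1, 0.8)] * 100
edges = sample_rdpg(pos)

def density(group_a, group_b):
    pairs = [frozenset((i, j)) for i in group_a for j in group_b if i < j]
    return sum(p in edges for p in pairs) / len(pairs)

within = density(range(100), range(100))        # expected 0.8^2 + 0.1^2 = 0.65
across = density(range(100), range(100, 200))   # expected 2 * 0.8 * 0.1 = 0.16
```

Detecting an emerging subpopulation then amounts to detecting a time point at which some vertices' latent positions, and hence these block densities, change.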

**Youngmi Hur** (Johns Hopkins University)

**Multi-D Wavelet FB Design using Unimodular Completion over Laurent Polynomial Rings**

**Abstract: **In this talk we present a new approach to constructing wavelet filter banks. Our approach enables the construction of nonseparable multidimensional non-redundant wavelet filter banks using unimodular completion over Laurent polynomial rings. Our construction method presents some advantages over the traditional methods of multidimensional wavelet filter bank design. First, it works for any spatial dimension and for any sampling matrix. Second, it does not require the initial lowpass filters to satisfy any additional assumption such as an interpolatory condition. Third, it provides an algorithm for constructing a wavelet filter bank from a single lowpass filter so that the filter bank has at least as many vanishing moments as the accuracy number of the lowpass filter.

04/25/2013

**Tiffany Kolba** (Valparaiso University)

**Noise-induced Stabilization of Stochastic Differential Equations**

**Abstract:** Noise-induced stabilization occurs when an unstable deterministic system is stabilized by the addition of white noise. Proving that this phenomenon occurs for a particular system typically relies on the construction of a global Lyapunov function. However, the procedure for constructing a Lyapunov function is often quite ad hoc, involving much time and tedium. In this talk, a systematic algorithm for the construction of a global Lyapunov function for planar systems of stochastic differential equations will be presented. The general methodology is to construct a sequence of local Lyapunov functions in different regions of the plane, where the regions are delineated by different behaviors of the deterministic dynamics, and then patch the local Lyapunov functions together to form one smooth global Lyapunov function. The algorithm is applied to a model problem which displays finite-time blow-up in the deterministic setting in order to prove that the system exhibits noise-induced stabilization.
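The phenomenon is easy to observe numerically. A standard planar model problem of this type (used here as my own illustration, with no claim that it is the talk's specific system) is dz = z^2 dt + sigma dW with z complex and W a complex Brownian motion: with sigma = 0 the real solution of x' = x^2, x(0) = 1 blows up at time t = 1, while the noisy planar system is known to be stabilized. The Euler scheme below exhibits the deterministic blow-up; the noisy run is included for comparison only, since discretization error prevents any rigorous numerical claim about non-explosion.

```python
import math, random

def euler_path(sigma, T=3.0, dt=1e-3, z0=1 + 0j, seed=0):
    """Euler(-Maruyama) for dz = z^2 dt + sigma dW; None signals blow-up."""
    rng = random.Random(seed)
    z = z0
    for _ in range(int(T / dt)):
        dw = complex(rng.gauss(0, 1), rng.gauss(0, 1)) * math.sqrt(dt)
        z = z + z * z * dt + sigma * dw
        if abs(z) > 1e8:          # numerical finite-time blow-up
            return None
    return z

deterministic_blowup = euler_path(sigma=0.0) is None  # x' = x^2 explodes at t=1
noisy_endpoint = euler_path(sigma=1.0)                # noisy path, for comparison
```

The rigorous route, as in the talk, is instead a global Lyapunov function for the generator of the noisy system, built by patching local Lyapunov functions across the regions of the plane with different deterministic behavior.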