Seminar Abstracts (spring 2011)

02/03/2011

Tim S. T. Leung (Johns Hopkins University)
Price Discrepancy and Optimal Timing to Buy Derivatives

Abstract: In incomplete markets, where not all risks can be hedged, different risk-neutral or risk-averse pricing models may yield a range of no-arbitrage prices. Consequently, the investor's model price may disagree with the market price. This leads to the natural and important question of when it is optimal to buy a derivative security from the market.

We consider an investor who attempts to maximize the spread between her model price and the offered market price through optimally timing the purchase. Both the investor and the market value the options by risk-neutral expectations but under different equivalent martingale measures representing different market views or risk premia specifications. We show that the structure of the resulting optimal stopping problem depends on the interaction between the respective market price of risk and the option payoff. In particular, a crucial role is played by the delayed purchase premium that is related to the stochastic bracket between the market price and the buyer's risk premia. Explicit characterization of the purchase timing and numerical examples are given for two representative classes of Markovian models: (i) defaultable equity models with local intensity; (ii) diffusion stochastic volatility models.
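As a rough schematic of the timing problem (in my notation, not necessarily the speaker's exact setup): if $P^m_t$ denotes the market price of the option and $P^b_t$ the buyer's model price, the buyer solves an optimal stopping problem of the form

$ \sup_{\tau \le T} \mathrm{E}^b\!\left[ e^{-r\tau}\,\big(P^b_\tau - P^m_\tau\big) \right], $

where the supremum is over stopping times $\tau$ up to the option's maturity $T$ and the expectation is taken under the buyer's pricing measure; the delayed purchase premium mentioned above quantifies the benefit of waiting rather than purchasing immediately.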


02/08/2011

Tim Carnes (Massachusetts Institute of Technology)
Strong LP Formulations and Primal-Dual Approximation Algorithms

Abstract: The state of the art in the design and analysis of approximation algorithms for NP-hard discrete optimization problems has advanced significantly over the past two decades, and the most prevalent approach has been to rely on the strength of natural linear programming relaxations. Work in computational integer programming over the same period has shown the power of adding combinatorially-motivated valid inequalities, but this idea has rarely been employed in the design of approximation algorithms. We will show how it can be applied in the design and analysis of primal-dual approximation algorithms. We will present several recent approximation results along these lines, starting with a (surprisingly simple) primal-dual analogue of an LP-rounding result of Carr, Fleischer, Leung, & Phillips for the minimum-cost covering knapsack problem, and continuing with the capacitated lot-sizing problem and a location-routing problem that arose recently in the context of determining bases for aircraft serving medical patients in the province of Ontario. For the last of these, we will also discuss the motivating application along with a ``daily planning'' version of the problem that can be addressed via integer programming methods.
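For concreteness, one well-known family of combinatorially-motivated valid inequalities of the kind alluded to above (the knapsack-cover inequalities associated with the Carr, Fleischer, Leung, & Phillips result) can be written for the minimum-cost covering knapsack problem $\min\{ c^T x : \sum_i u_i x_i \ge D,\ x \in \{0,1\}^n \}$: for every subset $A$ of items with $u(A) = \sum_{i \in A} u_i < D$,

$ \sum_{i \notin A} \min\{ u_i,\, D - u(A) \}\, x_i \ \ge\ D - u(A). $

Each such inequality is valid because, even if every item in $A$ is selected, the remaining items must still cover the residual demand $D - u(A)$, and no single item needs to be credited with more than that residual. The primal-dual algorithms in the talk may of course use different or additional inequalities.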


02/10/2011

Siqian Shen (University of Florida)
Expectation and Chance-constrained Models and Algorithms for Insuring Critical Paths

Abstract: We consider a class of two-stage stochastic integer programming problems arising in the protection of vital arcs in a critical path network, in which task finishing times are uncertain but can be insured a priori to mitigate potential delays. A decision maker must trade off the costs incurred in insuring arcs against the expected penalties associated with late project completion, where lateness penalties are assumed to be general lower semi-continuous nondecreasing functions of completion time. We provide decomposition strategies to solve this problem for either convex or nonconvex penalty functions. In particular, for the nonconvex penalty case, we employ the Reformulation-Linearization Technique to make the problem amenable to solution via Benders decomposition. We also consider a chance-constrained version of this problem, in which the probability of completing the project on time is required to be sufficiently large. The computational efficacy of our approach is demonstrated on a set of randomly generated instances, using the Sample Average Approximation method to guide our scenario generation.
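Schematically (a sketch of the model class in my notation, not the speaker's exact formulation), the expectation version has the form

$ \min_{x \in \{0,1\}^{|A|}} \ c^T x + \mathrm{E}_\xi\big[ \phi( T(x,\xi) ) \big], $

where $x$ encodes which arcs are insured at cost $c$, $T(x,\xi)$ is the project completion time (the length of the critical path) under scenario $\xi$ given the insurance decisions, and $\phi$ is the lower semi-continuous nondecreasing lateness penalty; the chance-constrained version instead imposes $\Pr\{ T(x,\xi) \le d \} \ge 1 - \alpha$ for a given deadline $d$ and risk level $\alpha$.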


02/15/2011

N. Serhat Aybat (Columbia University)
Unified Approach for Minimizing Composite Norms

Abstract: (available separately as a .pdf file)


02/17/2011

Daniel Robinson (Northwestern University)
The Use of Active-Set Phases to Accelerate Optimization Algorithms

Abstract: In the first part of the talk I will clarify the meaning of “active-set phases” by giving an overview of three optimization algorithms that are currently being developed. The first algorithm is designed for solving linear complementarity problems and is motivated by a financial optimization problem. The second algorithm is used for solving (convex) stochastic optimization problems and will be described in the context of a speech recognition application. The third algorithm---known as S2QP---is a general-purpose second-derivative sequential quadratic programming (SQP) method that is designed for solving sparse large-scale nonlinear and nonconvex problems. Since S2QP benefits from so-called “warm-starting”, I will motivate its development by considering an optimal control problem. I will dedicate the second part of the talk to a detailed presentation of S2QP. In particular, I will (i) describe various difficulties associated with developing second-derivative SQP methods; (ii) explain the strategies that I used to avoid these hurdles; (iii) provide global and local convergence results; and (iv) furnish numerical tests that elucidate the benefits of an active-set phase.
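For background, the generic SQP template (not S2QP's specific subproblem) applied to $\min_x f(x)$ subject to $c(x) \ge 0$ computes each trial step $d_k$ from the quadratic program

$ \min_d \ \nabla f(x_k)^T d + \frac{1}{2}\, d^T H_k\, d $ subject to $ c(x_k) + \nabla c(x_k)^T d \ge 0, $

where $H_k$ approximates the Hessian of the Lagrangian. A second-derivative method such as S2QP uses the exact Hessian, which may be indefinite and makes the subproblem nonconvex, one of the difficulties referred to above.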


02/22/2011

Shiqian Ma, Columbia University
Fast Splitting and Alternating Linearization Methods for Convex Optimization

Abstract: Splitting and alternating direction methods have been widely used for solving convex optimization problems. However, iteration complexity bounds have not been given for these types of methods. In this talk, we present two classes of splitting and alternating direction methods for which we can obtain such bounds. The basic versions of both classes of methods require $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution. Accelerated versions of these methods require $O(1/\sqrt{\epsilon})$ iterations, while the computational effort needed at each iteration is almost unchanged. To the best of our knowledge, these complexity results are the first ones of this type that have been given for splitting and alternating direction type methods. Numerical results on problems with millions of variables and constraints are reported to demonstrate the efficiency of the proposed methods.
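To fix ideas, the classical alternating direction method of multipliers for $\min_{x,z} f(x) + g(z)$ subject to $x = z$ iterates (in scaled dual form)

$ x^{k+1} = \mathrm{argmin}_x\, f(x) + \frac{\rho}{2}\|x - z^k + u^k\|^2, \qquad z^{k+1} = \mathrm{argmin}_z\, g(z) + \frac{\rho}{2}\|x^{k+1} - z + u^k\|^2, \qquad u^{k+1} = u^k + x^{k+1} - z^{k+1}, $

with penalty parameter $\rho > 0$. This is only the basic template; the splitting and alternating direction methods of the talk build on schemes of this kind and are the ones for which the $O(1/\epsilon)$ and $O(1/\sqrt{\epsilon})$ iteration bounds quoted above are established.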


02/24/2011

Fatma Kilinc Karzan (Georgia Institute of Technology)
Large-Scale Optimization with Applications in Compressed Sensing

Abstract: In this talk, we will cover some of the recent developments in large-scale optimization motivated by the compressed sensing paradigm. Sparsity plays a key role in dealing with high-dimensional data sets. Exploiting this fact, compressed sensing suggests a new paradigm by directly acquiring and storing only a few linear measurements (possibly corrupted with noise) of high-dimensional signals, and then using efficient $\ell_1$-recovery procedures for reconstruction. Successful applications of this theory range from MRI image processing to statistics and machine learning. This talk will have two main parts. In the first part, after presenting the necessary background from compressed sensing, we will show that prior results can be generalized to utilize a priori information given in the form of sign restrictions on the signal. We will investigate the underlying conditions allowing successful $\ell_1$-recovery of sparse signals and show that these conditions, although difficult to evaluate, lead to sufficient conditions that can be efficiently verified via linear or semidefinite programming. We will analyze the properties of these conditions while making connections to disjoint bilinear programming, and describe their limits of performance. In the second part, we will develop efficient first-order methods with both deterministic and stochastic oracles for solving large-scale well-structured convex optimization problems. Our developments are motivated by the need for sublinear-time algorithms to solve large-scale parametric bilinear saddle point problems where cheap online assessment of the solution quality is crucial. As an application of this theory, we will show how large-scale problems originating from compressed sensing fall into this framework. We will conclude with numerical results demonstrating the effectiveness of our algorithms.
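As a concrete instance of the recovery step, the standard $\ell_1$-recovery (basis pursuit) program for measurements $y = Ax + e$ with noise level $\|e\|_2 \le \delta$ is

$ \min_x \ \|x\|_1 $ subject to $ \|Ax - y\|_2 \le \delta, $

and one natural way to exploit a priori sign information is to add the constraints $x_i \ge 0$ (or $x_i \le 0$) on the sign-restricted coordinates; conditions of the kind discussed above are what guarantee that such programs recover sparse signals.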


03/03/2011

Avanti Athreya (Duke University)
A model for the cooperative dynamics of processive molecular motors

Abstract: Biological cells have a cytoskeletal network of filaments, called microtubules, and proteins, called molecular motors, to transport cargo throughout the cell. We develop a mathematical model for microtubule-motor-cargo dynamics that emphasizes the spatial configuration and the distribution of forces generated on and by the cargo. In the analysis, we find that the performance of multiple motors versus a single motor depends on the applied external force. In small-force regimes, multiple motors move cargo more slowly than a single motor, while in high-force regimes we find superlinear improvement in transport performance. Since obtaining clear experimental evidence of the influence of multiple motors has been elusive, our model provides an analytical framework to understand when and how multiple motors might influence transport. (Joint work with Scott McKinley, Peter Kramer, and John Fricks)


03/10/2011

Youngmi Hur (Johns Hopkins University)
Coset Sum: an alternative to the tensor product in wavelet construction

Abstract: A multivariate wavelet system can be obtained from multivariate refinement masks in the Multiresolution Analysis setup. Although the tensor product is a popular method for building multivariate refinement masks from univariate refinement masks, we show that it is not the only method that can achieve this. We introduce an alternative to the tensor product, which we call the coset sum. The coset sum shares many essential features of the tensor product that make it attractive in practice: (1) it preserves the biorthogonality of univariate refinement masks, (2) it preserves the accuracy number of the univariate refinement mask, and (3) the wavelet system associated with it has fast algorithms for computing and inverting the wavelet coefficients. These features of the coset sum suggest that it is worthwhile to develop and use alternatives to the tensor product for constructing multivariate wavelet systems.
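For reference, the tensor product construction that the coset sum is intended to replace takes univariate lowpass masks $h_1, \dots, h_n$ and defines the multivariate mask coefficient-wise by

$ h(k_1, \dots, k_n) = h_1(k_1) \cdots h_n(k_n), $

so that its symbol is the product of the univariate symbols; the coset sum combines the univariate masks differently while, as stated above, retaining biorthogonality, accuracy, and fast algorithms.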


03/17/2011

Simon Foucart (Drexel University)
Recovering sparse vectors via Hard Thresholding Pursuit

Abstract: This talk will provide a brief overview of the field of Compressive Sensing, where one aims to reconstruct sparse vectors from a seemingly incomplete set of linear measurements. The focus will be on the reconstruction process rather than the measurement process. First, we will review the classical $\ell_1$-minimization scheme and its extension to the context of joint sparsity. We will then discuss some iterative algorithms, in particular an algorithm we call Hard Thresholding Pursuit. We will highlight its advantages in terms of simplicity, speed, and theoretical performance. We will also see how it extends to the context of joint sparsity.
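As an illustration of the simplicity highlighted above, here is a minimal Python sketch of a Hard Thresholding Pursuit-style iteration (unit step size and a simplified stopping rule; illustrative only, not the speaker's code):

import numpy as np

def hard_thresholding_pursuit(A, y, s, max_iter=100):
    """Recover an (approximately) s-sparse x from y = A @ x.
    Each iteration: gradient step, keep the s largest entries,
    then least-squares debiasing on the selected support."""
    m, N = A.shape
    x = np.zeros(N)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        u = x + A.T @ (y - A @ x)                           # gradient step
        new_support = np.sort(np.argsort(np.abs(u))[-s:])   # hard thresholding
        coeffs, _, _, _ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
        x = np.zeros(N)
        x[new_support] = coeffs                              # debias on the support
        if np.array_equal(new_support, support):             # support has stabilized
            break
        support = new_support
    return x

Each iteration costs one pass of matrix-vector products plus a small least-squares solve, consistent with the simplicity and speed advantages mentioned above.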


03/24/2011

Spring break
No seminar


03/31/2011

Gregory Eyink (Johns Hopkins University)
Spontaneous Stochasticity, Flux Freezing and Magnetic Dynamo

Abstract: We learn in our courses on differential equations that the solution of an initial-value problem not only exists but is unique. However, this need not be true if the vector field in the ODE is continuous but not Lipschitz. This situation is believed to occur in infinite-Reynolds-number turbulent fluid flows, with the result that the solution of the deterministic initial-value problem becomes stochastic! We discuss both rigorous results and empirical evidence for such ``spontaneous stochasticity''. But, aside from being an intriguing physical phenomenon, does it matter? I'll argue that it fundamentally alters our understanding of magnetohydrodynamic turbulence and, in particular, of the magnetic dynamo, the process by which astrophysical objects such as the Sun and the Earth generate their magnetic fields. Previous dynamical systems work on chaotic dynamos (V.I. Arnold, L.S. Young, E. Ott) thus omits a fundamental turbulent process of real physical dynamos.
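A textbook example of the non-uniqueness alluded to above: the initial-value problem

$ \dot{x}(t) = \sqrt{|x(t)|}, \qquad x(0) = 0, $

has a continuous but non-Lipschitz right-hand side at $x = 0$ and admits infinitely many solutions, among them $x(t) \equiv 0$ and $x(t) = t^2/4$; spontaneous stochasticity concerns an analogous breakdown of uniqueness for fluid particle trajectories in the infinite-Reynolds-number limit.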


04/07/2011

Avner Bar-Hen (University Paris-Descartes)
Spatial Cluster Detection Using the Number of Connected Components of a Graph

Abstract: The aim of this work is to detect spatial clusters based on the number of connected components of a graph. We link the Erdős random graph and the Poisson point process. We give the probability distribution function (pdf) of the number of connected components for an Erdős graph and obtain the pdf of the number of clusters for a Poisson process. We also obtain the pdf of the number of clusters bigger than a given threshold for Poisson processes. We extend these results to marked point processes by using multi-class Erdős graphs. Using these results, we obtain a test for complete spatial randomness (CSR) and also identify the clusters that violate the CSR hypothesis. Border effects are computed. We illustrate our results on a tropical forest example.
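As a minimal computational sketch of the underlying idea (my own toy version using a distance-threshold graph and simulation, not the speaker's method, which works with Erdős-type random graphs and exact distributions), one can compare the observed number of connected components to its Monte Carlo distribution under complete spatial randomness:

import numpy as np
import networkx as nx

def num_components(points, radius):
    """Connect points closer than `radius` and return the number of
    connected components, i.e. the number of clusters.
    `points` is an (n, 2) array of coordinates in the unit square."""
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < radius:
                g.add_edge(i, j)
    return nx.number_connected_components(g)

def csr_pvalue(points, radius, n_sim=999, seed=0):
    """Monte Carlo test of complete spatial randomness on the unit square:
    unusually few components (points clumped into clusters) give small p-values."""
    rng = np.random.default_rng(seed)
    observed = num_components(points, radius)
    null = [num_components(rng.random(points.shape), radius) for _ in range(n_sim)]
    return (1 + sum(c <= observed for c in null)) / (n_sim + 1)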


04/14/2011

The 2010/2011 Acheson J. Duncan Lecture (First Lecture)
Joel Zinn (Texas A&M University)
A Meandering ``Trip'' Through High Dimensions

Abstract: The theme of these talks is high dimensionality. In the first talk we discuss both the curse and the blessing of dimensionality. We touch on some counterintuitive examples concerning volumes. Through the ideas of concentration phenomena, we indicate how some of these questions can be attacked using Probability Theory.
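One standard example of counterintuitive volume behavior (a well-known fact, not necessarily one of the lecture's examples): the volume of the unit ball in $\mathbb{R}^d$ is

$ V_d = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)}, $

which tends to $0$ as $d \to \infty$, so the unit ball occupies a vanishing fraction of the cube $[-1,1]^d$ that circumscribes it.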

In the second talk we discuss infinite-dimensional limit theorems, both in the Banach space setting and for empirical processes.


04/15/2011

The 2010/2011 Acheson J. Duncan Lecture (Second Lecture)
Joel Zinn (Texas A&M University)
Limit Theorems in High Dimensions

Abstract: (see previous)


04/21/2011

The 2010/2011 John C. and Susan S.G. Wierman Lecture
C. Arden Pope III
Human Health Effects of Air Pollution: Statistics and Public Policy

Abstract: The science of air pollution and health has a rich and complex history. Episode studies, daily time-series studies, panel-based acute exposure studies, and case-crossover studies provide a large and diverse evidence base indicating that short-term air pollution exposure exacerbates existing cardio-pulmonary disease and increases the risk of becoming symptomatic, requiring medical attention, or even dying. Natural experiment studies, cohort- and panel-based studies, and case-control studies indicate even larger effects of long-term air pollution exposure. An important aspect of this research has been the use and development of statistical modeling. This presentation will discuss the role of statistical modeling approaches used to analyze the different types of health endpoint data generated from these study designs. Specific, high-profile examples that have substantially impacted public policy will be provided.


04/28/2011

Jim Fill (Johns Hopkins University)
Distributional Convergence for the Number of Symbol Comparisons Used by QuickSort

Abstract: We will begin by reviewing the operation of the sorting algorithm QuickSort. Most previous analyses of QuickSort have used the number of key comparisons as a measure of the cost of executing the algorithm. In contrast, we suppose that the n independent and identically distributed (iid) keys are each represented as a sequence of symbols from a probabilistic source and that QuickSort operates on individual symbols, and we measure the execution cost as the number of symbol comparisons. Assuming only a mild "tameness" condition on the source, we show that there is a limiting distribution for the number of symbol comparisons after normalization: first centering by the mean and then dividing by n. Additionally, under a condition that grows more restrictive as p increases, we have convergence of moments of orders p and smaller. In particular, we have convergence in distribution and convergence of moments of every order whenever the source is memoryless, i.e., whenever each key is generated as an infinite string of iid symbols. This is somewhat surprising: Even for the classical model that each key is an iid string of unbiased ("fair") bits, the mean exhibits periodic fluctuations of order n.
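To make the cost model concrete, here is a small Python sketch (illustrative only; the talk's model uses keys that are infinite symbol strings from a probabilistic source) in which keys are strings and the cost of QuickSort is charged per symbol inspected rather than per key comparison:

symbol_comparisons = 0

def compare(a, b):
    """Lexicographic comparison of keys a and b, counting one unit of
    cost for every symbol position inspected."""
    global symbol_comparisons
    for x, y in zip(a, b):
        symbol_comparisons += 1
        if x != y:
            return -1 if x < y else 1
    return (len(a) > len(b)) - (len(a) < len(b))

def quicksort(keys):
    """Textbook QuickSort on a list of string keys; the total cost is the
    final value of symbol_comparisons, the number of symbol comparisons."""
    if len(keys) <= 1:
        return keys
    pivot, rest = keys[0], keys[1:]
    smaller = [k for k in rest if compare(k, pivot) < 0]
    larger = [k for k in rest if compare(k, pivot) >= 0]
    return quicksort(smaller) + [pivot] + quicksort(larger)

With keys drawn independently from a memoryless source, the accumulated value of symbol_comparisons, after the centering and scaling described above, is the quantity whose limiting distribution the talk establishes.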


05/05/2011

Olympia Hadjiliadis (Brooklyn College, CUNY)
Market crashes through the drawdown and its speed

Abstract: This presentation is divided into two parts: the practical and the theoretical. In the first part we present a way to quantify market crashes through two cleverly chosen random variables, while in the second we study their probabilistic behavior. The first random variable is the drawdown of an asset and the second is the speed at which a drawdown is realized. Drawdowns measure the first time the current drop of an investor's wealth from its historical maximum reaches a pre-specified level. They are thus measures of risk which are widely used in financial risk management. Due to their path-dependent nature they can also be used as indicators of market crashes. Yet, in order to characterize the severity of a market crash, one needs to take into account the speed at which it occurs. To capture this speed, we consider the last reset of the maximum of the underlying preceding the drawdown. We propose the difference between the time of the drawdown and the last reset of the maximum preceding the drawdown, as well as other measures related to this difference, as possible measures of the speed of a market crash. An investor interested in insurance against a market crash might wish to purchase options related to the drawdown and its speed, which gives rise to the question of pricing such contracts.

In the second part we address the problem of pricing insurance based on the drawdown and its speed. We take the difference between the drawdown time and the last reset of the maximum preceding the drawdown as the speed of the drawdown. We study the probabilistic behavior of the drawdown and its speed under general diffusion dynamics. In particular, we derive the joint Laplace transform of the speed of the market crash, the last visit time of the maximum of a process preceding the drawdown, and the maximum of the process under general diffusion dynamics. This derivation is achieved by decomposing the path into the segments before and after the last reset of the maximum preceding the drawdown. A key observation in our derivations is that the path before the last reset of the maximum and the path after it, conditional on the level of the process at the last reset of the maximum before the drawdown, are in fact independent. An important tool used in our derivations is the progressive enlargement of filtrations, which is used to turn the last reset of the maximum preceding the drawdown (an honest time) into a stopping time. We finally develop explicit formulas in the special cases of a drifted Brownian motion and the constant elasticity of variance model.
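In symbols (standard definitions consistent with the description above): for a price process $X$ with running maximum $M_t = \sup_{s \le t} X_s$, the drawdown is $D_t = M_t - X_t$, the drawdown time of level $a$ is $\tau_a = \inf\{ t \ge 0 : D_t \ge a \}$, the last reset of the maximum preceding it is $g = \sup\{ s \le \tau_a : X_s = M_s \}$, and the speed of the market crash is the difference $\tau_a - g$.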


 
