Save the Date: Mike Todd Retirement Celebration
August 28 & 29, 2014
The School of Operations Research and Information Engineering announces the retirement of Professor Michael J. Todd after 41 years of wonderful inspiration and service at Cornell University.
Thursday, August 28, 2014
Dinner Banquet Program
(limited to colleagues and former PhDs)
Friday, August 29, 2014
Symposium & Evening BBQ Party
Symposium Speakers
Robert Freund
Theresa Seley Professor in Management Science, MIT Sloan School of Management
Mike Todd: Moving Optimization Forward
From fixed-point methods to the simplex method, from interior-point methods to the ellipsoid method, Mike Todd has moved optimization theory and algorithms forward like no other has. Along the way, he has moved many researchers forward as well. Through illustrative examples of his research and anecdotes about interacting with Mike, I will prove several theorems about the impact Mike Todd has had on the field of optimization and on the researchers with whom he has interacted.
Kim-Chuan Toh
Professor, Department of Mathematics,
National University of Singapore
Algorithms for Semidefinite Programming: From Interior-Point to Proximal-Point Methods
Convex matrix optimization problems (MOPs) arise in a wide variety of applications. The last two decades have seen dramatic advances in the theory and practice of matrix optimization because of its extremely powerful modeling capability. In particular, semidefinite programming (SDP) and its generalizations have been widely used to model problems in applications such as combinatorial and polynomial optimization, covariance matrix estimation, matrix completion, and sensor network localization. The first part of the talk will describe our joint work with Michael Todd on interior-point methods (IPMs) implemented in SDPT3 for solving medium-scale SDPs, followed by inexact IPMs (with linear systems solved by iterative solvers) for large-scale SDPs and a discussion of their inherent limitations. The second part will present recent algorithmic advances for large-scale MOPs, with a focus on designing algorithms based on the proximal-point and augmented Lagrangian frameworks. In particular, we present recent advances in the design and implementation of an augmented Lagrangian based method (called SDPNAL++) for solving large-scale doubly nonnegative SDPs. Numerical experiments on a variety of large SDPs show that SDPNAL++ can be very efficient in comparison to other recently developed codes such as SDPAD and 2EBD-HPE.
Donald Goldfarb
Alexander and Hermine Avanessians Professor of Industrial Engineering and Operations Research, Columbia University
Low-rank Matrix and Tensor Recovery: Theory and Algorithms
Recovering a low-rank matrix or tensor from incomplete or corrupted observations is a recurring problem in signal processing and machine learning. To exploit the structure of data that is intrinsically more than three-dimensional, convex models such as low-rank completion and robust principal component analysis (RPCA) for matrices have been extended to tensors. In this talk, we rigorously establish recovery guarantees for both tensor completion and tensor RPCA. We demonstrate that using the most popular convex relaxation for the tensor Tucker rank can be substantially suboptimal in terms of the number of observations needed for exact recovery. We introduce a very simple, new convex relaxation which is shown to be much better, both theoretically and empirically. Moreover, we propose algorithms to solve both low-rank matrix and tensor recovery models based on the Alternating Direction Augmented Lagrangian (ADAL), Frank-Wolfe, and proximal-gradient methods. Finally, we empirically investigate the recoverability properties of these convex models, and the computational performance of our algorithms on both simulated and real data.
Yinyu Ye
Professor, Management Science & Engineering,
Stanford University
A Dynamic Near-Optimal Algorithm for Online Linear Programming
A natural optimization model that formulates many online resource allocation and revenue management problems is the online linear program (LP), in which the constraint matrix, along with the objective coefficients, is revealed column by column. We provide a near-optimal algorithm for this surprisingly general class of online problems under the assumption of random order of arrival and some conditions on the data and size of the LP problem. Our algorithm features “learning while doing”: it dynamically updates a threshold price vector at geometric time intervals, where the dual prices learned from the columns revealed in the previous period are used to determine the sequential decisions in the current period. In particular, our algorithm does not assume any distribution information on the input itself, and thus is robust to data uncertainty and variations thanks to its dynamic learning capability. Applications of our algorithm include many online multi-resource allocation and multi-product revenue management problems.
Yurii Nesterov
Professor, Center for Operations Research and Economics (CORE), Université catholique de Louvain
Algorithmic Models of Market Equilibrium
In this talk, we suggest a new framework for constructing mathematical models of market activity. Contrary to the majority of classical economic models (e.g. Arrow-Debreu, Walras, etc.), we obtain a characterization of general equilibrium of the market as a saddle point of a convex-concave game. This feature significantly simplifies the proof of existence theorems and the construction of adjustment processes for both producers and consumers. Moreover, we argue that the unique equilibrium prices can be characterized as the unique limiting point of some simple price dynamics. In our model, the equilibrium prices have a natural explanation: they minimize the total excessive revenue of the market’s participants. Due to convexity, all our adjustment processes have an unambiguous behavioral and algorithmic interpretation. From the technical point of view, the most unusual feature of our approach is the absence of the budget constraint in its classical form.
Schedule of Events
Thursday, August 28
(limited to colleagues and former PhDs)
6:30 pm Cocktail Reception at Mallin Sculpture Court,
Johnson Museum of Art, 2nd Floor
7-9 pm Dinner at Lynch Conference Room,
Johnson Museum of Art
Friday, August 29
8:30 am Continental Breakfast
9:00 am Morning Symposium (9 am - 12 pm)
12:00 pm Lunch at Ballroom, Statler Hotel
1:30 pm Afternoon Symposium (1:30 - 4:30 pm)
6:00 pm BBQ Gathering at Lower North Pavilion,
Robert H. Treman State Park
All activities are free of charge.
Hotel Information
August 28-29, 2014
Reservations must be made by August 1, 2014
All blocks reserved under “ORIE-Mike Todd Retirement”
Check in: 3:00 p.m.
Check out: 12 noon
Statler Hotel (10 rooms)
$242.00 single/double occupancy per night plus tax
130 Statler Drive
Ithaca, NY 14853
1-800-541-2501 or (607) 257-2500
Hilton Garden Inn (20 rooms)
$179.00 single/double occupancy per night plus tax
130 E. Seneca Street
Ithaca, NY 14850
(607) 277-8900
Best Western University Inn (20 rooms)
$129.00 single/double occupancy per night plus tax
1020 Ellis Hollow Road
Ithaca, NY 14850
(607) 272-6100