Young Researchers Workshop - 2023
The 2023 Young Researchers Workshop will be held on Cornell's Ithaca campus Sunday, October 1 through Tuesday, October 3!
Sunday, October 1
- 5:00pm - 8:00pm Reception: Argos Warehouse - 408 E State St, Ithaca. Dinner and drinks provided. Very limited parking onsite.
Monday, October 2
- 8:30am - 9:45am Sign-in, Breakfast and Poster Session: Duffield Hall Atrium, 343 Campus Rd
- 9:45am - 10:00am Walk to Statler Ballroom
- 10:00am - 11:00am Opening Session. Session chair: Shane Henderson
- Mark Lewis: Welcome
- Daan Rutten: Mean-Field Analysis for Load Balancing on Spatial Graphs
- Xingyu Wang: Large Deviations and Metastability Analysis for Heavy-Tailed Dynamical Systems
- 11:00am - 11:30am Coffee Break
- 11:30am - 1:00pm Robustness. Session chair: Ziv Scully
- Alankrita Bhatt: Universal Probability and Applications in Data Science
- Billy Jin: Optimal Tradeoffs for Two-Stage Bipartite Matching with Advice
- Abhishek Shetty: Efficient Sequential Decision Making Through Smoothed Analysis
- Kimberly Villalobos Carballo: Holistic Deep Learning
- 1:00pm - 2:00pm Lunch
- 2:00pm - 3:30pm Learning. Session chair: Peter Frazier
- Moïse Blanchard: Contextual Bandits and Universal Learning
- Minshuo Chen: Diffusion Models for Distribution Estimation and Reward Optimization
- Anna Winnicki: Role Of Lookahead In Reinforcement Learning Algorithms
- Jiawei Zhang: Improving Efficiency of Offline Reinforcement Learning via Error Bound Analysis
- 3:30pm - 4:00pm Coffee Break
- 4:00pm - 5:30pm AI and Decisions. Session chair: Ruihao Zhu
- Enric Boix: Transformers Can Generalize to Unseen Symbols, But Require Large Data Diversity
- Sarah H. Cen: Design and Governance of Data-Driven Algorithms
- Omar Mouchtaki: From Contextual Data to Newsvendor Decisions: On the Actual Performance of Data-Driven Algorithms
- Lily Xu: High-Stakes Decisions from Low-Quality Data: AI Decision-Making for Planetary Health
- 5:30pm - 8:00pm Dinner: Terrace Restaurant
Tuesday, October 3
- 8:30am - 9:30am Breakfast and Poster Session: Duffield Hall Atrium
- 9:30am - 9:45am Walk to Willard Straight Memorial Room
- 9:45am - 11:00am Markets. Session chair: Sid Banerjee
- Yuri Fonseca: Signaling Competition in Two-Sided Markets
- Ilan Morgenstern: Design of Resale Platforms
- Kangning Wang: Fair Price Discrimination
- 11:00am - 11:30am Coffee Break
- 11:30am - 12:45pm Optimization. Session chair: Soroosh Shafieezadeh
- Liwei Jiang: Asymptotic Normality and Optimality in Nonsmooth Stochastic Optimization
- Yongchun Li: On the Partial Convexification of the Low-Rank Constrained Optimization
- Sen Na: Stochastic Optimization with Constraints: Algorithm, Convergence, and Statistical Inference
All talk abstracts are below the poster session information.
On counterfactual inference with unobserved confounding
Efficiently Charging Fleets of E-bikes and E-scooters
Understanding the impact of training data on individual algorithmic predictions
New Understandings on L-shaped Methods for Two-Stage Stochastic Programming
Kernel Debiased Plug-in Estimation
The Whiplash Effect: Congestion Dissipation and Mitigation in a Circulatory Transportation System
Exploring the Whole Rashomon Set of Sparse Decision Trees
Cluster Explanation via Polyhedral Descriptions
Catastrophe Insurance: An Adaptive Robust Optimization Approach
Measure Gradient Descent
Adaptive Experimentation at Scale: A Computational Framework for Flexible Batches
Understanding the Sales Impact of Automobile Features in New and Used-Car Markets
FasterRisk: Fast and Accurate Interpretable Risk Scores
Differentially private optimal transport cost estimation
Scalable, Robust and Adaptive Learning and Control for Sustainable Systems
Blockchain Mediated Persuasion
From Counterfactual Estimation to Decision-Making using Panel Data
Global stability of first-order methods for coercive tame functions
Asymptotic Product-form Steady-state for Multiclass Queueing Networks with SBP Service Policies in Multi-scale Heavy Traffic
Matt Eichhorn, Mayleen Cortez-Rodriguez
Exploiting Neighborhood Interference with Low Order Interactions under Unit Randomized Design
Sample Complexity Analysis for Adaptive Optimization Algorithms with Stochastic Oracles
On Steiner Connectivity Augmentation Problems
A multi-treatment forest approach for analyzing the heterogeneous effects of team familiarity
Cooperative Multi-Agent Constrained POMDPs: Strong Duality and Primal-Dual Reinforcement Learning with Approximate Information States
A smoothed augmented Lagrangian framework for nonsmooth convex optimization
PRINCIPRO: Data-Driven Algorithms for Joint Pricing and Inventory Control under Price Protection
Scaling limits of stochastic optimization algorithms over large networks
Dyadic Reinforcement Learning
Shuvomoy Das Gupta
BnB-PEP: A Unified Methodology for Constructing Optimal Optimization Methods
Sinem Kinay Savaser
Decentralized online order fulfillment in omni-channel retailers
Learning with Two-sided Information Asymmetry: Online Recommendation and Dynamic Pricing
Doubly Robust Thompson Sampling with Linear Payoffs
Randomized Robust Price Optimization
Asymptotic steady-state independence for generalized Jackson Networks in multi-scale heavy traffic
Model-free selective inference: decision-making and scientific discovery with black-box models
Deflated HeteroPCA: Overcoming the curse of ill-conditioning in heteroskedastic PCA
Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions
Optimal Timing for Screening Tests to Identify Children with Reading Disabilities
Holistic Robust Data-Driven Decisions
Sequential Search with Acquisition Uncertainty
Causal Representation Learning for Intervention Design in Complex Systems
Online Learning for Resource Allocation: Applications in Transportation, Electricity and Fisher Markets
Customer Satisfaction and Differentiated Pricing in E-Retail Delivery
COVSIM: A Stochastic Agent-based Simulation Model of North Carolina
Capacity Planning in Stable Matching
Covariance alignment: from maximum likelihood estimation to Gromov-Wasserstein
Optimal Electricity Trading
Theoretical Underpinnings of Nonsmooth Nonconvex-Nonconcave Minimax Optimization
Jiayu (Kamessi) Zhao
On the Supply of Autonomous Vehicles in Open Platforms
Human-in-the-Loop Sequential Experiments in Drug Discovery
Gaussian Graphical Models: A Scalable Framework based on Combinatorial Optimization
On the convex hull of convex quadratic optimization problems with indicators
A Zeroth Order Quasi Newton Method For Nonsmooth, Nonconvex, Stochastic Optimization
Marouane Ibn Brahim
Maximum Load Assortment Optimization: Approximation Algorithms and Adaptivity Gaps
Modeling of Nonlinear Dynamic 3D Shape Evolution through Multimodal Data Fusion and Koopman Operator Theory
Buyout Price Optimization in the Rent-to-Own Business
On the Advances of Instance-adaptive Offline Reinforcement Learning
Information complexity of convex optimization: Different settings and transfers between them
Contextual Standard Auctions with Budgets
On the power of graph neural networks and the role of the activation function
Tian-Yi (Tian) Zhou
Learning Ability of Deep Interpolating Convolutional Neural Networks
Pricing Strategies for Online Dating Platforms
Revenue Management with Calendar-Aware and Dependent Demands
Convergence of the Momentum Method for Semi-algebraic Functions With Locally Lipschitz Gradients
Xinyu Shirley Liang
Generic Drug Effectiveness: An Empirical Study on Health Service Utilization and Clinical Outcomes
Stochastic First-order Policy Optimization for Robust Markov Decision Process
High-Dimensional Inference over Networks: Guarantees and Tradeoffs
Confidence Regions in Stochastic Optimization and Systems of Nonlinear Equations
Towards R-learner of conditional average treatment effects with a continuous treatment: T-identification, estimation, and inference
Yiqun T. Chen
TWIGMA: A dataset of AI-Generated Images with Metadata From Twitter
Federated multiple tensor-on-tensor regression for heterogeneous data under data-sharing constraints
Improving the Geometry of (Conic) Linear Optimization Problems for the Primal-Dual Hybrid Gradient Method (PDHG)
Learning While Scheduling in Multi-Server Systems with Unknown Statistics: MaxWeight with Discounted UCB
The posters above are listed in alphabetical order by the presenter's last name.
In modern statistical and data science applications, the probability distribution generating the data in question is unknown (or even absent) and decisions must be taken in a purely data-driven manner. In this talk, the information-theoretic approach of universal probability is revisited and expanded upon. This approach gives us general principles and guidelines for assigning sequential probabilities to data (based on which a decision can then be made), and has been applied successfully over the years to problems in compression, prediction, and estimation, among others. The utility of this approach is then demonstrated through the example of universal portfolio selection with side information.
We consider the contextual bandit problem in general action and context spaces, where the learner's rewards depend on their selected actions and an observable context. This generalizes the standard multi-armed bandit to the case where side information is available, e.g., patients' records or customers' history, which allows for personalized treatment. We focus on consistency---vanishing regret compared to the optimal policy---and show that for large classes of non-i.i.d. contexts, consistency can be achieved regardless of the time-invariant reward mechanism, a property known as universal consistency. We further give necessary and sufficient conditions for universal consistency to be achievable, and provide algorithms that are consistent whenever possible. As a consequence, to the best of our knowledge, this is the first work giving consistent algorithms for any contextual bandits instance with finite action spaces and i.i.d. contexts (a fortiori beyond i.i.d. contexts). Surprisingly, for finite action spaces, learnable processes for universal learning are exactly the same as in the full-feedback setting of supervised learning. In other words, learning can be performed with partial feedback without any generalization cost. On the algorithmic level, we show that one should adequately balance generalization (similar to structural risk minimization) and personalization (tailoring actions to specific contexts). While this is achievable for standard contextual bandits, we also provide negative results in the face of non-stationary or adversarial rewards.
We investigate the capability of Transformer large language models (LLMs) to generalize when trained on a framework of language modeling tasks based on abstract symbols (e.g., dummy variables in programming and mathematics). These tasks test some of the most fundamental "reasoning" capabilities that have long been theorized in the literature to drive much of human cognition. We prove that Transformers learn to solve these reasoning tasks but require astonishingly large training datasets. Our analysis motivates modifications to the Transformer parametrization that improve generalization with less data, and our theory sheds light on why pretrained models perform better.
We are in the midst of deciding what role we want data-driven algorithms to play in our lives. Their rapid rise has surfaced difficult, open questions, like: To what extent should algorithms intervene on consequential decisions? When should the government step in to regulate the AI industry, and when should it abstain? Answering these questions will be a long and complex process, as we adjust our lives, businesses, and laws to the presence of data-driven algorithms. Within this broader context, my work seeks to improve our understanding of what is feasible from both a design perspective (e.g., what is information-theoretically possible) and a governance perspective (e.g., what is achievable in the current regulatory climate).
In this talk, I examine algorithmic auditing—specifically, social media auditing—as a case study. The goal of algorithmic auditing is to test whether an algorithm satisfies a predetermined set of criteria. However, in accordance with trade secret law and auditing norms, the auditor is typically given at most black-box access to the algorithm. Turning our attention to social media, we propose a procedure such that, given a regulation, an auditor can test whether the social media platform complies with the regulation. Notably, the proposed audit requires only black-box access to the platform’s content filtering algorithm and does not need access to user data. We establish statistical guarantees for the audit. We further show that the audit does not necessarily impose a performance cost on the platform and that it incentivizes the platform to inject doses of content diversity into users’ feeds.
Diffusion models achieve state-of-the-art performance in various generation tasks. However, their theoretical foundations fall far behind. We study score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace. Our result provides sample complexity bounds for distribution estimation using diffusion models. We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated. Furthermore, the generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution. The convergence rate depends on the subspace dimension, indicating that diffusion models can circumvent the curse of data ambient dimensionality. We further explore the methodology and theory of reward-directed generation via conditional diffusion models.
We consider decentralized platforms facilitating many-to-many matches between two sides of a marketplace. In the absence of direct matching, inefficiency in market outcomes can easily arise. For instance, popular supply agents may garner many units from the demand side, while other supply units may not receive any match. A central question for the platform is how to manage congestion and improve market outcomes. We study the impact of a detail-free lever: the disclosure of information to agents on current competition levels. Disclosing competition reduces the perceived value of popular units, but, at the same time, it can help agents on the other side better select across options. How large are such effects, and how do they affect overall market outcomes? We answer this question empirically. We partner with the largest service marketplace in Latin America, which sells non-exclusive labor market leads to workers. We propose a structural model which allows workers to internalize competition at the lead level and captures the equilibrium effect of such reaction to competition at the platform level. We estimate the model by leveraging agents' exogenous arrival times and a change in the platform's pricing policy. Using the estimated model, we conduct counterfactual analyses to study the impact of signaling competition on workers' lead purchasing decisions, the platform's revenue, and the expected number of matches. We find that signaling competition is a powerful lever for the platform to reduce congestion, redirect demand, and ultimately improve the expected number of matches for the markets we analyze.
In their seminal work, Polyak and Juditsky showed that stochastic gradient descent for minimizing smooth strongly convex functions enjoys a central limit theorem. Moreover, it has since been argued that the asymptotic covariance of the method is best possible among any estimation procedure in a local minimax sense of Hájek and Le Cam. A long-standing open question in this line of work is whether similar guarantees hold for important constrained/non-smooth problems, such as stochastic nonlinear programming. In this talk, we show that this is indeed the case.
Real-life problems are full of uncertainty. How we handle it is important, since it affects the design and performance of algorithms. In theoretical computer science, the uncertainty is usually assumed to be adversarial, which can be too pessimistic for many real-life instances. On the other hand, in operations research, the uncertainty is usually assumed to follow some known distribution, but in practice the estimate of the distribution may or may not be accurate.
Advice-augmented algorithms aim to bridge the gap between these two models. In this framework, the algorithm is given some advice or prediction (e.g. from historical data, forecasts, or expert advice), whose quality is unknown. We aim to design algorithms that perform well when the quality is high (consistency), yet remain robust in their performance even when the quality is low (robustness).
In this talk, I will introduce advice-augmented algorithms and present one of my works in this area: An algorithm for two-stage matching that attains the optimal tradeoff between consistency and robustness.
Based on joint work with Will Ma.
The low-rank constrained optimization arises in various machine learning and operations research problems. It minimizes a linear objective function subject to multiple linear inequalities and a low-rank domain set. Although the low-rank constrained optimization is generally NP-hard, a viable approach is to convexify the domain set (i.e., replace it with its convex hull), known as "partial convexification". Partial convexification often leads to a tractable convex relaxation; however, its solution quality lacks theoretical guarantees. To fill this gap, (i) we establish the necessary and sufficient conditions under which partial convexification matches the original low-rank constrained optimization; and (ii) we derive an upper bound on the minimum rank among all the optimal solutions of the partial convexification and prove its tightness. To efficiently solve the partial convexification, we develop a column generation algorithm combined with a rank-reduction algorithm. This combination ensures that the output solution satisfies the theoretical guarantees. Finally, numerical experiments validate the strength of the partial convexification and the effectiveness of the proposed algorithms.
We study resale platforms, an emerging type of online marketplaces in developing countries. These platforms are designed for users (resellers) to sell products to others, enabling them to supplement their income as they earn a margin on the transactions they generate. Using data from a major platform in India, we find that resellers reduce their margins and exert less effort when searching for products to sell as more of them join the platform. Motivated by these observations, we develop a model of resale platforms to understand how competition among resellers influences their product selection and margin decisions in equilibrium. Finally, we explore two potential interventions through which the platform can benefit resellers: centralizing margin decisions and optimizing its product ranking algorithm.
In this work, we explore a framework for contextual decision-making to study how the relevance and quantity of past data affects the performance of a data-driven policy. We analyze a contextual Newsvendor problem in which a decision-maker needs to trade off between an underage and an overage cost in the face of uncertain demand. We consider a setting in which past demands observed under "close by" contexts come from close by distributions and analyze the performance of data-driven algorithms through a notion of context-dependent worst-case expected regret. We analyze the broad class of Weighted Empirical Risk Minimization (WERM) policies which weigh past data according to their similarity in the contextual space. This class includes classical policies such as ERM, k-Nearest Neighbors and kernel-based policies. Our main methodological contribution is to characterize exactly the worst-case regret of any WERM policy on any given configuration of contexts. To the best of our knowledge, this provides the first understanding of tight performance guarantees in any contextual decision-making problem, with past literature focusing on upper bounds via concentration inequalities. We instead take an optimization approach, and isolate a structure in the Newsvendor loss function that allows us to reduce the infinite-dimensional optimization problem over worst-case distributions to a simple line search. This in turn allows us to unveil fundamental insights that were obfuscated by previous general-purpose bounds. We characterize actual guaranteed performance as a function of the contexts, as well as granular insights on the learning curve of algorithms.
Constrained stochastic optimization problems appear widely in numerous applications in statistics, machine learning, and engineering. These include constrained maximum likelihood estimation, constrained deep neural networks, physics-informed machine learning, and optimal control. In this 15-minute talk, I will describe how to design adaptive algorithms for solving constrained stochastic nonlinear optimization problems based on sequential quadratic programming (SQP). The algorithms adaptively select random stepsizes and inexactly solve the Newton systems via a randomized sketching solver. I will introduce global and local almost sure convergence results and perform statistical inference based on SQP by showing an asymptotic normality that matches the minimax lower bound. Numerous experiments are provided to validate the theory.
The analysis of large-scale, parallel-server load balancing systems has relied heavily on mean-field analysis. A pivotal assumption for this framework is that servers are exchangeable. However, modern data-centers have data locality constraints, such that tasks of a particular type can only be routed to a small subset of servers. An emerging line of research, therefore, considers load balancing algorithms on bipartite graphs where vertices represent task types and servers, respectively. In this talk, I will present a novel coupling-based approach to establish mean-field approximation for a large class of graphs which includes spatial graphs. The method extends the scope of mean-field analysis far beyond the classical full-flexibility setup.
Robustness to changes in data is one of the main challenges faced by sequential machine learning and decision-making algorithms. Yet, most deployed algorithms today are designed to work well on fixed data sets and ultimately fail when data evolves in unpredictable or adversarial ways. Given the importance of these problems, both in theory and practice, a pertinent question is to understand and remedy the source of this difficulty, for real-world instances. Towards this end, we establish novel techniques to analyze online algorithms in the smoothed analysis model. In this setting, the algorithm's input is perturbed slightly by noise. A central result of our framework is that, in this model, online learning is as easy as offline learning, both statistically and computationally. That is, we show that the regret against smoothed adversaries is captured by the offline complexity measure, VC dimension. Furthermore, we design efficient algorithms for online learning, circumventing impossibility results in the worst case.
Kimberly Villalobos Carballo
We present a holistic deep learning framework that leverages optimization tools to simultaneously address the challenges of vulnerability to input perturbations, performance instability from different train-validation splits, and overparametrization. We address robustness by minimizing a novel upper bound of the adversarial loss, which is facilitated by state-of-the-art tools from Robust Optimization. To enhance stability, we adopt a Mixed Integer Programming formulation of the problem which was previously explored for other classification methods such as tree-based models. Lastly, we incorporate sparsity using a well-established state-of-the-art algorithm based on approximations of the L0 norm. The proposed framework holistically improves accuracy, robustness, stability, and sparsity over standard deep learning models, as demonstrated by extensive experiments on both tabular and image data sets. To support practitioners applying our framework, we provide a prescriptive approach that offers recommendations for selecting an appropriate training loss function based on their specific objectives.
A seller is pricing identical copies of a good to a stream of unit-demand buyers. Each buyer has a value on the good as his private information. The seller only knows the empirical value distribution of the buyer population and chooses the revenue-optimal price. We consider a widely studied third-degree price discrimination model where an information intermediary with perfect knowledge of the arriving buyer's value sends the seller a signal, which changes the seller's posterior and induces the seller to set a personalized price. We show the surprising existence of a novel signaling scheme that is "fair" to the buyer population: it simultaneously approximately maximizes all welfare functions that are non-negative, monotonically increasing, symmetric, and concave (including the utilitarian social welfare, the Nash welfare, and the max-min welfare).
This is based on joint work with Siddhartha Banerjee, Kamesh Munagala, and Yiheng Shen.
We propose a framework integrating large deviations and metastability analysis of heavy-tailed dynamical systems. This framework allows us to elevate the knowledge about atypical system behavior to a comprehensive characterization of the typical long-term global dynamics. Applying it to heavy-tailed stochastic difference/differential equations, we provide sample path large deviations and first exit time analysis, offering the heavy-tailed counterparts of the classical Freidlin-Wentzell and Eyring-Kramers theories. Moreover, our results systematically characterize the intricate phase transitions in first exit times and an intriguing global behavior under truncated heavy-tailed dynamics that sheds light on the generalization mystery in deep learning.
A common technique in reinforcement learning is to evaluate the value function from Monte Carlo simulations of a given policy, and use the estimated value function to obtain a new policy which is greedy with respect to the estimated value function. A well-known longstanding open problem in this context is to prove the convergence of such a scheme when the value function of a policy is estimated from data collected from a single sample path obtained from implementing the policy (see page 99 of Sutton and Barto (2018), page 8 of Tsitsiklis (2002)). We present a solution to the open problem by showing that a first-visit version of such a policy iteration scheme indeed converges to the optimal policy provided that the policy improvement step uses lookahead (Silver et al. 2016; Mnih et al. 2016; Silver et al. 2017b) rather than a simple greedy policy improvement. We provide results both for the original open problem, which was stated for the tabular setting, and also present extensions to the function approximation setting, where we show that the policy resulting from the algorithm performs close to the optimal policy within a function approximation error. The key technical contribution in the paper is the following: the proof in Tsitsiklis (2002) relies on policy iteration ideas extended to the stochastic approximation setting when the probability of a visit to any state is known. In our trajectory-based setting, this approach does not work because the number of visits to each state is policy dependent and unknown; therefore, we employ very different ideas by combining contraction properties of the Bellman operator along with stochastic approximation techniques.
Planetary health recognizes the inextricable link between human health and the health of our planet. Our planet’s growing crises include biodiversity loss, with animal population sizes declining by an average of 70% since 1970, and maternal mortality, with 1 in 49 girls in low-income countries dying from complications in pregnancy or birth. Overcoming these crises will require effectively allocating and managing our limited resources. My research develops data-driven AI decision-making methods to do so, overcoming the messy data ubiquitous in these settings. Here, I’ll present technical advances in multi-armed bandits, robust reinforcement learning, and causal inference, addressing research questions that emerged from on-the-ground challenges across conservation and maternal health. I’ll also discuss bridging the gap from research and practice, with anti-poaching field tests in Cambodia, field visits in Belize and Uganda, and large-scale deployment with SMART conservation software.
Error/sensitivity bound (EB) analysis is a powerful tool in optimization theory and algorithm design. For example, our recent works develop several novel EBs and utilize them to design optimal first-order algorithms for hard nonconvex constrained or minimax problems. We ask the question: Can EB analysis help in designing efficient algorithms for traditionally hard learning and decision-making tasks?
In this talk, we discuss how EB analysis can be utilized to improve the efficiency of offline reinforcement learning (RL). Here, the challenge is to learn a near-optimal policy only from offline data in settings with large state-action spaces and mild assumptions on data coverage. We adopt the framework of linear programming to develop two novel reformulations in offline RL. Our EB analysis of these reformulations enables a direct connection between sub-optimality of learned policies and optimization residuals. We demonstrate our approach leads to improved statistical rates and computational tractability for offline RL relative to the previous literature.
Interestingly, our upcoming paper also shows that exploiting EBs can help improve the computational tractability, generalization, and robustness of other learning and decision-making tasks that can be reformulated as minimax optimization problems.
We are looking forward to seeing you at the Young Researchers Workshop to be held October 1 - 3, 2023 at Cornell University in Ithaca, New York. Here is some information about airport options.
Ithaca Airport: The best option for flying is arriving at the Ithaca airport. It is only three miles from the Cornell campus, and an easy taxi/Uber/Lyft ride to and from. Many hotels provide shuttle service to and from the airport as well as around town.
Other nearby airports (less than an hour drive) are Elmira and Binghamton. Both are quite small.
NYC Airports: The last option would be flying into any of the major NYC airports and taking a bus to Ithaca. This is the longest option, as most buses from NYC to Ithaca take about 4.5 hours. Bus options include:
- Cornell Campus to Campus: The most comfortable option; it leaves from the Cornell Club in midtown NYC and drops you off right on the Cornell campus in Ithaca.
- OurBus: Has options leaving from either the George Washington Bridge or Port Authority to downtown Ithaca. From downtown it is a short cab ride to campus.
Hotels in Ithaca:
- Statler Hotel
130 Statler Dr, Ithaca, NY 14853
- Canopy by Hilton
324 E State St, Ithaca, NY 14850
- The Hotel Ithaca
222 S Cayuga St, Ithaca, NY 14850
- Hilton Garden Inn
130 E Seneca St, Ithaca, NY 14850
120 S Aurora St, Ithaca, NY 14850
- Cayuga Blu Hotel
2310 N Triphammer Rd, Ithaca, NY 14850•(607) 257-3100
- Courtyard by Marriott Ithaca Airport/University
29 Thornwood Dr, Ithaca, NY 14850•(607) 330-1000
- Country Inn & Suites by Radisson, Ithaca, NY
1100 Danby Rd #96b, Ithaca, NY 14850•(607) 252-4791
- Fairfield Inn & Suites by Marriott Ithaca
359 Elmira Rd, Ithaca, NY 14850•(607) 277-1000
- Hampton Inn Ithaca
337 Elmira Rd, Ithaca, NY 14850•(607) 277-5500
- The Dorm Hotel
518 Stewart Ave, Ithaca, NY 14850•(607) 319-4611
- Firelight Camps
1150 Danby Road, Ithaca, NY•(607) 229-1644
Hotels in the Greater Ithaca Area:
- Best Western Plus Finger Lakes Inn & Suites
3175 Fingerlake East Dr, Cortland, NY 13045•(607) 756-2233
- Fairfield by Marriott Inn & Suites Cortland
3707 NY-281, Cortland, NY 13045•(607) 299-4455
- Hampton Inn Cortland
26 River St, Cortland, NY 13045•(607) 662-0007
Talks are approximately 20 minutes long with questions, leaving a couple of minutes to switch to the next speaker (like the INFORMS model). Please bring your own device from which to present; a laptop is usually best, as USB-C and HDMI connectors will be available on site. Technical assistance will be available to help connect your laptop to the projector and handle any technical problems. We recommend that you also bring your slides on a USB drive as a backup. You should have uploaded your title and abstract when you registered.
Poster Presenter Information
Poster sessions will be held 8:30am - 9:45am Monday, Oct 2 (with sign-in) and 8:30am - 9:30am Tuesday, Oct 3 in the atrium of Duffield Hall (343 Campus Rd), in concert with breakfast (provided). You can either bring a physical copy of your poster with you, or bring a digital copy on a USB drive and pay to print locally at Cornell University's Mann Library. Posters should be at most 36" x 48".