Nathan Kallus

ATTN: Prospective and current Cornell PhD students

I am currently recruiting highly motivated, talented PhD students for my research group at the Cornell Tech campus in NYC. Email me to let me know why you think you've got what it takes!


Hi! My name is Nathan Kallus. My email is (my last name) (at) cornell (dot) edu.

I am an Assistant Professor of Operations Research and Information Engineering at Cornell University and Cornell Tech in New York City. My research revolves around data-driven decision making in operations, the interplay of optimization and statistics in decision making and inference, and the analytical capacities and challenges of unstructured, large-scale, and web-based data. I hail from the town of Haifa, Israel. I hold a PhD in Operations Research from MIT as well as a BA in Pure Mathematics and a BS in Computer Science both from UC Berkeley. I was a Visiting Scholar at USC's Department of Data Sciences and Operations and a Postdoctoral Associate at MIT's Operations Research and Statistics group.

Academic Positions

Education

Publications

Honors

Invited talks

Contributed talks

Other employment

Teaching

Service

Editorial in the World Bank's Development Impact blog on my Operations Research paper, The Power of Optimization Over Randomization in Designing Experiments Involving Small Samples.

Editorial in Academic Radiology on my paper in the same journal, Scheduling, Revenue Management, and Fairness in an Academic-Hospital Division: An Optimization Approach.

Interview with me for Data Science Weekly.

Interview with me for the invited guest blog of Scientific American.

Media coverage of my WWW paper, Predicting Crowd Behavior with Big Public Data:
Fast Company.
Vice.
MIT Technology Review.
GigaOM.
More coverage...

Research interests: data-driven decision making in operations, the interplay of optimization and statistics in decision making and inference, and the analytical capacities and challenges of unstructured, large-scale, and web-based data.

Papers

Dynamic Assortment Personalization in High Dimensions. With M. Udell.
Abstract: We demonstrate the importance of structural priors for effective, efficient large-scale dynamic assortment personalization. Assortment personalization is the problem of choosing a best assortment of products, ads, or other offerings (items) to specifically target a particular individual or consumer segment (type). This is a central problem in revenue management for e-commerce, online advertising, and multi-location brick-and-mortar retail, where both types and items number in the thousands to millions. Efficient use of data is critical in this large-scale setting: the number of interactions with customers is limited and far smaller than the number of type-item combinations, so it is infeasible to learn each type's preferences independently. Furthermore, learning preferences is not enough: the goal of personalization is revenue, not mere knowledge. In dynamic assortment personalization, the retailer chooses assortments to learn preferences and optimize revenue simultaneously.

We formulate the dynamic assortment personalization problem as a discrete-contextual bandit with m contexts, where type (individual or segment identity) is the context and assortments of n items are the arms. We assume that each type's preferences follow a simple parametric model with n parameters (multinomial logit); in all, there are mn parameters. Existing literature suggests that, over T interactions, order optimal regret is m*n*log(T), which is infeasibly large in most e-commerce settings.

In this paper, we impose natural structure -- a small latent dimension for the parameter space, or low rank on the matrix of m*n parameters -- on the problem. This structure reduces the number of parameters to estimate and the regret incurred in estimating them. In the static setting, we show that this model can be efficiently learned from surprisingly few interactions. We propose an efficient algorithm to learn the model that converges globally whenever the model is learnable. In the dynamic setting, we show that structure-aware dynamic assortment personalization can have regret that is an order of magnitude smaller than structure-ignorant approaches. We develop a novel pseudo-Thompson sampling approach that performs exceedingly well in practice.
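
As an illustration of the structure being exploited, here is a minimal Python sketch (NumPy only): an m-by-n matrix of multinomial logit utilities with low latent dimension, and a simplified Thompson-sampling-style step that perturbs the current estimate and offers the assortment it implies. This is a toy, not the paper's pseudo-Thompson sampling algorithm; the rank, noise scale, assortment size, and the revenue-times-attraction scoring rule are assumptions made only for the demo.

import numpy as np

rng = np.random.default_rng(0)
m, n, r, K = 50, 200, 3, 5   # types, items, latent dimension, assortment size (assumed)

# Low-rank ground-truth MNL utilities: Theta = U V^T has rank r << min(m, n).
U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))
Theta = U @ V.T              # Theta[i, j] = utility of item j for type i
revenues = rng.uniform(1.0, 10.0, size=n)

def mnl_choice(utilities, assortment):
    # Sample a choice from the offered assortment under MNL, with a no-purchase option.
    w = np.exp(utilities[assortment])
    probs = np.append(w, 1.0) / (w.sum() + 1.0)          # last entry = no purchase
    k = rng.choice(len(assortment) + 1, p=probs)
    return assortment[k] if k < len(assortment) else None

def sampled_assortment(theta_hat_row, sigma=1.0):
    # Simplified pseudo-Thompson step: perturb the estimate and offer the K items
    # with the largest sampled revenue-weighted attraction.
    theta_tilde = theta_hat_row + sigma * rng.normal(size=theta_hat_row.shape)
    return np.argsort(revenues * np.exp(theta_tilde))[-K:]

theta_hat = np.zeros((m, n))     # in practice, refit with a low-rank model as data accrue
t = rng.integers(m)              # an arriving customer of a random type
S = sampled_assortment(theta_hat[t])
print("type", t, "offered", S, "chose", mnl_choice(Theta[t], S))

A real implementation would refit a rank-constrained (or nuclear-norm-regularized) estimate of the parameter matrix between interactions; the sketch only conveys the shape of the data and of a sampling-based exploration step.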

Personalized Diabetes Management Using Electronic Medical Records. With D. Bertsimas, A. Weinstein, and D. Zhuo.
Abstract: Objective: Current clinical guidelines for managing type 2 diabetes do not differentiate based on patient-specific factors. We present a data-driven algorithm for personalized diabetes management that improves health outcomes relative to the standard of care.
Research Design and Methods: We modeled outcomes under thirteen pharmacological therapies based on electronic medical records from 1999 to 2014 for 10,806 type 2 diabetes patients from Boston Medical Center. For each patient visit, we analyzed the range of outcomes under alternative care using a k-nearest neighbor approach. The neighbors were chosen to maximize similarity on individual patient characteristics and medical history that were most predictive of health outcomes. The recommendation algorithm prescribes the regimen with best predicted outcome if the expected improvement from switching regimens exceeds a threshold. We evaluated the effect of recommendations on matched patient outcomes from unseen data.
Results: Among the 52,842 patient visits in the test set, the algorithm’s recommendation mirrored the observed standard of care in 68.2% of visits. For patient visits in which the algorithmic recommendation differed from the standard of care, the mean post-treatment glycated hemoglobin (HbA1c) under the algorithm was lower than standard of care by 0.44% (4.8 mmol/mol) ± SE 0.03% (0.3 mmol/mol) (p<<0.001), from 8.37% (68.0 mmol/mol) under the standard of care to 7.93% (63.2 mmol/mol) under our algorithm.
Conclusion: A personalized approach to diabetes management yielded substantial improvements in HbA1c outcomes relative to the standard of care. Our prototyped dashboard visualizing the recommendation algorithm can be used by providers to inform diabetes care and improve outcomes.
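
A rough Python sketch (scikit-learn) of a k-nearest-neighbor recommendation step in the spirit described above: for each candidate regimen, predict the outcome from the most similar historical patients who received it, and recommend a switch only if the predicted improvement beats a threshold. The synthetic features, k, and threshold are placeholders, not values from the paper, and the paper additionally selects and weights patient characteristics by how predictive they are of outcomes.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy historical data: patient features, regimen received, post-treatment HbA1c.
n_hist, n_features, n_regimens = 1000, 6, 4
X_hist = rng.normal(size=(n_hist, n_features))
regimen_hist = rng.integers(n_regimens, size=n_hist)
hba1c_hist = 7.5 + 0.3 * X_hist[:, 0] + 0.2 * regimen_hist + rng.normal(0, 0.5, n_hist)

def recommend(x_new, current_regimen, k=30, threshold=0.3):
    # Predict each regimen's outcome from its k nearest historical patients and
    # switch only if the expected improvement over the current regimen exceeds the threshold.
    predictions = {}
    for r in range(n_regimens):
        idx = np.flatnonzero(regimen_hist == r)
        nn = NearestNeighbors(n_neighbors=min(k, len(idx))).fit(X_hist[idx])
        _, neighbors = nn.kneighbors(x_new.reshape(1, -1))
        predictions[r] = hba1c_hist[idx[neighbors[0]]].mean()
    best = min(predictions, key=predictions.get)
    if predictions[current_regimen] - predictions[best] > threshold:
        return best
    return current_regimen

print("recommended regimen:", recommend(rng.normal(size=n_features), current_regimen=3))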

Pricing from Observational Data. With D. Bertsimas.
Abstract: Given observational data on price and demand, the price optimization problem is sometimes addressed in the literature by a predictive approach: (a) fit a model to the data that best predicts demand given price and (b) substitute the predictive model into the overall profit and optimize for price. We show that, because historical demand at all prices but the observed one is missing, the price optimization problem is not well specified by the data, and in particular, the predictive approach fails to find the optimal price. We bound the suboptimality of the predictive approach, even when the optimal price cannot be identified from the data, by leveraging the special structure of the problem. Drawing from the causal inference literature, we provide sufficient conditions for the optimal price to be identifiable from the data. Given these conditions, we provide parametric and non-parametric algorithms for the price optimization problem. In the non-parametric case we prove consistency and asymptotic normality and establish rates of convergence. We develop a hypothesis test for asymptotic profit optimality of any algorithm for pricing from observational data. We use this test to demonstrate empirically in an auto loan dataset that both parametric and non-parametric predictive approaches are in fact suboptimal in practice and that our prescriptive parametric framework leads to profit that cannot be distinguished from the optimal one.
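
A hedged Python illustration of the gap described above: when historical prices respond to a demand-relevant covariate, regressing demand on price alone and optimizing the implied profit lands far from the true optimal price, while adjusting for the covariate (under identification conditions like those in the paper) does much better. The data-generating process, the linear models, and the grid search are all invented for the demo.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000

# Confounded history: strong markets (large z) saw both higher prices and higher demand.
z = rng.normal(size=n)                        # market strength (observed covariate)
price = 10 + 2 * z + rng.normal(0, 0.5, n)    # historical pricing policy
demand = 50 - 2 * price + 6 * z + rng.normal(0, 1, n)

grid = np.linspace(5, 20, 301)

# (a) Predictive approach: regress demand on price alone, then optimize price.
naive = LinearRegression().fit(price.reshape(-1, 1), demand)
p_naive = grid[np.argmax(grid * naive.predict(grid.reshape(-1, 1)))]

# (b) Covariate-adjusted approach: model demand given (price, z) and average the
#     counterfactual profit at each candidate price over the observed z's.
adjusted = LinearRegression().fit(np.column_stack([price, z]), demand)
profit_adj = [(p * adjusted.predict(np.column_stack([np.full(n, p), z]))).mean() for p in grid]
p_adj = grid[np.argmax(profit_adj)]

# True optimum of p * E[demand | do(price = p)] = p * (50 - 2p) is p = 12.5.
print(f"predictive price {p_naive:.2f}, adjusted price {p_adj:.2f}, true optimum 12.50")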

Causal Inference by Minimizing the Dual Norm of Bias: Kernel Matching & Weighting Estimators for Causal Effects.
Abstract: We consider the problem of estimating causal effects from observational data and propose a novel framework for matching- and weighting-based causal estimators. The framework is based on expressing the bias of a causal estimator as an operator on the unknown conditional expectation function of outcomes and formulating the dual norm of the bias as the norm of this operator with respect to a function space that represents the potential structure for outcomes. We give the term worst-case bias minimizing (WCBM) to estimators that minimize this quantity for some function space and show that a great variety of existing causal estimators belong to this family, including one-to-one matching (with or without replacement), coarsened exact matching, and mean-matched sampling. We propose a range of new, kernel-based matching and weighting estimators that arise when one minimizes the dual norm of the bias with respect to a reproducing kernel Hilbert space. Depending on the case, these estimators can be solved either in closed form, using quadratic optimization, or using integer optimization. We show that estimators based on universal kernels are consistent for the causal effect. In numerical experiments, the new, kernel-based estimators outperform all standard causal estimators in estimation error, providing a successful balance between generality and efficiency.
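
A small numerical sketch (Python with NumPy/SciPy) of the kernel weighting idea: with a Gaussian kernel standing in for the assumed outcome structure, the worst-case (dual-norm) bias of a weighted control average is a quadratic in the weights, so minimizing it over the simplex is a small quadratic program. The kernel, its bandwidth, and the generic solver are assumptions for illustration, not the paper's recommended choices.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Toy observational data with covariate shift between treated and control units.
X_treated = rng.normal(1.0, 1.0, size=(40, 2))
X_control = rng.normal(0.0, 1.0, size=(120, 2))

def gaussian_kernel(A, B, bandwidth=1.0):
    return np.exp(-cdist(A, B, "sqeuclidean") / (2 * bandwidth ** 2))

K_tt = gaussian_kernel(X_treated, X_treated)
K_tc = gaussian_kernel(X_treated, X_control)
K_cc = gaussian_kernel(X_control, X_control)
n_t, n_c = len(X_treated), len(X_control)

def worst_case_bias_sq(w):
    # Squared RKHS norm of (treated mean embedding) - (w-weighted control embedding),
    # i.e., the squared worst-case bias over the unit ball of the RKHS.
    return K_tt.sum() / n_t ** 2 - 2 * K_tc.mean(axis=0) @ w + w @ K_cc @ w

w0 = np.full(n_c, 1.0 / n_c)
res = minimize(worst_case_bias_sq, w0,
               bounds=[(0.0, None)] * n_c,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")

print("uniform-weight worst-case bias^2:", round(worst_case_bias_sq(w0), 4))
print("optimized worst-case bias^2:     ", round(worst_case_bias_sq(res.x), 4))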

Revealed Preference at Scale: Learning Personalized Preferences from Assortment Choices. With M. Udell. Proceedings of the 17th ACM Conference on Economics and Computation (EC) 17:821—837, 2016.
Abstract: We consider the problem of learning the preferences of a heterogeneous population by observing choices from an assortment of products, ads, or other offerings. Our observation model takes a form common in assortment planning applications: each arriving customer is offered an assortment consisting of a subset of all possible offerings; we observe only the assortment and the customer's single choice.

In this paper we propose a mixture choice model with a natural underlying low-dimensional structure, and show how to estimate its parameters. In our model, the preferences of each customer or segment follow a separate parametric choice model, but the underlying structure of these parameters over all the models has low dimension. We show that a nuclear-norm regularized maximum likelihood estimator can learn the preferences of all customers using a number of observations much smaller than the number of item-customer combinations. This result shows the potential for structural assumptions to speed up learning and improve revenues in assortment planning and customization. We provide a specialized factored gradient descent algorithm and study the success of the approach empirically.
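
The estimator itself maximizes a choice-model likelihood, but its key low-rank ingredient is easy to picture. Below is a minimal Python sketch of only that primitive: singular-value soft-thresholding, the proximal operator of the nuclear norm, applied to a noisy low-rank parameter matrix. The dimensions, noise level, and regularization weight are made up, and this is a matrix-denoising toy rather than the paper's regularized maximum likelihood estimator or its factored gradient descent algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Noisy observation of a low-rank preference-parameter matrix (m customers x n items).
m, n, r = 60, 80, 3
Theta = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
Y = Theta + rng.normal(0, 0.5, size=(m, n))

def svd_soft_threshold(M, tau):
    # Proximal operator of tau * (nuclear norm): soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Theta_hat = svd_soft_threshold(Y, tau=10.0)
print("recovered rank:", np.linalg.matrix_rank(Theta_hat, tol=1e-8))
print("relative error:", round(np.linalg.norm(Theta_hat - Theta) / np.linalg.norm(Theta), 3))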

On the Predictive Power of Web Intelligence and Social Media. Chapter in Big Data Analytics in the Social and Ubiquitous Context, Springer International, 2016.
Abstract: With more information becoming widely accessible and new content created and shared on today's web, more people are turning to harvesting such data and analyzing it to extract insights. But the relevance of such data for seeing beyond the present is not clear. We present efforts to predict future events based on web intelligence -- data harvested from the web -- with specific emphasis on social media data and on timed event mentions, thereby quantifying the predictive power of such data. We focus on predicting crowd actions such as large protests and coordinated acts of cyber activism -- predicting their occurrence, specific timeframe, and location. Using natural language processing, statements about events are extracted from content collected from hundreds of thousands of open-content web sources. Attributes extracted include event type, entities involved and their role, sentiment and tone, and -- most crucially -- the reported timeframe for the occurrence of the event discussed, whether it be in the past, present, or future. Tweets (Twitter posts) that mention an event reported to occur in the future prove to be important predictors. These signals are enhanced by cross-referencing with the fragility of the situation as inferred from more traditional media, allowing us to sift out the social media trends that fizzle out before materializing as crowds on the ground.

From Predictive to Prescriptive Analytics. With D. Bertsimas. POMS Applied Research Challenge, Finalist (of 3).
Abstract: In this paper, we combine ideas from machine learning (ML) and operations research and management science (OR/MS) in developing a framework, along with specific methods, for using data to prescribe decisions in OR/MS problems. In a departure from other work on data-driven optimization and reflecting our practical experience with the data available in applications of OR/MS, we consider data consisting not only of observations of quantities with a direct effect on costs/revenues, such as demand or returns, but predominantly of observations of associated auxiliary quantities. The main problem of interest is a conditional stochastic optimization problem, given imperfect observations, where the joint probability distributions that specify the problem are unknown. We demonstrate that our proposed solution methods are generally applicable to a wide range of decision problems. We prove that they are computationally tractable and asymptotically optimal under mild conditions even when data is not independent and identically distributed (iid) and even for censored observations. As an analogue to the coefficient of determination R², we develop a metric P termed the coefficient of prescriptiveness to measure the prescriptive content of data and the efficacy of a policy from an operations perspective. To demonstrate the power of our approach in a real-world setting we study an inventory management problem faced by the distribution arm of an international media conglomerate, which ships an average of 1 billion units per year. We leverage both internal data and public online data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational decisions that outperform baseline measures. Specifically, the data we collect, leveraged by our methods, accounts for an 88% improvement as measured by our coefficient of prescriptiveness.
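
A toy Python sketch of the general recipe in a newsvendor-style problem: use a machine-learning method on the auxiliary covariates (here, plain k-nearest neighbors) to weight historical demand observations, then minimize the weighted sample-average cost over the decision. The covariates, cost parameters, and the particular choice of kNN weights are illustrative assumptions; the paper develops and analyzes several weighting schemes.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Historical data: auxiliary covariates X (e.g., online signals) and realized demand D.
n, p = 2000, 4
X = rng.normal(size=(n, p))
D = np.maximum(0, 20 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, n))

c_backorder, c_holding = 4.0, 1.0    # assumed newsvendor cost parameters

def prescribe(x_new, k=100):
    # Weight the k nearest historical observations equally and minimize the
    # resulting sample-average newsvendor cost over the order quantity.
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(x_new.reshape(1, -1))
    d_local = D[idx[0]]
    candidates = np.unique(d_local)
    costs = [(c_holding * np.maximum(q - d_local, 0)
              + c_backorder * np.maximum(d_local - q, 0)).mean() for q in candidates]
    return candidates[int(np.argmin(costs))]

print("prescribed order quantity:", round(prescribe(rng.normal(size=p)), 1))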

Robust Sample Average Approximation. Winner of the Best Student Paper Award, MIT Operations Research Center 2013. With D. Bertsimas and V. Gupta.
Abstract: Sample average approximation (SAA) is a widely popular approach to data-driven decision-making under uncertainty. Under mild assumptions, SAA is both tractable and enjoys strong asymptotic performance guarantees. Similar guarantees, however, do not typically hold in finite samples. In this paper, we propose a modification of SAA, which we term Robust SAA, which retains SAA's tractability and asymptotic properties and, additionally, enjoys strong finite-sample performance guarantees. The key to our method is linking SAA, distributionally robust optimization, and hypothesis testing of goodness-of-fit. Beyond Robust SAA, this connection provides a unified perspective enabling us to characterize the finite sample and asymptotic guarantees of various other data-driven procedures that are based upon distributionally robust optimization. This analysis provides insight into the practical performance of these various methods in real applications. We present examples from inventory management and portfolio allocation, and demonstrate numerically that our approach outperforms other data-driven approaches in these applications.
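
A small hedged example (Python with SciPy) of the linkage the abstract describes, on a discretized newsvendor: form a confidence region for the demand distribution from Pearson's chi-squared goodness-of-fit statistic and evaluate a candidate order quantity by its worst-case expected cost over that region. The discretization, cost parameters, confidence level, and use of a generic nonlinear solver are assumptions; the paper's constructions are more general and come with tractability guarantees.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Demand on a discrete support and an observed sample of size N.
support = np.arange(11)                           # demand in {0, ..., 10}
sample = rng.choice(support, size=200, p=np.ones(11) / 11)
counts = np.bincount(sample, minlength=11).astype(float)
N = counts.sum()
p_emp = counts / N

q_order = 6                                       # candidate order quantity
cost = 1.0 * np.maximum(q_order - support, 0) + 4.0 * np.maximum(support - q_order, 0)

radius = chi2.ppf(0.95, df=len(support) - 1)      # 95% chi-squared quantile

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    # Pearson's chi-squared statistic must stay inside the 95% confidence region.
    {"type": "ineq", "fun": lambda p: radius - np.sum((counts - N * p) ** 2 / (N * p))},
]
p0 = (counts + 1.0) / (N + len(support))          # smoothed, strictly feasible start
res = minimize(lambda p: -(cost @ p), p0, bounds=[(1e-6, 1.0)] * len(support),
               constraints=constraints, method="SLSQP")

print("plain SAA expected cost: ", round(cost @ p_emp, 2))
print("worst-case expected cost:", round(-res.fun, 2))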

Data-Driven Robust Optimization. Finalist, INFORMS Nicholson Paper Competition 2013. With D. Bertsimas and V. Gupta.
Abstract: The last decade has seen an explosion in the availability of data for operations research applications as part of the Big Data revolution. Motivated by this data rich paradigm, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using statistical hypothesis tests. The approach is flexible and widely applicable, and robust optimization problems built from our new sets are computationally tractable, both theoretically and practically. Furthermore, optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. We also propose concrete guidelines for practitioners and illustrate our approach with applications in portfolio management and queueing. Computational evidence confirms that our data-driven sets significantly outperform conventional robust optimization techniques whenever data is available.
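
A hedged Python sketch of the flavor of the approach, using a confidence-region-sized ellipsoid for the mean return vector in a toy portfolio problem. The ellipsoidal set below is a textbook simplification, not one of the hypothesis-test-based sets constructed in the paper, and the sizing by a chi-squared quantile, the data, and the solver are all assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Historical returns for 5 assets.
n_obs, n_assets = 250, 5
true_mu = np.array([0.02, 0.05, 0.08, 0.03, 0.06])
returns = true_mu + rng.normal(0, 0.15, size=(n_obs, n_assets))

mu_hat = returns.mean(axis=0)
Sigma_hat = np.cov(returns, rowvar=False) / n_obs      # covariance of the mean estimate
radius = np.sqrt(chi2.ppf(0.95, df=n_assets))          # sizes the confidence ellipsoid

def negative_worst_case_return(w):
    # Over the ellipsoid {mu : (mu - mu_hat)' Sigma_hat^{-1} (mu - mu_hat) <= radius^2},
    # the worst-case return has the closed form mu_hat'w - radius * sqrt(w' Sigma_hat w).
    return -(mu_hat @ w - radius * np.sqrt(w @ Sigma_hat @ w))

w0 = np.full(n_assets, 1.0 / n_assets)
res = minimize(negative_worst_case_return, w0,
               bounds=[(0.0, 1.0)] * n_assets,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")

print("robust portfolio weights:", np.round(res.x, 3))
print("a non-robust mean-maximizer would go all-in on asset", int(np.argmax(mu_hat)))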

Predicting Crowd Behavior with Big Public Data. Proceedings of the 23rd International Conference on World Wide Web (WWW) Companion, 23:625—630, 2014. Winner, INFORMS Social Media Analytics Best Paper Competition 2015.
Media coverage.
Slides.
Abstract: With public information becoming widely accessible and shared on today's web, greater insights are possible into crowd actions by citizens and non-state actors such as large protests and cyber activism. Turning public data into Big Data, the company Recorded Future continually scans over 300,000 open-content web sources in 7 languages from all over the world, ranging from mainstream news to government publications to blogs and social media. We study the predictive power of this massive public data in forecasting crowd actions such as large protests and cyber campaigns before they occur. Using natural language processing, event information is extracted from content, such as the type of event, what entities are involved and in what role, sentiment and tone, and the occurrence time range of the event discussed. The amount of information is staggering and trends can be seen clearly in sheer numbers. In the first half of this paper we show how we use this data to predict large protests in a selection of 19 countries and 37 cities in Asia, Africa, and Europe with high accuracy using standard learning machines. In the second half we delve into predicting the perpetrators and targets of political cyber attacks with a novel application of the naïve Bayes classifier to high-dimensional sequence mining in massive datasets.
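
A minimal Python sketch (scikit-learn) of the flavor of the paper's second half: a multinomial naive Bayes classifier over sparse token counts predicting, say, the perpetrator of an attack. The documents and labels below are synthetic; the paper applies naive Bayes to high-dimensional sequences mined from the Recorded Future event data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Synthetic "event mention" documents and perpetrator labels (purely illustrative).
docs = [
    "ddos attack bank website defacement threat",
    "phishing campaign government ministry credentials",
    "ddos threat financial services botnet",
    "defacement government portal protest slogan",
]
labels = ["groupA", "groupB", "groupA", "groupB"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)        # high-dimensional sparse counts
model = MultinomialNB().fit(X, labels)

print(model.predict(vectorizer.transform(["botnet ddos threat against bank"]))[0])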

Optimal A Priori Balance in the Design of Controlled Experiments.
Abstract: We develop a unified theory of designs for controlled experiments that balance baseline covariates a priori (before treatment and before randomization) using the framework of minimax variance. We establish a "no free lunch" theorem that indicates that, without structural information on the dependence of potential outcomes on baseline covariates, complete randomization is optimal. Restricting the structure of dependence, either parametrically or non-parametrically, leads directly to imbalance metrics and optimal designs. Certain choices of this structure recover known imbalance metrics and designs previously developed ad hoc, including randomized block designs, pairwise-matched designs, and re-randomization. New choices of structure based on reproducing kernel Hilbert spaces lead to new methods, both parametric and non-parametric.
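
A toy Python sketch of one ingredient above: a kernel-based imbalance metric (the RKHS norm of the difference between the two groups' mean embeddings), here evaluated by simply taking the best of many random balanced splits rather than solving the design optimization exactly. The kernel choice, bandwidth, and sample size are assumptions.

import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                     # baseline covariates for 20 units
K = np.exp(-cdist(X, X, "sqeuclidean") / 2.0)    # Gaussian kernel, bandwidth 1 (assumed)

def kernel_imbalance_sq(assignment):
    # Squared RKHS norm of the difference between treatment- and control-group
    # mean embeddings; zero means the two groups look identical to the kernel.
    t = assignment.astype(bool)
    v = t / t.sum() - (~t) / (~t).sum()
    return v @ K @ v

def random_balanced_assignment():
    a = np.zeros(len(X), dtype=int)
    a[rng.permutation(len(X))[: len(X) // 2]] = 1
    return a

candidates = [random_balanced_assignment() for _ in range(5000)]
scores = [kernel_imbalance_sq(a) for a in candidates]
print("one random assignment, imbalance^2:  ", round(scores[0], 4))
print("best of 5000 assignments, imbalance^2:", round(min(scores), 4))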

The Power of Optimization Over Randomization in Designing Experiments Involving Small Samples. With D. Bertsimas and M. Johnson. Operations Research, 63(4):868—876, 2015. Code.
Abstract: Random assignment, typically seen as the standard in controlled trials, aims to make experimental groups statistically equivalent before treatment. However, with a small sample, which is a practical reality in many disciplines, randomized groups are often too dissimilar to be useful. We propose an approach based on discrete linear optimization to create groups whose discrepancy in their means and variances is several orders of magnitude smaller than with randomization. We provide theoretical and computational evidence that groups created by optimization have exponentially lower discrepancy than those created by randomization.
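
A rough Python illustration of the phenomenon rather than the paper's mixed-integer formulation: starting from a random split of a small skewed sample into two groups, greedy pairwise swaps that reduce the discrepancy in group means and variances drive that discrepancy far below what a single randomization gives. The sample, the discrepancy measure, and the local-search heuristic are assumptions for the demo.

import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(size=20)           # a small sample with a skewed covariate

def discrepancy(assignment):
    g1, g2 = x[assignment == 1], x[assignment == 0]
    return abs(g1.mean() - g2.mean()) + abs(g1.var() - g2.var())

# A random half/half split: what randomization alone would give.
a = np.zeros(20, dtype=int)
a[rng.permutation(20)[:10]] = 1
print("randomized discrepancy:", round(discrepancy(a), 4))

# Greedy local search: keep swapping one unit between the groups while it helps.
improved = True
while improved:
    improved = False
    for i in np.flatnonzero(a == 1):
        for j in np.flatnonzero(a == 0):
            b = a.copy()
            b[i], b[j] = 0, 1
            if discrepancy(b) < discrepancy(a) - 1e-12:
                a, improved = b, True
                break
        if improved:
            break
print("optimized discrepancy: ", round(discrepancy(a), 4))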

Scheduling, Revenue Management, and Fairness in an Academic-Hospital Division: An Optimization Approach. With D. Bertsimas and R. Baum. Academic Radiology, 21(10):1322—1330, 2014. PDF. Editorial comment (D. Avrin).
Abstract: Physician staff of academic hospitals today practice in several geographic locations in addition to their main hospital, collectively referred to as the extended campus. With extended campuses expanding, the growing complexity of a single division's schedule means that a naïve approach to scheduling compromises revenue and can fail to consider physician overexertion. Moreover, it may provide an unfair allocation of individual revenue, desirable or burdensome assignments, and the extent to which the preferences of each individual are met. This has adverse consequences for incentivization and employee satisfaction and is simply bad business policy. We identify the daily scheduling of physicians in this context as an operational problem that incorporates scheduling, revenue management, and fairness. Noting previous success of operations management and optimization in each of these disciplines, we propose a simple, unified optimization formulation of this scheduling problem using mixed integer optimization (MIO). Through a study of implementing the approach at the Division of Angiography and Interventional Radiology at Brigham and Women's Hospital, which is directed by one of the authors, we exemplify the flexibility of the model to adapt to specific applications, the tractability of solving the model in practical settings, and the significant impact of the approach, most notably in increasing revenue substantially while also being more fair and objective.
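
A deliberately tiny Python sketch of the core of the formulation: in its simplest form, assigning physicians to site-days to maximize expected revenue is an assignment problem, and the paper's MIO layers fairness, workload, and preference considerations on top. The names and revenue figures below are made up.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Expected revenue (in $1000s, made up) if a physician covers a given site-day.
physicians = ["A", "B", "C", "D"]
site_days = ["Main-Mon", "Main-Tue", "Campus1-Mon", "Campus2-Tue"]
revenue = np.array([
    [9.0, 7.5, 6.0, 5.5],
    [8.0, 8.5, 7.0, 6.5],
    [6.5, 6.0, 8.0, 7.5],
    [7.0, 6.5, 7.5, 9.0],
])

# Maximize total revenue with one physician per site-day (Hungarian algorithm);
# fairness and preference constraints would require a general MIO solver instead.
rows, cols = linear_sum_assignment(revenue, maximize=True)
for p, s in zip(rows, cols):
    print(f"{physicians[p]} -> {site_days[s]} (${revenue[p, s]}k expected)")
print("total expected revenue ($k):", revenue[rows, cols].sum())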