Nathan Kallus


Hi! My name is Nathan Kallus.

I am a fifth-year Ph.D. candidate at the Operations Research Center at MIT.

My research interests include data-driven decision-making, statistical inference and experimental design, and the analytical capacities and challenges of unstructured and large-scale data.

If you want to know more about me, I suggest you look at my C.V. below or email me.

My email is nathan.kallus at gmail.com.



Research interests

Education

Peer-reviewed publications

Working papers

Invited talks

Contributed talks

Honors

Employment

Teaching

Service and affiliations

Interview with Nathan for Data Science Weekly.

Interview with Nathan for an invited guest blog at Scientific American.

Media coverage on "Predicting Crowd Behavior with Big Public Data":
Fast Company.
Vice.
MIT Technology Review.
GigaOM.
More coverage.

Research interests: Data-driven decision-making, statistical inference and experimental design, and the analytical capacities and challenges of unstructured and large-scale data.

Papers in preparation

From Predictive to Prescriptive Analytics. With D. Bertsimas.

Predictive-Prescriptive Analytics for Personalized Treatments of Hypertension and Diabetes. With D. Bertsimas, A. Weinstein, and D. Zhuo.

Papers published or submitted

Robust SAA. Winner of the Best Student Paper Award, MIT Operations Research Center 2013. With D. Bertsimas and V. Gupta.
Abstract: Sample average approximation (SAA) is possibly the most popular approach to modeling decision making under uncertainty in data-driven settings. In SAA, one approximates a true, unknown probability distribution by the empirical distribution defined by the data. Under mild assumptions, as the amount of data grows, the solutions of SAA-based optimization problems converge asymptotically to the solutions that would be obtained if the underlying distribution were known. In this paper, we propose a general-purpose modification of SAA that retains this asymptotic convergence but also enjoys a strong finite-sample performance guarantee. The key idea is to define a suitable robust optimization over the set of distributions that are close to the empirical distribution, using tools from statistical hypothesis testing. The resulting optimization problem is computationally tractable and solvable using off-the-shelf solvers. We illustrate the approach by studying specific inventory models in data-driven settings. Computational evidence confirms that our approach significantly outperforms other data-driven approaches to such problems.
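
To make the idea concrete, here is a minimal Python sketch (not the paper's exact formulation) of a data-driven newsvendor solved robustly over a KL-divergence ball around the empirical distribution; the demand data, cost parameters, and ball radius are hypothetical stand-ins, whereas the paper calibrates such sets via statistical hypothesis tests.

```python
# Minimal sketch of a robust-SAA-style newsvendor (hypothetical data/parameters).
# Worst case is taken over distributions in a KL-divergence ball around the
# empirical distribution; the paper instead calibrates sets via hypothesis tests.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.poisson(20, size=50)  # stand-in demand sample

def cost(q, d, c_over=1.0, c_under=4.0):
    # Newsvendor cost: overage plus underage.
    return c_over * np.maximum(q - d, 0) + c_under * np.maximum(d - q, 0)

def worst_case_mean(costs, radius):
    # Dual of  sup { E_p[cost] : KL(p || p_hat) <= radius }:
    #   inf_{a > 0}  a * log E_phat[exp(cost / a)] + a * radius
    m = costs.max()  # shift for numerical stability
    def dual(a):
        return m + a * np.log(np.mean(np.exp((costs - m) / a))) + a * radius
    return minimize_scalar(dual, bounds=(1e-8, 1e4), method="bounded").fun

radius = 0.05  # hypothetical; would come from a hypothesis-test confidence level
qs = np.arange(10, 36)
wc = [worst_case_mean(cost(q, demand), radius) for q in qs]
print("robust order quantity:", qs[int(np.argmin(wc))])
```

The dual reformulation reduces the inner supremum over distributions to a one-dimensional minimization, which is what keeps each worst-case expectation cheap to evaluate.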

Data-Driven Robust Optimization. Finalist, INFORMS Nicholson Paper Competition 2013. With D. Bertsimas and V. Gupta.
Abstract: The last decade has seen an explosion in the availability of data for operations research applications as part of the Big Data revolution. Motivated by this data-rich paradigm, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using statistical hypothesis tests. The approach is flexible and widely applicable, and robust optimization problems built from our new sets are computationally tractable, both theoretically and practically. Furthermore, optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. We also propose concrete guidelines for practitioners and illustrate our approach with applications in portfolio management and queueing. Computational evidence confirms that our data-driven sets significantly outperform conventional robust optimization techniques whenever data is available.
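
The workflow can be sketched end to end on toy data: estimate a confidence region for the unknown parameters, then solve the robust counterpart over that region. The box set below is built from marginal t-based confidence intervals and is far simpler than the test-based sets developed in the paper; all data and parameters here are synthetic.

```python
# Toy data-driven robust portfolio (synthetic data; box uncertainty set).
# The paper builds richer sets from hypothesis tests, but the workflow is the
# same: data -> confidence region for the unknowns -> tractable robust program.
import numpy as np
from scipy import stats
from scipy.optimize import linprog

rng = np.random.default_rng(1)
returns = rng.normal([0.05, 0.08, 0.03], [0.1, 0.2, 0.05], size=(250, 3))

n, k = returns.shape
mu_hat = returns.mean(axis=0)
se = returns.std(axis=0, ddof=1) / np.sqrt(n)
# Per-asset 95% confidence interval for the mean return (t-based).
delta = stats.t.ppf(0.975, df=n - 1) * se
mu_lo = mu_hat - delta  # worst-case means within the box set

# Robust long-only portfolio: max_w min_{mu in box} mu.T w  =  max_w mu_lo.T w
res = linprog(c=-mu_lo, A_eq=np.ones((1, k)), b_eq=[1.0], bounds=[(0, 1)] * k)
print("robust weights:", np.round(res.x, 3))
```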

Predicting Crowd Behavior with Big Public Data. In the Proceedings of the 23rd International Conference on World Wide Web.
Media coverage.
Slides.
Abstract: With public information becoming widely accessible and shared on today's web, greater insights are possible into crowd actions by citizens and non-state actors, such as large protests and cyber activism. Turning public data into Big Data, the company Recorded Future continually scans over 300,000 open-content web sources in 7 languages from all over the world, ranging from mainstream news to government publications to blogs and social media. We study the predictive power of this massive public data in forecasting crowd actions such as large protests and cyber campaigns before they occur. Using natural language processing, event information is extracted from the content, such as the type of event, what entities are involved and in what role, sentiment and tone, and the occurrence time range of the event discussed. The amount of information is staggering, and trends can be seen clearly in sheer numbers. In the first half of this paper we show how we use this data to predict large protests in a selection of 19 countries and 37 cities in Asia, Africa, and Europe with high accuracy using standard learning machines. In the second half we delve into predicting the perpetrators and targets of political cyber attacks with a novel application of the naïve Bayes classifier to high-dimensional sequence mining in massive datasets.
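
As a rough illustration of the modeling flavor (on entirely synthetic data, not Recorded Future's corpus, and much simpler than the paper's sequence-mining application), the sketch below trains a multinomial naïve Bayes classifier on daily event-type counts to flag protest-prone days.

```python
# Synthetic sketch: naive Bayes on daily event-type counts. The features,
# label-generating process, and sizes are all hypothetical stand-ins.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_days, n_event_types = 1000, 12
X = rng.poisson(2, size=(n_days, n_event_types))  # daily event-type counts
logits = 0.8 * X[:, 0] + 0.5 * X[:, 3] - 4.0      # protest-related signals
y = (rng.random(n_days) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MultinomialNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```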

Optimal A Priori Balance in the Design of Controlled Experiments.
Abstract: We develop a unified theory of designs for controlled experiments that balance baseline covariates a priori (before treatment and before randomization) using the framework of minimax variance. We establish a "no free lunch" theorem that indicates that, without structural information on the dependence of potential outcomes on baseline covariates, complete randomization is optimal. Restricting the structure of dependence, either parametrically or non-parametrically, leads directly to imbalance metrics and optimal designs. Certain choices of this structure recover known imbalance metrics and designs previously developed ad hoc, including randomized block designs, pairwise-matched designs, and re-randomization. New choices of structure based on reproducing kernel Hilbert spaces lead to new methods, both parametric and non-parametric.
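
A rough sketch of how a kernel-based imbalance metric can drive a design: the code below scores candidate assignments with a squared-MMD-style statistic under an RBF kernel and keeps the most balanced of many random draws (a simple re-randomization rule). The kernel, metric, and acceptance rule are illustrative stand-ins rather than the paper's constructions.

```python
# Illustrative kernel imbalance metric plus re-randomization (all choices
# here, kernel, bandwidth, number of draws, are hypothetical stand-ins).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4))  # baseline covariates for 20 subjects

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_imbalance(X, assign):
    # Squared MMD between treated and control covariate samples.
    T, C = X[assign == 1], X[assign == 0]
    return (rbf_kernel(T, T).mean() + rbf_kernel(C, C).mean()
            - 2 * rbf_kernel(T, C).mean())

best, best_assign = np.inf, None
for _ in range(2000):  # re-randomization: keep the most balanced draw
    assign = rng.permutation([0, 1] * 10)
    imb = kernel_imbalance(X, assign)
    if imb < best:
        best, best_assign = imb, assign
print("best imbalance:", best)
```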

The Power of Optimization Over Randomization in Designing Experiments Involving Small Samples. With D. Bertsimas and M. Johnson.
Abstract: Random assignment, typically seen as the standard in controlled trials, aims to make experimental groups statistically equivalent before treatment. However, with a small sample, which is a practical reality in many disciplines, randomized groups are often too dissimilar to be useful. We propose an approach based on discrete linear optimization to create groups whose discrepancy in their means and variances is several orders of magnitude smaller than with randomization. We provide theoretical and computational evidence that groups created by optimization have exponentially lower discrepancy than those created by randomization.
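
The contrast is easy to reproduce at toy scale: the sketch below uses brute-force enumeration (a stand-in for the paper's discrete linear optimization) to split a hypothetical sample of 10 into two groups of 5 minimizing the discrepancy in means and variances, and compares the result with a single random split.

```python
# Toy version of the idea (brute force instead of the paper's MIO): split a
# small sample into two groups minimizing mean/variance discrepancy.
import itertools
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=10)  # hypothetical baseline measurements, n = 10

def discrepancy(g1, g2):
    return abs(g1.mean() - g2.mean()) + abs(g1.var() - g2.var())

best, best_split = np.inf, None
for idx in itertools.combinations(range(10), 5):
    mask = np.zeros(10, dtype=bool)
    mask[list(idx)] = True
    d = discrepancy(x[mask], x[~mask])
    if d < best:
        best, best_split = d, idx

random_mask = np.zeros(10, dtype=bool)
random_mask[rng.choice(10, 5, replace=False)] = True
print("optimized discrepancy:", best)
print("one random split's discrepancy:",
      discrepancy(x[random_mask], x[~random_mask]))
```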

Scheduling, Revenue Management, and Fairness in an Academic-Hospital Division: An Optimization Approach. With D. Bertsimas and R. Baum. Academic Radiology, Volume 21, Issue 10, October 2014, Pages 1322-1330. PDF. Editorial comment (D. Avrin).
Abstract: Physician staff of academic hospitals today practice in several geographic locations beyond their main hospital, referred to as the extended campus. With extended campuses expanding, the growing complexity of a single division's schedule means that a naïve approach to scheduling compromises revenue and can fail to consider physician over-exertion. Moreover, it may provide an unfair allocation of individual revenue, desirable or burdensome assignments, and the extent to which the preferences of each individual are met. This has adverse consequences for incentivization and employee satisfaction and runs counter to business policy. We identify the daily scheduling of physicians in this context as an operational problem that incorporates scheduling, revenue management, and fairness. Noting the previous success of operations management and optimization in each of these disciplines, we propose a simple, unified optimization formulation of this scheduling problem using mixed integer optimization (MIO). Through a study of implementing the approach at the Division of Angiography and Interventional Radiology at the Brigham and Women's Hospital, which is directed by one of the authors, we demonstrate the flexibility of the model to adapt to specific applications, the tractability of solving the model in practical settings, and the significant impact of the approach, most notably a substantial increase in revenue achieved while also improving fairness and objectivity.
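
As a stylized stand-in for the optimization core (a plain assignment problem rather than the paper's full MIO, which also encodes fairness, preferences, and workload limits), the sketch below assigns physicians to location-day slots to maximize revenue on a synthetic revenue table.

```python
# Stylized stand-in for the daily scheduling problem (synthetic revenue table;
# the paper's MIO also handles fairness, preferences, and workload limits).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
# revenue[i, j]: expected revenue if physician i covers location/day slot j
revenue = rng.uniform(5, 15, size=(4, 4))

rows, cols = linear_sum_assignment(revenue, maximize=True)
for i, j in zip(rows, cols):
    print(f"physician {i} -> slot {j} (revenue {revenue[i, j]:.1f})")
print("total revenue:", revenue[rows, cols].sum().round(1))
```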