Nathan Kallus


Hi! My name is Nathan Kallus. I am a Ph.D. student in the Operations Research Center at MIT. On the left is a picture of me departing by ferry from Middelgrundsfortet in the Øresund, between Copenhagen and Malmö. If you want to know more about me, I suggest you look at my C.V. or email me with more personal questions. My email is nathan.kallus at

Interests: Data-driven decision-making, statistical inference and experimental design, and the analytical capacities and challenges of unstructured and large-scale data.

From Predictions to Data-Driven Decisions Using Machine Learning.
Abstract: Predictive analyses taking advantage of the recent explosion in the availability of data have proven incredibly successful. It is not clear, however, how to go from mere predictions to decisions that yield high profits and carry low risk. In this paper we construct novel predictive-prescriptive mechanisms that optimize decisions based directly on historical data and predictive observations. The data-driven prescriptions we develop converge to the omniscient optimum for almost all realizations of data and for almost any given new observation, even in situations where the data are not IID but rather the result of observing an evolving system such as a stock market or social network, which is generally the case in practice. We consider an example of portfolio allocation to illustrate the power of these methods.
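As a toy illustration only (not the construction in the paper), a prescription that weights historical observations by their proximity to a new covariate observation can be sketched via k-nearest neighbors for a newsvendor-style decision; the data, cost parameters, and k below are all invented for the example:

```python
def knn_prescription(history, x_new, k, decisions, cost):
    """Choose the decision minimizing average cost over the k historical
    outcomes whose covariates are closest to the new observation x_new."""
    nearest = sorted(history, key=lambda p: abs(p[0] - x_new))[:k]
    def avg_cost(z):
        return sum(cost(z, y) for _, y in nearest) / k
    return min(decisions, key=avg_cost)

# Hypothetical data: (covariate, demand) pairs, e.g. a demand signal.
history = [(1.0, 10), (1.1, 12), (5.0, 50), (5.2, 48), (9.0, 90)]
# Newsvendor cost of ordering z when demand turns out to be y:
# overage costs 1 per unit, underage costs 3 per unit.
cost = lambda z, y: max(z - y, 0) + 3 * max(y - z, 0)
order = knn_prescription(history, x_new=5.1, k=2, decisions=range(101), cost=cost)
```

Because the underage cost dominates, the prescription orders near the high end of the demands observed at similar covariate values.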

Regression-Robust Designs of Controlled Experiments.
Abstract: Experimental designs that balance pre-treatment measurements (baseline covariates) are in pervasive use throughout the practice of controlled experimentation, including randomized block designs, pairwise-matched designs, and re-randomization. We argue that no balance better than complete randomization can be achieved without partial structural knowledge about the treatment effects, and that such knowledge must therefore be present in these experiments. We propose a novel framework for formulating such knowledge that recovers these designs as optimal under certain modeling choices and suggests new optimal designs that are based on nonparametric modeling and offer significant gains in precision and power. We characterize the unbiasedness, variance, and consistency of the resulting estimators; solve the design problem; and develop appropriate inferential algorithms. We make connections to Bayesian experimental design and discuss extensions to dealing with non-compliance.

Predicting Crowd Behavior with Big Public Data. In the Proceedings of the 23rd International Conference on World Wide Web.
Media coverage.
Abstract: With public information becoming widely accessible and shared on today's web, greater insights are possible into crowd actions by citizens and non-state actors, such as large protests and cyber activism. Turning public data into Big Data, the company Recorded Future continually scans over 300,000 open content web sources in 7 languages from all over the world, ranging from mainstream news to government publications to blogs and social media. We study the predictive power of this massive public data in forecasting crowd actions such as large protests and cyber campaigns before they occur. Using natural language processing, event information is extracted from content, such as the type of event, which entities are involved and in what roles, sentiment and tone, and the occurrence time range of the event discussed. The amount of information is staggering, and trends can be seen clearly in sheer numbers. In the first half of this paper we show how we use this data to predict large protests in a selection of 19 countries and 37 cities in Asia, Africa, and Europe with high accuracy using standard learning machines. In the second half we delve into predicting the perpetrators and targets of political cyber attacks with a novel application of the naïve Bayes classifier to high-dimensional sequence mining in massive datasets.
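To give a sense of the classification setup in the second half, here is a minimal multinomial naïve Bayes classifier over token counts; the actor names and event tokens are invented for illustration, and this is a textbook baseline rather than the paper's high-dimensional sequence-mining variant:

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns, per label, a log-prior,
    Laplace-smoothed per-token log-likelihoods, and an unseen-token fallback."""
    label_counts = Counter(label for _, label in docs)
    vocab = {t for tokens, _ in docs for t in tokens}
    token_counts = {label: Counter() for label in label_counts}
    for tokens, label in docs:
        token_counts[label].update(tokens)
    model = {}
    for label, n_docs in label_counts.items():
        total = sum(token_counts[label].values())
        model[label] = (
            math.log(n_docs / len(docs)),
            {t: math.log((token_counts[label][t] + 1) / (total + len(vocab)))
             for t in vocab},
            math.log(1 / (total + len(vocab))),  # fallback for unseen tokens
        )
    return model

def classify(model, tokens):
    """Return the label maximizing log-prior plus summed log-likelihoods."""
    def score(label):
        prior, loglik, unseen = model[label]
        return prior + sum(loglik.get(t, unseen) for t in tokens)
    return max(model, key=score)

# Invented event streams attributed to two hypothetical actors.
docs = [(["ddos", "bank"], "groupA"), (["ddos", "gov"], "groupA"),
        (["defacement", "news"], "groupB")]
model = train_nb(docs)
```

A new event stream is then attributed to whichever actor makes its tokens most probable, e.g. `classify(model, ["ddos", "bank"])`.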

Robust SAA. Winner of the Best Student Paper Award, MIT Operations Research Center 2013. With D. Bertsimas and V. Gupta.
Abstract: Sample average approximation (SAA) is possibly the most popular approach to modeling decision making under uncertainty in data-driven settings. In SAA, one approximates a true, unknown probability distribution by the empirical distribution defined by the data. Under mild assumptions, as the amount of data grows, the solutions of SAA-based optimization problems converge asymptotically to the solutions that would be obtained if the underlying distribution were known. In this paper, we propose a general-purpose modification of SAA that retains this asymptotic convergence but also enjoys a strong finite-sample performance guarantee. The key idea is to define a suitable robust optimization problem over the set of distributions that are close to the empirical distribution, using tools from statistical hypothesis testing. The resulting optimization problem is computationally tractable and solvable using off-the-shelf solvers. We illustrate the approach by studying some specific inventory models in data-driven settings. Computational evidence confirms that our approach significantly outperforms other data-driven approaches to such problems.
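To convey the flavor of robustifying SAA (the paper builds its distribution sets from statistical hypothesis tests; the total-variation ball below is a deliberately simplified stand-in with made-up data), one can hedge the empirical average against small reweightings of the sample:

```python
def worst_case_mean(costs, eps):
    """Worst-case expected cost over reweightings of the empirical
    distribution within total-variation distance eps (requires eps < 1/n):
    move eps of probability mass from the cheapest scenario to the costliest."""
    n = len(costs)
    p = [1.0 / n] * n
    order = sorted(range(n), key=lambda i: costs[i])
    p[order[0]] -= eps
    p[order[-1]] += eps
    return sum(pi * c for pi, c in zip(p, costs))

def robust_saa(samples, decisions, cost, eps):
    """Minimize the worst-case (rather than plain empirical) average cost."""
    return min(decisions,
               key=lambda z: worst_case_mean([cost(z, y) for y in samples], eps))

# With eps = 0 this reduces to plain SAA; a positive eps hedges the decision
# toward scenarios that are costly under the chosen loss.
saa_choice = robust_saa([10, 12, 50], range(61), lambda z, y: abs(z - y), 0.0)
robust_choice = robust_saa([10, 12, 50], range(61), lambda z, y: abs(z - y), 0.2)
```

Plain SAA here tracks the sample median, while the robust version shifts toward the costly outlier scenario, illustrating the finite-sample hedge.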

Data-Driven Robust Optimization. Finalist, INFORMS Nicholson Paper Competition 2013. With D. Bertsimas and V. Gupta.
Abstract: The last decade has seen an explosion in the availability of data for operations research applications as part of the Big Data revolution. Motivated by this data-rich paradigm, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using statistical hypothesis tests. The approach is flexible and widely applicable, and robust optimization problems built from our new sets are computationally tractable, both theoretically and practically. Furthermore, optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. We also propose concrete guidelines for practitioners and illustrate our approach with applications in portfolio management and queueing. Computational evidence confirms that our data-driven sets significantly outperform conventional robust optimization techniques whenever data is available.
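As a loose sketch of a data-driven uncertainty set (using plain empirical quantiles rather than the paper's hypothesis-test constructions), consider a box set for long-only portfolio returns; the return data are fabricated for the example:

```python
def empirical_box(samples, alpha):
    """Per-asset interval between the empirical alpha/2 and 1 - alpha/2
    quantiles of the observed return vectors."""
    lows, highs = [], []
    for coord in zip(*samples):
        srt = sorted(coord)
        n = len(srt)
        lows.append(srt[int(n * alpha / 2)])
        highs.append(srt[int(n * (1 - alpha / 2)) - 1])
    return lows, highs

def robust_long_only(samples, alpha):
    """Maximize the worst-case return over the box set: with long-only
    weights summing to one and a linear objective, the optimum
    concentrates on the asset with the largest lower endpoint."""
    lows, _ = empirical_box(samples, alpha)
    best = max(range(len(lows)), key=lambda i: lows[i])
    weights = [0.0] * len(lows)
    weights[best] = 1.0
    return weights

# Fabricated returns: asset 0 has the higher mean but much fatter losses.
asset0 = [0.10, -0.30, 0.20, 0.15, -0.25, 0.12, 0.18, -0.20, 0.11, 0.30]
asset1 = [0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.03, 0.01, 0.02, 0.02]
weights = robust_long_only(list(zip(asset0, asset1)), alpha=0.2)
```

The robust allocation avoids the higher-mean asset because its empirical lower quantile is far worse, which is exactly the conservatism a box set encodes; the paper's sets are shaped to be less conservative while keeping a probabilistic guarantee.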

The Power of Optimization Over Randomization in Designing Experiments Involving Small Samples. With D. Bertsimas and M. Johnson.
Abstract: Random assignment, typically seen as the standard in controlled trials, aims to make experimental groups statistically equivalent before treatment. However, with a small sample, which is a practical reality in many disciplines, randomized groups are often too dissimilar to be useful. We propose an approach based on discrete linear optimization to create groups whose discrepancy in their means and variances is several orders of magnitude smaller than with randomization. We provide theoretical and computational evidence that groups created by optimization have exponentially lower discrepancy than those created by randomization.
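A first-moment caricature of the idea: exhaustively search balanced two-group splits for the one minimizing the gap in group means. The paper's method also balances variances and uses discrete linear optimization rather than enumeration; the covariate values below are invented:

```python
from itertools import combinations

def best_split(values):
    """Search all balanced two-group splits of the subjects and return the
    one with the smallest gap between group means, together with that gap."""
    n = len(values)
    everyone = set(range(n))
    best, best_gap = None, float("inf")
    for group in combinations(range(n), n // 2):
        rest = everyone - set(group)
        m1 = sum(values[i] for i in group) / len(group)
        m2 = sum(values[i] for i in rest) / len(rest)
        if abs(m1 - m2) < best_gap:
            best, best_gap = sorted(group), abs(m1 - m2)
    return best, best_gap

# Invented baseline covariate for 8 subjects; a random 4/4 split typically
# leaves a visible gap in means, while the optimized split nearly erases it.
covariate = [3.1, 0.2, 5.0, 2.9, 0.3, 4.8, 1.0, 1.1]
treatment, gap = best_split(covariate)
```

Even at this tiny scale the optimized split achieves an essentially zero mean discrepancy, whereas typical random splits do not.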

Scheduling, Revenue Management, and Fairness in an Academic-Hospital Division: An Optimization Approach. With D. Bertsimas and R. Baum. Academic Radiology, 2004.
Abstract: Physician staff of academic hospitals today practice in several geographic locations beyond their main hospital, referred to collectively as the extended campus. With extended campuses expanding, the growing complexity of a single division's schedule means that a naïve approach to scheduling compromises revenue and can fail to consider physician over-exertion. Moreover, it may provide an unfair allocation of individual revenue, of desirable or burdensome assignments, and of the extent to which the preferences of each individual are met. This has adverse consequences for incentivization and employee satisfaction and runs counter to business policy. We identify the daily scheduling of physicians in this context as an operational problem that incorporates scheduling, revenue management, and fairness. Noting the previous success of operations management and optimization in each of these disciplines, we propose a simple, unified formulation of this scheduling problem using mixed integer optimization (MIO). Through a study of implementing the approach at the Division of Angiography and Interventional Radiology at Brigham and Women's Hospital, which is directed by one of the authors, we exemplify the flexibility of the model in adapting to specific applications, the tractability of solving the model in practical settings, and the significant impact of the approach, most notably in substantially increasing revenue while also being more fair and objective.
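To illustrate the flavor of such a formulation (a toy stand-in, not the division's actual model), one can maximize revenue over day-to-physician assignments subject to a crude workload cap; real instances are solved with MIO solvers rather than enumeration, and the revenue figures here are invented:

```python
from itertools import product

# Invented revenue for physician p covering day d (3 physicians, 4 days).
revenue = [[5, 3, 4, 2],
           [4, 5, 2, 3],
           [6, 6, 5, 5]]

def schedule(revenue, max_days=2):
    """Enumerate all one-physician-per-day assignments, discard any where a
    physician exceeds max_days (a crude workload/fairness cap), and return
    a revenue-maximizing feasible assignment with its total revenue."""
    n_phys, n_days = len(revenue), len(revenue[0])
    best, best_rev = None, -1
    for assign in product(range(n_phys), repeat=n_days):
        if max(assign.count(p) for p in range(n_phys)) > max_days:
            continue
        rev = sum(revenue[p][d] for d, p in enumerate(assign))
        if rev > best_rev:
            best, best_rev = assign, rev
    return best, best_rev

assignment, total = schedule(revenue)
```

Here the unconstrained optimum would assign every day to physician 2; the cap forces the work to be shared, trading a little revenue for a fairer schedule, which is the tension the MIO formulation manages at scale.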

Odds and ends

My NSF Graduate Research Fellowship research proposal.