**January 2022.**

Do urban children live more segregated lives than urban adults? Using cellphone location data and following the ‘experienced isolation’ methodology of Athey et al. (2021), we compare the isolation of students over the age of 16—whom we identify based on their time spent at a high school—and adults. We find that students in cities experience significantly less integration in their day-to-day lives than adults. The average student experiences 27% more isolation outside of the home than the average adult. Even when comparing students and adults living in the same neighborhood, exposure to devices associated with a different race is 20% lower for students. Looking at broader measures of urban mobility, we find that students spend more time at home, more time closer to home when they do leave the house, and less time at school than adults spend at work. Finally, we find correlational evidence that neighborhoods with more geographic mobility today also had more intergenerational income mobility in the past. We hope future work will more rigorously test the hypothesis that different geographic mobility patterns for children and adults can explain why urban density appears to boost adult wages but reduce intergenerational income mobility.
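The experienced-isolation measure itself is defined in Athey et al. (2021), but the core exposure idea behind it can be sketched in a few lines. The toy Python function below (the function name, visit-record format, and group labels are all illustrative, not the paper's actual code or data schema) computes, for each device, the time-weighted share of co-visitors at its visited places who belong to a different group:

```python
from collections import defaultdict

def cross_group_exposure(visits, group):
    """Time-weighted share of co-visitors from a different group.

    visits: list of (device_id, place_id, minutes) records
    group:  dict mapping device_id -> group label (e.g., inferred race)
    Returns a dict device_id -> exposure in [0, 1].
    """
    # Total minutes spent at each place, overall and per group.
    place_group = defaultdict(float)
    place_total = defaultdict(float)
    for dev, place, mins in visits:
        place_group[(place, group[dev])] += mins
        place_total[place] += mins

    exposure = defaultdict(float)
    weight = defaultdict(float)
    for dev, place, mins in visits:
        others = place_total[place] - mins            # co-visitors' minutes
        if others <= 0:
            continue                                   # device was alone
        own = place_group[(place, group[dev])] - mins  # same-group co-visitors
        exposure[dev] += mins * (1 - own / others)     # different-group share
        weight[dev] += mins
    return {d: exposure[d] / weight[d] for d in weight if weight[d] > 0}
```

Lower values of this exposure measure correspond to more isolation; the paper's finding is that, holding the home neighborhood fixed, students' exposure is about 20% lower than adults'.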

**December 2021.**

We develop a machine-learning solution algorithm to solve for optimal portfolio choice in a detailed and quantitatively accurate lifecycle model that includes many features of reality modelled only separately in previous work. We use the quantitative model to evaluate the consumption-equivalent welfare losses from using simple rules for portfolio allocation across stocks, bonds, and liquid accounts instead of the optimal portfolio choices. We find that the consumption-equivalent losses from using an age-dependent rule as embedded in current target-date/lifecycle funds (TDFs) are substantial, around 2 to 3 percent of consumption, even though TDF rules closely mimic average optimal behavior by age until shortly before retirement. Our model recommends higher average equity shares in the second half of life than the typical TDF, so much so that the typical TDF portfolio does not improve on investing an age-independent 2/3 share in equity. Finally, optimal equity shares exhibit substantial heterogeneity, particularly by wealth level, state of the business cycle, and dividend-price ratio, implying substantial gains from further customization of advice or TDFs along these dimensions.
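To make the comparison concrete, here is a minimal sketch of the two kinds of rules being contrasted: a hypothetical age-dependent glidepath (the common "110 minus age" heuristic, which is illustrative only and not the paper's calibration of TDF rules) versus the age-independent 2/3 equity benchmark the paper uses:

```python
def glidepath_equity_share(age, floor=0.30, cap=0.90):
    """Illustrative TDF-style rule: roughly (110 - age)% in equity,
    clamped to a floor and cap. Not the paper's actual TDF glidepath."""
    return min(cap, max(floor, (110 - age) / 100))

def constant_equity_share(age):
    """Age-independent benchmark: 2/3 in equity at every age."""
    return 2 / 3
```

The paper's point is that, evaluated in the full lifecycle model, the first kind of rule does not improve on the second, because optimal equity shares later in life are higher than typical glidepaths prescribe.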

**November 2021.**

In recent years, the designs of many new blockchain applications have been inspired by the Byzantine fault tolerance (BFT) problem. While traditional BFT protocols assume that most system nodes are honest (in that they follow the protocol), we recognize that blockchains are deployed in environments where nodes are subject to strategic incentives. This paper develops an economic framework for analyzing such cases. Specifically, we assume that 1) non-Byzantine nodes are rational, so we explicitly study their incentives when participating in a BFT consensus process; 2) non-Byzantine nodes are ambiguity averse and, specifically, face Knightian uncertainty about Byzantine actions; and 3) decisions and inferences are all based on local information. We thus obtain a consensus game with pre-play communication. We characterize all equilibria, some of which feature rational leaders withholding messages from some nodes in order to achieve consensus. These findings enrich those from traditional BFT algorithms, where an honest leader always sends messages to all nodes. We also study how the state of communication technology (i.e., the possibility of message losses) affects the equilibrium consensus outcome.
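For context, the classical honest-node baseline that the paper departs from can be stated in a few lines. A minimal sketch (function names are illustrative) of the standard BFT requirement that n ≥ 3f + 1 nodes tolerate f Byzantine ones, with commitment requiring a quorum of 2f + 1 matching votes:

```python
def bft_quorum(n: int, f: int) -> int:
    """Smallest vote count guaranteeing that any two quorums intersect
    in at least one honest node, given n total nodes of which up to f
    may be Byzantine. Classical BFT safety requires n >= 3f + 1."""
    if n < 3 * f + 1:
        raise ValueError("n must be at least 3f + 1 for BFT safety")
    return 2 * f + 1

def consensus_reached(votes: int, n: int, f: int) -> bool:
    """A value is committed once it gathers a quorum of matching votes."""
    return votes >= bft_quorum(n, f)
```

The paper's contribution is precisely that this honest-behavior baseline need not describe equilibrium play once non-Byzantine nodes are rational and ambiguity averse; the sketch only fixes the benchmark being enriched.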

**November 2021.**

There is growing concern that the increasing use of machine learning and artificial intelligence-based systems may exacerbate health disparities through discrimination. We provide a hierarchical definition of discrimination consisting of algorithmic discrimination, arising from predictive scores used for allocating resources, and human discrimination, arising from human decision-makers allocating resources conditional on these predictive scores. We then offer an overarching statistical framework of algorithmic discrimination through the lens of measurement errors, a lens familiar to the health economics audience. Specifically, we show that algorithmic discrimination exists when measurement errors exist in either the outcome or the predictors, and there is endogenous selection for participation in the observed data. The absence of any of these phenomena would eliminate algorithmic discrimination. We show that although equalized odds constraints can be employed as bias-mitigating strategies, such constraints may increase algorithmic discrimination when there is measurement error in the dependent variable.
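An equalized odds constraint requires that true-positive and false-positive rates agree across groups. A minimal sketch, assuming binary outcomes and predictions (function names and the gap statistic are illustrative, not the paper's formulation), of measuring how far a classifier is from satisfying it:

```python
def group_rates(y_true, y_pred, grp, g):
    """True-positive and false-positive rates within group g."""
    tp = fp = pos = neg = 0
    for yt, yp, gi in zip(y_true, y_pred, grp):
        if gi != g:
            continue
        if yt == 1:
            pos += 1
            tp += yp
        else:
            neg += 1
            fp += yp
    return (tp / pos if pos else 0.0, fp / neg if neg else 0.0)

def equalized_odds_gap(y_true, y_pred, grp):
    """Largest across-group difference in TPR or FPR.
    A gap of 0 means the predictions satisfy equalized odds."""
    tprs, fprs = {}, {}
    for g in set(grp):
        tprs[g], fprs[g] = group_rates(y_true, y_pred, grp, g)
    return max(max(tprs.values()) - min(tprs.values()),
               max(fprs.values()) - min(fprs.values()))
```

The paper's caution is that forcing this gap to zero is computed against the *observed* outcome; when that outcome is itself measured with error, the constraint can worsen, rather than mitigate, algorithmic discrimination.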

**October 2021.**

Economic data engineering deliberately designs novel forms of data to solve fundamental identification problems associated with economic models of choice. I outline three diverse applications: to the economics of information; to life-cycle employment, earnings, and spending; and to public policy analysis. In all three cases one and the same fundamental identification problem is driving data innovation: that of separately identifying appropriately rich preferences and beliefs. In addition to presenting these conceptually linked examples, I provide a general overview of the engineering process, outline important next steps, and highlight larger opportunities.

**October 2021.**

This paper extends my research applying statistical decision theory to treatment choice with sample data, using maximum regret to evaluate the performance of treatment rules. The specific new contribution is to study as-if optimization using estimates of illness probabilities in clinical choice between surveillance and aggressive treatment. Beyond its specifics, the paper sends a broad message. Statisticians and computer scientists have addressed conditional prediction for decision making in indirect ways, the former applying classical statistical theory and the latter measuring prediction accuracy in test samples. Neither approach is satisfactory. Statistical decision theory provides a coherent, generally applicable methodology.
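The maximum-regret criterion at the heart of this research program is simple to state: in each state of nature, a rule's regret is the shortfall of its welfare relative to the best action available in that state, and rules are ranked by their worst-case regret. A toy sketch (the states, action labels, and welfare numbers are illustrative, not the paper's clinical calibration):

```python
def max_regret(welfare, rule):
    """Maximum regret of a decision rule over states of nature.

    welfare: dict state -> {action: welfare of that action in the state}
    rule:    dict state -> action the rule chooses
    Regret in a state = best achievable welfare there minus the
    rule's welfare; the rule is scored by its worst state.
    """
    return max(max(w.values()) - w[rule[s]] for s, w in welfare.items())
```

For example, with two states ("low" and "high" illness probability) and two actions ("surveil" and "treat"), a rule that always treats incurs its full regret in the low-probability state, while the state-contingent optimal rule has maximum regret zero.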

**October 2021.**

Identification in VARs has traditionally relied mainly on second moments. Some researchers have considered using higher moments as well, but there are concerns about the strength of the identification obtained in this way. In this paper, we propose refining existing identification schemes by augmenting sign restrictions with a requirement that rules out shocks whose higher moments significantly depart from independence. This approach does not assume that higher moments help with identification; it is robust to weak identification. In simulations we show that it controls coverage well, in contrast to approaches that assume that the higher moments deliver point identification. However, it requires large sample sizes and/or considerable non-normality to reduce the width of confidence intervals by much. In empirical applications, we find that the approach can reject many possible rotations. The resulting confidence sets for impulse responses may be non-convex, corresponding to disjoint parts of the space of rotation matrices. We show that in this case, augmenting sign and magnitude restrictions with an independence requirement can yield bigger gains.
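The acceptance step in a standard sign-restriction scheme, which the paper augments with an independence requirement on the shocks' higher moments, can be sketched for a bivariate SVAR. The sketch below shows only the conventional step, checking which rotation angles produce impact responses with the required signs, and omits the paper's higher-moment test entirely (function names are illustrative; B would typically be a Cholesky factor of the reduced-form covariance):

```python
import numpy as np

def satisfies_signs(B, theta, signs):
    """Impact responses of a 2-shock SVAR under rotation angle theta:
    the columns of B @ Q(theta) are the impact impulse responses.
    Keep the rotation only if every entry matches the sign pattern."""
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    irf0 = B @ Q
    return bool(np.all(np.sign(irf0) == signs))

def admissible_angles(B, signs, n_grid=360):
    """Grid over rotation angles; return those satisfying the signs.
    Disjoint runs of admissible angles correspond to the non-convex
    confidence sets the abstract describes."""
    thetas = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    return [t for t in thetas if satisfies_signs(B, t, signs)]
```

In the paper's refinement, each rotation surviving this sign check would additionally be retained only if the implied structural shocks do not significantly depart from independence in their higher moments.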