nicholas wu

research

working papers

How Do Digital Advertising Auctions Impact Product Prices?

joint with Dirk Bergemann and Alessandro Bonatti

revision requested at the Review of Economic Studies. An extended abstract appears in EC'23.

arXiv link

abstract

We present a model of digital advertising with three key features: (i) advertisers can reach consumers on and off a platform, (ii) additional data enhances the value of advertiser-consumer matches, and (iii) bidding follows auction-like mechanisms. We contrast data-augmented auctions, which leverage the platform's data advantage to improve match quality, with managed campaign mechanisms, which automate match formation and price-setting. The platform-optimal mechanism is a managed campaign that conditions on-platform prices for sponsored products on the off-platform prices set by all advertisers. This mechanism yields the efficient on-platform allocation but inefficient off-platform allocations due to high product prices; it attains the vertical integration profit for the platform and advertisers, and, relative to data-augmented auctions, it increases off-platform product prices and decreases consumer surplus.
From Doubt to Devotion: Trials and Learning-Based Pricing

joint with Tan Gan

presented at SITE 2023

link to paper

abstract

An informed seller designs a dynamic mechanism to sell an experience good. The seller has partial information about the product match, which affects the buyer's private consumption experience. We characterize equilibrium mechanisms of this dynamic informed-principal problem. The belief gap between the informed seller and the uninformed buyer, coupled with the buyer's learning, gives rise to mechanisms that provide the skeptical buyer with limited access to the product and an option to upgrade if the buyer is swayed by a good experience. Depending on the seller's screening technology, this takes the form of free or discounted trials or of tiered pricing, both of which are prevalent in digital markets. In contrast to static environments, access to consumer data can reduce sellers' equilibrium revenue, because sellers fine-tune the dynamic design using data that forecasts the buyer's learning process.
Sharing Credit for Joint Research

submitted

arXiv link

abstract

How closely should researchers monitor their collaborators when participating in risky research? First, I show that efficiency can be achieved by allocating payoffs asymmetrically between the researcher who makes a breakthrough (the "winner") and the others, even if agents cannot observe each other's effort. When the winner's identity is non-contractible, allocating credit based on effort at the time of the breakthrough also suffices to achieve efficiency; that is, the terminal effort profile, rather than the full history of effort, is a sufficient statistic. These findings suggest that simple mechanisms using minimal information are robust and effective in addressing inefficiencies in strategic experimentation.
Maximal Procurement Under a Budget

joint with Nicole Immorlica and Brendan Lucier

presented at the NBER Market Design Working Group Meeting, Fall 2023

arXiv link

abstract

We study the problem of a principal who wants to influence an agent's observable action, subject to an ex-post budget. The agent has a private type that determines their cost function. The paper endogenizes the value of the resource that drives incentives: the resource has no inherent value but is available only in finite supply. We characterize the optimal mechanism, showing the emergence of a pooling region in which the budget constraint binds for low-cost types. We then introduce a linear value for the transferable resource; as the principal's value increases, the mechanism demands more from agents with a binding budget constraint but less from others.

published papers

Explainable Machine Learning Models of Consumer Credit Risk

joint with Randall Davis, Andrew Lo, Sudhanshu Mishra, Arash Nourian, Manish Singh, and Ruixun Zhang

Journal of Financial Data Science

abstract

In this paper, we create machine learning (ML) models to forecast home equity credit risk for individuals using a real-world dataset and demonstrate methods to explain the output of these ML models to make them more accessible to the end-user. We analyze the explainability of these models for various stakeholders: loan companies, regulators, loan applicants, and data scientists, incorporating their different requirements with respect to explanations. For loan companies, we generate explanations for every model prediction of creditworthiness. For regulators, we perform a stress test for extreme scenarios. For loan applicants, we generate diverse counterfactuals to guide them with steps to reverse the model's classification. Finally, for data scientists, we generate simple rules that accurately explain 70-72% of the dataset. Our work is intended to accelerate the adoption of ML techniques in domains that would benefit from explanations of their predictions.
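A toy illustration of one of the explainability tools mentioned above: counterfactual explanations for rejected applicants. The Python sketch below is not taken from the paper; the synthetic features, the logistic-regression model, and the greedy search direction are all illustrative assumptions. It simply nudges a rejected applicant's features until the predicted decision flips, which is the kind of actionable guidance the abstract describes for loan applicants.

# Hypothetical sketch: toy counterfactual search for a credit model.
# Feature semantics, model, and search procedure are illustrative only,
# not the method used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "credit" data: three standardized features, binary approve/reject label.
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.5, -2.0, -1.0]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.1, max_iter=200):
    """Greedily move the feature vector until the predicted class flips to 1 (approve)."""
    x_cf = x.copy()
    direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == 1:
            return x_cf
        x_cf = x_cf + step * direction  # step toward the approval region
    return x_cf

# Explain one rejected applicant: which feature changes would reverse the decision?
rejected = X[model.predict(X) == 0][0]
print("original:      ", np.round(rejected, 2))
print("counterfactual:", np.round(counterfactual(rejected, model), 2))

The paper generates diverse counterfactuals; this single greedy path is only meant to convey the idea of translating a model's decision boundary into concrete steps an applicant could take.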