nicholas wu

research

job market paper

Solving Problems of Unknown Difficulty
link to paper

abstract

This paper studies how uncertainty about problem difficulty shapes problem-solving strategies. I develop a dynamic model in which an agent solves a problem by brainstorming approaches of unknown quality and allocating a fixed effort budget among them. Success arrives from spending effort on good approaches, at a rate determined by the unknown problem difficulty. The agent balances costly exploration (expanding the set of approaches) against exploitation (pursuing existing approaches). Failures can signal either a bad approach or a hard problem, and this ambiguity generates novel dynamics: optimal search alternates between trying new approaches and revisiting previously abandoned ones. I then examine a principal–agent environment in which moral hazard arises on the intensive margin: how the agent explores. Dynamic commitment leads contracts to frontload incentives, an effect that learning can counteract. The framework applies to scientific discovery, product development, and other creative work, offering new insights into innovation and organizational design.

published papers (economics)

From Doubt to Devotion: Trials and Learning-Based Pricing

joint with Tan Gan

accepted, Journal of Political Economy

presented at SITE 2023; an extended abstract appears in EC'24.

link to paper

abstract

An informed seller designs a dynamic mechanism to sell an experience good. The seller has partial information about the product match, which affects the buyer's private consumption experience. We characterize equilibrium mechanisms of this dynamic informed principal problem. The belief gap between the informed seller and the uninformed buyer, coupled with the buyer's learning, gives rise to mechanisms that provide the skeptical buyer with limited access to the product and an option to upgrade if the buyer is swayed by a good experience. Depending on the seller's screening technology, this takes the form of free/discounted trials or tiered pricing, which are prevalent in digital markets. In contrast to static environments, having consumer data can reduce sellers' revenue in equilibrium, as they fine-tune the dynamic design with their data forecasting the buyer's learning process.

How Do Digital Advertising Auctions Impact Product Prices?

joint with Dirk Bergemann and Alessandro Bonatti

an extended abstract appears in EC'23.

Review of Economic Studies

abstract

We present a model of digital advertising with three key features: (i) advertisers can reach consumers on and off a platform, (ii) additional data enhances the value of advertiser-consumer matches, and (iii) bidding follows auction-like mechanisms. We contrast data-augmented auctions, which leverage the platform’s data advantage to improve match quality, with managed campaign mechanisms, which automate match formation and price-setting. The platform-optimal mechanism is a managed campaign that conditions on-platform prices for sponsored products on the off-platform prices set by all advertisers. This mechanism yields the efficient on-platform allocation but inefficient off-platform allocations due to high product prices; it attains the vertical integration profit for the platform and advertisers, and it increases off-platform product prices and decreases consumer surplus, relative to data-augmented auctions.

Bidding with Budgets: Data-Driven Bid Algorithms in Digital Advertising

joint with Dirk Bergemann and Alessandro Bonatti

International Journal of Industrial Organization

abstract

In digital advertising, auctions determine the allocation of sponsored search, sponsored product, or display advertisements. The bids in these auctions for attention are largely generated by auto-bidding algorithms that are driven by platform-provided data. We analyze the equilibrium properties of a sequence of increasingly sophisticated auto-bidding algorithms. First, we consider the equilibrium bidding behavior of an individual advertiser who controls the auto-bidding algorithm through the choice of their budget. Second, we examine the interaction when all bidders use budget-controlled bidding algorithms. Finally, we derive the bidding algorithm that maximizes the platform revenue while ensuring that all advertisers continue to participate.

working papers

Sharing Credit for Joint Research

submitted

arXiv link

abstract

How closely should one monitor their collaborators when participating in risky research? First, I show that efficiency can be achieved by allocating payoffs asymmetrically between the researcher who makes a breakthrough (the "winner") and the others, even if agents cannot observe each other's effort. Second, when the winner's identity is non-contractible, allocating credit based on effort at the time of breakthrough also suffices to achieve efficiency; that is, the terminal effort profile, rather than the full history of effort, is a sufficient statistic. These findings suggest that simple mechanisms using minimal information are robust and effective in addressing inefficiencies in strategic experimentation.

Maximal Procurement Under a Budget

joint with Nicole Immorlica and Brendan Lucier

presented at the NBER Market Design Working Group Meeting, Fall 2023

arXiv link

abstract

We study the problem of a principal who wants to influence an agent's observable action, subject to an ex-post budget. The agent has a private type that determines their cost function. This paper endogenizes the value of the resource driving incentives: the resource has no inherent value but is available only in finite supply. We characterize the optimal mechanism, showing the emergence of a pooling region in which the budget constraint binds for low-cost types. We then introduce a linear value for the transferable resource; as the principal's value increases, the mechanism demands more from agents with a binding budget constraint but less from the others.

Return Policies under Moral Hazard

draft coming soon.

abstract

How should return policies be designed to incentivize quality provision when product durability is unobservable and buyers can strategically exploit generous return policies? A monopolist seller chooses the quality of a durable good and offers a contract consisting of a price and a return policy, which specifies refunds as a function of return time. Quality is costly and affects the stochastic time to product failure (breakdown). Return policies serve as commitment devices: without them, the seller supplies only minimal quality. When breakdown is verifiable by the seller, the optimal return policy is a warranty that offers full refunds up to a deadline. In other settings, breakdown may not be easily verifiable by the seller; for example, the buyer may be able to undetectably sabotage the product and fake a breakdown. In contrast to the verifiable case, when breakdowns are unverifiable, the optimal return policy becomes a rent-to-own contract with linearly declining refunds, which balances the seller’s incentive to provide quality against the buyer’s incentive to exploit the refund. Introducing costly verification unifies these cases: the optimal policy interpolates between warranty and rent-to-own as verification costs vary. The results provide a framework for studying the central dynamic tradeoff in return policy design: sustaining seller incentives for quality while limiting potential buyer abuse.

other publications

Explainable Machine Learning Models of Consumer Credit Risk

with Randall Davis, Andrew Lo, Sudhanshu Mishra, Arash Nourian, Manish Singh, and Ruixun Zhang

Journal of Financial Data Science

abstract

In this paper, we create machine learning (ML) models to forecast home equity credit risk for individuals using a real-world dataset, and we demonstrate methods to explain the output of these ML models to make them more accessible to end users. We analyze the explainability of these models for various stakeholders: loan companies, regulators, loan applicants, and data scientists, incorporating their different requirements with respect to explanations. For loan companies, we generate explanations for every model prediction of creditworthiness. For regulators, we perform a stress test for extreme scenarios. For loan applicants, we generate diverse counterfactuals to guide them with steps to reverse the model's classification. Finally, for data scientists, we generate simple rules that accurately explain 70–72% of the dataset. Our work is intended to accelerate the adoption of ML techniques in domains that would benefit from explanations of their predictions.