An informed seller designs a dynamic mechanism to sell an experience good. The
seller has partial information about the product match, which affects the buyer’s private consumption experience. We characterize equilibrium mechanisms of this dynamic
informed principal problem. The belief gap between the informed seller and the uninformed buyer, coupled with the buyer’s learning, gives rise to mechanisms that provide
the skeptical buyer with limited access to the product and an option to upgrade if the
buyer is swayed by a good experience. Depending on the seller’s screening technology,
this takes the form of free/discounted trials or tiered pricing, which are prevalent in
digital markets. In contrast to static environments, having consumer data can reduce
sellers’ revenue in equilibrium, as sellers use data that forecasts the buyer’s learning
process to fine-tune the dynamic design.
I consider a model of strategic experimentation where agents partake in risky research with payoff externalities; a breakthrough grants the discoverer ("winner") a different payoff than the non-discoverers ("losers"). I characterize the first-best solution and show that the noncooperative game is generically inefficient. Simple contracts sharing payoffs between winner and losers restore efficiency even when actions are unobserved. Alternatively, if the winner's identity is not contractible, contracting on effort only at the time of breakthrough also restores efficiency. These results suggest that although strategic experimentation generically entails inefficiency, sharing credit is a robust and effective remedy.
We ask how the advertising mechanisms of digital platforms impact product prices.
We present a model that integrates three fundamental features of digital advertising
markets: (i) advertisers can reach customers on and off-platform, (ii) additional data
enhances the value of matching advertisers and consumers, and (iii) bidding follows
auction-like mechanisms. We compare data-augmented auctions, which leverage the
platform’s data advantage to improve match quality, with managed campaign
mechanisms, where advertisers’ budgets are transformed into personalized matches and prices
through auto-bidding algorithms.
In data-augmented second-price auctions, advertisers increase off-platform product
prices to boost their competitiveness on-platform. This leads to socially efficient
allocations on-platform, but inefficient allocations off-platform due to high product prices.
The platform-optimal mechanism is a sophisticated managed campaign that conditions
on-platform prices for sponsored products on off-platform prices set by all advertisers.
Relative to auctions, the optimal managed campaign raises off-platform product prices
and further reduces consumer surplus.
presented at the NBER Market Design Working Group Meeting, Fall 2023
draft available upon request
We study the problem faced by a principal who wants to allocate a budget to influence an agent's action. The agent has a private type that determines their costs, and their action is publicly observed. This paper introduces a new perspective by endogenizing the value of the resource that drives incentives; the resource holds no inherent worth but is restricted by finite availability. In the base model, we characterize the optimal mechanism, showing the emergence of a pooling region in which the principal gives the entire budget to sufficiently low-cost types. We describe the threshold governing this region, show that the mechanism never excludes agents, and highlight the role of shadow costs. We then reintroduce a linear resource valuation; as the principal's resource value increases, the principal demands more from agents in the pooling region but less from all others. Our findings contribute to understanding mechanism design in resource-constrained settings, with potential applications in environmental policies, effective altruism, and non-monetary incentives.
In this paper, we build machine learning (ML) models to forecast home equity credit risk for individuals using a real-world dataset and demonstrate methods to explain the output of these models, making them more accessible to end-users. We analyze the explainability of these models for various stakeholders: loan companies, regulators, loan applicants, and data scientists, incorporating their differing requirements with respect to explanations. For loan companies, we generate explanations for every model prediction of creditworthiness. For regulators, we perform a stress test for extreme scenarios. For loan applicants, we generate diverse counterfactuals to guide them with steps to reverse the model's classification. Finally, for data scientists, we generate simple rules that accurately explain 70-72% of the dataset. Our work is intended to accelerate the adoption of ML techniques in domains that would benefit from explanations of their predictions.