research
working papers
Trying Lemons: Adverse Selection with Experimentation
joint with Tan Gan
draft available upon request
abstract
We consider a dynamic informed-principal problem in which a buyer privately receives lump-sum payoffs from a seller's service, arriving at an exponential rate if and only if the service is good. The seller knows the exponential rate but not the lump-sum payoff of the service. The buyer is initially uninformed about both, but privately and perfectly learns both upon the first arrival of a lump-sum payoff. The
dynamic nature of the service enables learning-based discrimination, which mitigates
the adverse selection on the seller’s side but creates a multi-dimensional screening
problem on the buyer’s side. The optimal mechanism for the good-type seller is a
two-phase trial mechanism. In the first phase (the trial), the seller sells access to the
service at a discounted price such that any buyer purchases. In the second phase, the
seller sells the remainder of the service at a price lower than the Myerson price, and
only buyers who experience a high payoff from the trial purchase. Varying the prices
and the lengths of the two phases also achieves the Pareto frontier of the equilibrium
payoff set.
Sharing the Credit for Joint Research
draft available upon request
abstract
I consider a model of strategic experimentation where agents partake in a risky
research endeavor whose state is initially unknown. The state of the research project
is good or bad, with a good project yielding a breakthrough at an exponential rate
depending on effort. All agents' actions are observable, and the arrival of a breakthrough
grants the discovering agent (winner) an advantage in continuation payoffs relative to
the non-discoverers (losers). I characterize the first-best solution in the cooperative
problem, and show that the non-cooperative game is efficient if and only if the losers
are indifferent between losing and the status quo. Using this result, I show that a
relatively simple class of contracts that redistribute payoffs between the winner and
losers can restore efficiency. I then extend the key ideas to a joint research model where
the breakthrough cannot be attributed to any one agent; I show that while agreeing
to evenly split rewards is not efficient, conditioning the sharing contract on the share
of effort at the time of breakthrough is sufficient to restore efficiency. In particular,
efficient contracts must guarantee each agent exactly the opportunity cost of research,
regardless of effort, and condition the remainder of the reward on the instantaneous
effort share. Finally, I extend the main insights to environments where agents differ
in the resources available to invest in research.
Managed Campaigns and Data-Augmented Auctions for Digital Advertising
joint with Dirk Bergemann and Alessandro Bonatti, Accepted to EC'23
arXiv link
abstract
We develop an auction model for digital advertising. A monopoly platform has access to data on the value of the match between advertisers and consumers. The platform supports bidding with additional information and increases the feasible surplus for on-platform matches. Advertisers jointly determine their pricing strategy both on and off the platform, as well as their bidding for digital advertising on the platform. We compare a data-augmented second-price auction and a managed campaign mechanism. In the data-augmented auction, the advertisers' bids are informed by the platform's data regarding the value of the match. This results in a socially efficient allocation on the platform, but advertisers raise their product prices off the platform to be more competitive on the platform. As a consequence, the allocation off the platform is inefficient due to excessively high product prices. The managed campaign mechanism allows advertisers to submit budgets that are then transformed into matches and prices through an autobidding algorithm. Compared to the data-augmented second-price auction, the optimal managed campaign mechanism increases the revenue of the digital platform. Product prices off the platform rise and consumer surplus falls.
Maximizing the Effect of Altruism
joint with Nicole Immorlica and Brendan Lucier
draft available upon request
abstract
We study the problem faced by an altruistic donor who wants to allocate a fixed set of funds to increase the production of a good. The good is produced by one or more profit-maximizing firms that sell it in an imperfectly competitive downstream market. The firms have private types that determine their (convex and increasing) costs of production, whereas the market outcome and prices are publicly observed.
For the case of a single firm, we show that the altruist's optimal mechanism takes the form of a menu of subsidies that offer payments to the firm as a function of its change in production. The optimal solution exhibits pooling of the most efficient types of producer while separating the less efficient types. Offering a single menu item is optimal under sufficiently pessimistic beliefs. We further show that a per-unit subsidy is generically suboptimal. When there are multiple firms, we show that the altruist's impact is decreasing in the size of the competitive externalities that the firms exert on each other.
Explainable Machine Learning Models of Consumer Credit Risk
joint with Randall Davis, Andrew Lo, Sudhanshu Mishra, Arash Nourian, Manish Singh, and Ruixun Zhang
SSRN link
abstract
In this paper, we create machine learning (ML) models to forecast home equity credit risk for individuals using a real-world dataset and demonstrate methods to explain the output of these ML models to make them more accessible to the end-user. We analyze the explainability of these models for various stakeholders: loan companies, regulators, loan applicants, and data scientists, incorporating their different requirements with respect to explanations. For loan companies, we generate explanations for every model prediction of creditworthiness. For regulators, we perform a stress test for extreme scenarios. For loan applicants, we generate diverse counterfactuals to guide them with steps to reverse the model's classification. Finally, for data scientists, we generate simple rules that accurately explain 70-72% of the dataset. Our work is intended to accelerate the adoption of ML techniques in domains that would benefit from explanations of their predictions.