Current Projects

Strategic Model Releases in the Generative AI Industry

This project studies how leading AI firms time model releases in response to rival product improvements. I combine a simple dynamic model of leader–follower competition with empirical evidence from public model release dates and LMArena performance scores. The project asks whether frontier AI firms “follow” each other’s releases, how quality improvements affect competitive responses, and how public benchmarks shape strategic incentives.

Methods: innovation, dynamic games, benchmark data, Python, MATLAB.
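The empirical object at the heart of this project, how quickly one firm releases after a rival, can be sketched in a few lines. The firm names and dates below are hypothetical placeholders; real data would come from public release announcements and LMArena score histories.

```python
# Hedged sketch (hypothetical data): compute "response lags" -- days from
# each leader release to the follower's next release.
from datetime import date

# Hypothetical release calendars for two illustrative firms.
releases = {
    "FirmA": [date(2024, 2, 1), date(2024, 7, 15)],
    "FirmB": [date(2024, 3, 10), date(2024, 8, 1)],
}

def response_lags(leader, follower, releases):
    """Days from each leader release to the follower's next release."""
    lags = []
    for d in releases[leader]:
        later = [f for f in releases[follower] if f > d]
        if later:
            lags.append((min(later) - d).days)
    return lags

print(response_lags("FirmA", "FirmB", releases))  # → [38, 17]
```

Short lags that cluster after rivals' quality jumps would be consistent with firms "following" one another, which is what the dynamic leader–follower model is meant to rationalize.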

Benchmarking and Innovation in Generative AI

This project studies whether generative AI firms selectively report benchmark scores to compete, and whether existing AI benchmarks shape the direction of firms’ innovation. I examine the relationship between technical benchmark scores across different categories and human evaluations to assess whether frontier AI firms respond strategically to benchmark incentives by competing on, adopting, and selectively reporting benchmark results. More broadly, this research asks what an optimal benchmarking structure should look like in the generative AI industry.

Methods: empirical IO, innovation, AI benchmarking, Python.
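A minimal version of the benchmark-versus-human-evaluation comparison is a rank correlation between the two score series. The model scores below are hypothetical placeholders, and Spearman's rho is just one illustrative choice of statistic.

```python
# Hedged sketch (hypothetical scores): compare a technical benchmark
# ranking with a human-evaluation ranking via Spearman rank correlation.

def rank(values):
    """Ranks (1 = highest); assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rho for tie-free data: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Four hypothetical models: benchmark score vs. human preference rating.
benchmark = [88.1, 85.4, 90.2, 79.0]
human = [1210, 1250, 1195, 1150]  # e.g. arena-style Elo ratings
print(spearman(benchmark, human))
```

A low correlation in some benchmark categories, but not others, would be one signature of selective reporting or benchmark-directed innovation.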

AI, Productivity, and Skill Formation

This project examines whether AI assistance improves short-run productivity while potentially changing long-run skill formation. I am interested in heterogeneity between workers who use AI as augmentation and workers who become dependent on AI for core task execution. The broader goal is to understand when AI tools complement human capital accumulation versus when they may substitute for learning, and what the implications are for future labor markets.

Methods: labor economics, education, skill formation, experimental and quasi-experimental design.

Research Themes

  • Measuring AI capabilities and adoption using public data
  • Understanding how AI changes labor markets and skill formation
  • Modeling strategic behavior among frontier AI firms
  • Translating empirical findings into actionable recommendations for AI deployment and policy