
1 Econometric Theory for Games, Part 1: Introduction to Econometrics and Econometrics of Discrete Bayesian Games. Vasilis Syrgkanis, Microsoft Research New England

2 Outline of Tutorial
Day 1:
- Brief primer on econometric theory
- Estimation in static games of incomplete information: two-stage estimators
- Markovian dynamic games of incomplete information
Day 2:
- Discrete static games of complete information: multiplicity of equilibria and set inference
Day 3:
- Auction games: identification and estimation in first-price auctions with independent private values
- Algorithmic game theory and econometrics
- Mechanism design for data science
- Econometrics for learning agents

3 A Primer on Econometric Theory: Basic Tools and Terminology

4 Econometric Theory
Given a sequence of i.i.d. data points Z_1, …, Z_n, each Z_i is the outcome of some structural model: Z_i ∼ D(θ_0), with θ_0 ∈ Θ.
The parameter space Θ can be:
- Finite-dimensional (e.g. R^d): parametric model
- Infinite-dimensional (e.g. a function): non-parametric model
- A mixture of finite and infinite:
  - If we are interested only in the parametric part: semi-parametric
  - If we are interested in both: semi-nonparametric

5 Main Goals
- Identification: if we knew the "population distribution" D(θ_0), could we pin down θ_0?
- Estimation: devise an algorithm that outputs an estimate θ̂_n of θ_0 from n samples

6 Estimator Properties of Interest
Finite-sample properties of estimators:
- Bias: is E[θ̂_n] − θ_0 = 0?
- Variance: what is Var(θ̂_n)?
- Mean squared error (MSE): E[(θ̂_n − θ_0)²] = Variance + Bias²
Large-sample properties (n → ∞):
- Consistency: does θ̂_n →_p θ_0?
- Asymptotic normality: does a_n(θ̂_n − θ_0) →_d N(0, V)?
- √n-consistency: can we take a_n = √n?
- Efficiency: is the limit variance V information-theoretically optimal? (typically achieved by the MLE)
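The decomposition MSE = Variance + Bias² can be verified by a quick Monte Carlo. This is a sketch with an assumed distribution and parameter values (not from the slides), comparing the unbiased sample mean with a deliberately biased, shrunk estimator:

```python
import numpy as np

# Monte Carlo check of MSE = Variance + Bias^2 for two estimators of the
# mean theta_0 of a N(theta_0, 1) population: the sample mean (unbiased)
# and a deliberately shrunk estimator (biased toward zero).
rng = np.random.default_rng(0)
theta0, n, reps = 2.0, 50, 20_000

draws = rng.normal(theta0, 1.0, size=(reps, n))
est_unbiased = draws.mean(axis=1)      # theta_hat = sample mean
est_shrunk = 0.9 * est_unbiased        # biased estimator

for est in (est_unbiased, est_shrunk):
    bias = est.mean() - theta0
    var = est.var()
    mse = np.mean((est - theta0) ** 2)
    # the decomposition holds exactly within the Monte Carlo sample
    assert abs(mse - (var + bias ** 2)) < 1e-8
```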

7 General Classes of Estimators
Generalized Method of Moments (GMM): suppose that in the population E[m(z, θ_0)] = 0. Then θ̂_n is the solution to:
(1/n) Σ_i m(z_i, θ̂_n) = 0
Example (linear regression): y = z·θ + ε, with the moment condition E[z(y − z·θ_0)] = 0.
Empirical analogue:
(1/n) Σ_i z_i z_i^T θ̂_n = (1/n) Σ_i z_i y_i ⇔ θ̂_n = (Z Z^T)^{-1} Z y
where Z = [z_1 … z_n] is the matrix with columns z_i (the OLS estimate).
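A minimal sketch of the empirical moment condition for OLS, with simulated data and assumed coefficients (the variable names are for illustration only):

```python
import numpy as np

# OLS as the empirical analogue of the moment condition E[z (y - z.theta)] = 0.
rng = np.random.default_rng(1)
n, d = 500, 3
theta0 = np.array([1.0, -2.0, 0.5])

Z = rng.normal(size=(d, n))               # columns are the z_i vectors
y = Z.T @ theta0 + rng.normal(size=n)     # y_i = z_i . theta0 + eps_i

# Solve (1/n) sum z_i z_i^T theta = (1/n) sum z_i y_i
theta_hat = np.linalg.solve(Z @ Z.T, Z @ y)

# The empirical moments vanish at theta_hat: (1/n) sum z_i (y_i - z_i.theta_hat) = 0
moments = Z @ (y - Z.T @ theta_hat) / n
assert np.allclose(moments, 0.0, atol=1e-10)
```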

8 General Classes of Estimators
Extremum estimator: suppose we know that θ_0 = argmin_{θ ∈ Θ} Q_0(θ). Then:
θ̂_n = argmin_{θ ∈ Θ} Q_n(θ)
Examples:
- MLE: Q_n(θ) = −(1/n) Σ_{i=1}^n ln f(z_i; θ)
- Overidentified GMM estimator: suppose that in the population E[m(z, θ_0)] = 0. Then, for any positive definite W:
θ_0 = argmin_θ ‖E[m(z, θ)]‖_W, where ‖E[m(z, θ)]‖_W = E[m(z, θ)]^T W E[m(z, θ)]
Q_n(θ) = ((1/n) Σ_i m(z_i, θ))^T W ((1/n) Σ_i m(z_i, θ))
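A minimal extremum-estimator sketch, using an assumed Exponential(rate) model so the numerical argmin of Q_n can be checked against the closed-form MLE 1/mean(z):

```python
import numpy as np
from scipy.optimize import minimize

# MLE as an extremum estimator: minimize Q_n(theta) = -(1/n) sum ln f(z_i; theta)
# for an Exponential(rate) model, f(z; rate) = rate * exp(-rate * z).
rng = np.random.default_rng(2)
z = rng.exponential(scale=1 / 3.0, size=2000)   # true rate = 3

def Q_n(theta):
    rate = theta[0]
    if rate <= 0:
        return np.inf                           # stay inside the parameter space
    return -np.mean(np.log(rate) - rate * z)    # minus the average log-likelihood

rate_hat = minimize(Q_n, x0=[1.0], method="Nelder-Mead").x[0]
assert abs(rate_hat - 1 / z.mean()) < 1e-2      # matches the closed-form MLE
```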

9 Consistency of Extremum Estimators
Consistency Theorem. If there is a function Q_0(θ) such that:
1. Q_0(θ) is uniquely minimized at θ_0
2. Q_0(θ) is continuous (and Θ is compact)
3. Q_n(θ) converges uniformly in probability to Q_0(θ), i.e. sup_θ |Q_n(θ) − Q_0(θ)| →_p 0
then θ̂_n →_p θ_0.
If Q_n(θ) = (1/n) Σ_i g(z_i, θ) and Q_0(θ) = E[g(z, θ)], then (2., 3.) are satisfied if:
- g(z, θ) is continuous in θ
- |g(z, θ)| ≤ d(z) with E[d(z)] < ∞
These conditions are typically referred to as "regularity conditions".

10 Asymptotic Normality
Under "regularity conditions", asymptotic normality of extremum estimators follows from a uniform law of large numbers, the CLT, Slutsky's theorem, and consistency.
Roughly, consider the case Q_n(θ) = (1/n) Σ_i g(z_i, θ):
- Take the first-order condition: (1/n) Σ_i ∇_θ g(z_i, θ̂) = 0
- Linearize around θ_0 by the mean value theorem: (1/n) Σ_i ∇_θ g(z_i, θ_0) + [(1/n) Σ_i ∇_θθ g(z_i, θ̄)](θ̂ − θ_0) = 0
- Re-arrange: √n(θ̂ − θ_0) = −[(1/n) Σ_i ∇_θθ g(z_i, θ̄)]^{-1} · (1/√n) Σ_i ∇_θ g(z_i, θ_0)
where (1/n) Σ_i ∇_θθ g(z_i, θ̄) →_p E[∇_θθ g(z, θ_0)] and (1/√n) Σ_i ∇_θ g(z_i, θ_0) →_d N(0, Var(∇_θ g(z, θ_0))), so that √n(θ̂ − θ_0) →_d N(0, U).
In practice, the variance is typically computed via the bootstrap [Efron '79]: re-sample from your samples with replacement and compute the empirical variance.
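A minimal sketch of the bootstrap variance computation, using the sample mean as the estimator (an assumed choice for illustration, since its variance has the analytic benchmark Var(z)/n to compare against):

```python
import numpy as np

# Bootstrap variance of the sample-mean estimator [Efron '79]:
# re-sample with replacement, recompute the estimator, take the empirical variance.
rng = np.random.default_rng(3)
n = 200
z = rng.normal(5.0, 2.0, size=n)
theta_hat = z.mean()

B = 5000
boot = np.empty(B)
for b in range(B):
    resample = rng.choice(z, size=n, replace=True)   # sample WITH replacement
    boot[b] = resample.mean()
var_boot = boot.var()

# Analytic benchmark for this estimator: Var(mean) = Var(z) / n
assert abs(var_boot - z.var() / n) < 0.005
```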

11 Econometric Theory for Games

12 Econometric Theory for Games
- Z_i are observable quantities from a game being played
- θ_0: unobserved parameters of the game
- Goal: address identification and estimation in a variety of game-theoretic models, assuming players play according to some equilibrium notion

13 Why Useful?
- Scientific: estimate economically meaningful quantities
- Counterfactual analysis: what would happen if we changed the game?
- Performance measures: welfare, revenue
- Testing game-theoretic models: if the theory, evaluated at the estimated quantities, predicts behavior different from what is observed, then the model is in trouble

14 Incomplete Information Games and Two-Stage Estimators
Static games: [Bajari-Hong-Krainer-Nekipelov '12]
Dynamic games: [Bajari-Benkard-Levin '07], [Pakes-Ostrovsky-Berry '07], [Aguirregabiria-Mira '07], [Ackerberg-Benkard-Berry-Pakes '07], [Bajari-Hong-Chernozhukov-Nekipelov '09]

15 High-Level Idea
At equilibrium, agents hold beliefs about the other players' actions and best respond to those beliefs. If the econometrician observes the same information about opponents as each player does, then:
1. Estimate these beliefs from the data in a first stage
2. Apply the best-response inequalities to these estimated beliefs in a second stage, and infer the parameters of the utility

16 Static Entry Game with Private Shocks
Two firms decide whether to enter a market; the entry decision is y_i ∈ {0, 1}. Profits from entry:
π_1 = x·β_1 + y_2 δ_1 + ε_1
π_2 = x·β_2 + y_1 δ_2 + ε_2
- ε_i ∼ F_i: drawn i.i.d. at each market from a known distribution, private to player i
- x: observable characteristics of each market
- β_i, δ_i: constants across markets

17 Static Entry Game with Private Shocks
Firms best respond only in expectation. Expected profits from entry:
Π_1 = x·β_1 + Pr[y_2 = 1 | x] δ_1 + ε_1
Π_2 = x·β_2 + Pr[y_1 = 1 | x] δ_2 + ε_2
Let σ_i(x) = Pr[y_i = 1 | x]. Then:
σ_1(x) = Pr[x·β_1 + σ_2(x) δ_1 + ε_1 > 0]
σ_2(x) = Pr[x·β_2 + σ_1(x) δ_2 + ε_2 > 0]

18 Static Entry Game with Private Shocks
If ε_i is distributed according to an extreme value distribution:
σ_1(x) ∝ exp[x·β_1 + σ_2(x) δ_1]
σ_2(x) ∝ exp[x·β_2 + σ_1(x) δ_2]
This is a non-linear system of simultaneous equations: computing the fixed point is computationally heavy, and the fixed point might not be unique.
Idea [Hotz-Miller '93, Bajari-Benkard-Levin '07, Pakes-Ostrovsky-Berry '07, Aguirregabiria-Mira '07, Bajari-Hong-Chernozhukov-Nekipelov '09]: use a two-stage estimator:
1. Compute a non-parametric estimate σ̂_i(x) of the function σ_i(x) from the data
2. Run a parametric regression for each agent individually, from the condition σ_i(x) ∝ exp[x·β_i + σ̂_{−i}(x) δ_i]
The latter is a simple logistic regression for each player, which estimates β_i, δ_i.
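A toy end-to-end sketch of the two-stage estimator. The setup is assumed for illustration: a single binary market characteristic x ∈ {0, 1} and made-up parameter values; the equilibrium fixed point is solved only to simulate data, and the estimator itself never computes a fixed point:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
beta = np.array([0.5, 0.3])        # beta_1, beta_2 (assumed)
delta = np.array([-1.0, -0.8])     # delta_1, delta_2 (entry deterrence, assumed)

def logistic(u):
    return 1 / (1 + np.exp(-u))

def equilibrium(x):
    # sigma_i(x) = logistic(x*beta_i + sigma_{-i}(x)*delta_i), solved by iteration
    s = np.array([0.5, 0.5])
    for _ in range(500):
        s = logistic(x * beta + s[::-1] * delta)
    return s

T = 50_000
xi = rng.integers(0, 2, size=T)                      # market characteristics
sigma = np.array([equilibrium(0.0), equilibrium(1.0)])
y = (rng.random((T, 2)) < sigma[xi]).astype(float)   # observed entry decisions

# First stage: non-parametric (frequency) estimate of sigma_i(x)
sigma_hat = np.array([[y[xi == v, i].mean() for i in range(2)] for v in (0, 1)])

# Second stage: logistic regression of y_i on (x, sigma_hat_{-i}(x)), per player
theta_hat = []
for i in range(2):
    feats = np.column_stack([xi.astype(float), sigma_hat[xi, 1 - i]])
    def nll(theta):
        p = np.clip(logistic(feats @ theta), 1e-10, 1 - 1e-10)
        return -np.mean(y[:, i] * np.log(p) + (1 - y[:, i]) * np.log(1 - p))
    theta_hat.append(minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead").x)

beta_hat = np.array([t[0] for t in theta_hat])
delta_hat = np.array([t[1] for t in theta_hat])
```

With 50,000 simulated markets the recovered (β̂_i, δ̂_i) land close to the truth; the first-stage sampling error still propagates into the second stage, which is exactly the variance question the following slides address.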

19 Simple Case: Finite Discrete States
If there are d states, then the σ_i are d-dimensional parameter vectors, and there are easy √n-consistent first-stage estimators σ̂ = (σ̂_1, σ̂_2) of σ = (σ_1, σ_2), i.e.:
√n(σ̂ − σ) →_d N(0, V)
Suppose for the second stage we use a generalized method of moments estimator:
- Let θ = (β_1, β_2, δ_1, δ_2), with true value θ_0
- Let y_t = (y_{1t}, y_{2t}) and Γ(x, σ, θ) = (Γ_1(x, σ, θ), Γ_2(x, σ, θ)), with
Γ_i(x, σ, θ) = exp(x·β_i + σ_{−i} δ_i) / (1 + exp(x·β_i + σ_{−i} δ_i))
- Then the second-stage estimator θ̂ is the solution to:
(1/n) Σ_{t=1}^n A(x_t) · (y_t − Γ(x_t, σ̂, θ̂)) = 0
Does the first-stage error affect the second-stage variance, and how? This is a general question about two-stage estimators.

20 Two-Stage GMM with √n-Consistent First Stage
[Newey-McFadden '94: Large Sample Estimation and Hypothesis Testing]
- Run a first-stage estimator σ̂ of σ, with √n(σ̂ − σ) →_d N(0, V)
- The second stage is a GMM estimator based on the moment conditions E[m(z, θ, σ)] = 0, i.e. θ̂ satisfies:
(1/n) Σ_{t=1}^n m(z_t, θ̂, σ̂) = 0
- Linearize around θ:
√n(θ̂ − θ) = −[(1/n) Σ_{t=1}^n ∂m(z_t, θ̄, σ̂)/∂θ]^{-1} · (1/√n) Σ_{t=1}^n m(z_t, θ, σ̂)
- Now the second term can be linearized around σ:
(1/√n) Σ_{t=1}^n m(z_t, θ, σ̂) = (1/√n) Σ_{t=1}^n m(z_t, θ, σ) + [(1/n) Σ_{t=1}^n ∂m(z_t, θ, σ̄)/∂σ] · √n(σ̂ − σ)
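Combining the two linearizations gives the standard conclusion of this argument (following Newey-McFadden '94; here G_θ and G_σ denote the expected Jacobians, assumed to exist with G_θ invertible):

```latex
\sqrt{n}\,(\hat\theta-\theta_0)
  = -\,G_\theta^{-1}\left(\frac{1}{\sqrt{n}}\sum_{t=1}^n m(z_t,\theta_0,\sigma)
      + G_\sigma\,\sqrt{n}\,(\hat\sigma-\sigma)\right) + o_p(1),
\qquad
G_\theta = E\!\left[\frac{\partial m}{\partial \theta}\right],\quad
G_\sigma = E\!\left[\frac{\partial m}{\partial \sigma}\right].
```

If additionally √n(σ̂ − σ) = (1/√n) Σ_t ψ(z_t) + o_p(1) for some influence function ψ, the CLT gives √n(θ̂ − θ_0) →_d N(0, G_θ^{-1} Var(m + G_σ ψ) G_θ^{-T}): the first-stage error enters the second-stage variance through G_σ.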

21 Continuous State Space and Semi-Parametric Efficiency

22 Continuous State Space: x ∈ R^d
[Bajari-Hong-Krainer-Nekipelov '12]
Now there is no √n-consistent first-stage non-parametric estimator σ̂(·) of the function σ(·) = E[y | x]. Remarkably, we still get √n-consistency for the second-stage estimate θ̂!
For instance: a kernel estimator for the first stage (tune the bandwidth, "undersmoothing") and GMM for the second stage.
Intuition:
- Kernel estimators have a tunable bias-variance tradeoff
- Close to the true θ, the first-stage bias and variance affect the second-stage estimate linearly
- If the variance and bias decay at n^{-1/2} rates we are fine; this requires at least n^{-1/4}-consistency of the first stage
- Typically we have wiggle room to keep the variance of that order while making the bias decay at an n^{-1/2} rate (e.g. by decreasing the bandwidth of a kernel estimate)

23 Semi-Parametric Two-Stage Estimation
[Newey-McFadden '94, Chernozhukov et al '16]
- The second stage is a GMM estimator based on the moment conditions E[m(Z, θ, σ(X))] = 0, i.e. θ̂ satisfies:
(1/n) Σ_{t=1}^n m(z_t, θ̂, σ̂(x_t)) = 0
- Linearize around θ:
√n(θ̂ − θ) = −[(1/n) Σ_{t=1}^n ∂m(z_t, θ̄, σ̂(x_t))/∂θ]^{-1} · (1/√n) Σ_{t=1}^n m(z_t, θ, σ̂(x_t))
- Second-order expansion of the second term around the true σ:
(1/√n) Σ_t m(z_t, θ, σ̂(x_t)) = (1/√n) Σ_t m(z_t, θ, σ(x_t)) + (1/√n) Σ_t [∂m(z_t, θ, σ(x_t))/∂σ] (σ̂(x_t) − σ(x_t)) + (1/(2√n)) Σ_t [∂²m(z_t, θ, σ̄(x_t))/∂σ²] (σ̂(x_t) − σ(x_t))²
- Sample splitting: estimate σ̂ on a separate dataset from the one used for θ̂; this makes σ̂ independent of the x_t

24 Semi-Parametric Two-Stage Estimation
[Newey-McFadden '94]
It suffices to control √n·Bias(σ̂) and √n·MSE(σ̂). Kernel estimators have a tunable bias-variance tradeoff. For instance, if σ is a single-dimensional density function and we use a kernel estimator:
σ̂(x) = (1/(nh)) Σ_{t=1}^n K((x − x_t)/h)
- Bias(σ̂) ≈ h²
- Variance(σ̂) ≈ 1/(nh)
- MSE = Bias(σ̂)² + Variance(σ̂) ≈ h⁴ + 1/(nh)
The MSE-optimal bandwidth is h ≈ n^{-1/5} ⇒ MSE ≈ n^{-4/5}. But then √n·Bias(σ̂) ≈ n^{1/2}·n^{-2/5} = n^{1/10} → ∞.
Undersmoothing: decay the bandwidth faster than the MSE-optimal rate, e.g. h ≈ n^{-1/3}:
- √n·Bias(σ̂) ≈ n^{1/2}·n^{-2/3} = n^{-1/6} → 0
- √n·MSE(σ̂) ≈ n^{1/2}·(n^{-4/3} + n^{-2/3}) ≈ n^{-1/6} → 0
For a detailed exposition see: [Newey '94, Ai-Chen '03], Section 8.3 of the survey [Newey-McFadden '94], and Han Hong's lecture notes on semi-parametric efficiency [ECO276, Stanford].
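The bandwidth arithmetic above can be checked numerically. A minimal sketch, with an assumed setup: Gaussian kernel, N(0,1) data, density evaluated at x = 0, and the undersmoothed bandwidth h = n^{-1/3}:

```python
import numpy as np

rng = np.random.default_rng(5)
true_f0 = 1 / np.sqrt(2 * np.pi)   # N(0,1) density at x = 0

def kde_at_zero(z, h):
    # sigma_hat(0) = (1/(n h)) sum_t K((0 - z_t)/h), with a Gaussian kernel K
    u = z / h
    return np.mean(np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)) / h

reps = 2000
results = {}
for n in (200, 1600):
    h = n ** (-1 / 3)              # undersmoothed bandwidth
    ests = np.array([kde_at_zero(rng.normal(size=n), h) for _ in range(reps)])
    results[n] = (ests.mean() - true_f0, ests.var())   # Monte Carlo (bias, variance)

# Bias shrinks like h^2 and variance like 1/(n h) (up to constants),
# so both sqrt(n)*Bias and sqrt(n)*MSE vanish as n grows.
```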

25 General Dynamic Games [Bajari-Benkard-Levin’07], [Pakes-Ostrovsky-Berry’07], [Aguirregabiria-Mira’07], [Ackerberg-Benkard-Berry-Pakes’07], [Bajari-Hong-Chernozhukov-Nekipelov’09]

26 Steady-State Markovian Dynamic Games
In each period t, with current state s_t:
1. Private shocks ε_{it} are drawn i.i.d., independent of the state, and are private information to each player
2. Each player i picks an action a_{it} = σ_i(s_t, ε_{it}) based on the current state and their private shock
3. Each player receives payoff π_i(a_t, s_t, ε_{it}) = π_i(a_t, s_t) + ε_{it}(a_{it})
4. The state probabilistically transitions to the next state s_{t+1}, based on the prior state and the action profile
Steady-state policy: a time-independent mapping σ from (state, shock) pairs to actions. Discounted payoff:
V_i(s; σ, θ) = E[Σ_{t=0}^T β^t π_i(σ(s_t, ε_t), s_t, ε_{it}) | s_0 = s; θ]
Let v_i(a_i, s) be the "shockless" discounted expected equilibrium payoff from action a_i at state s. Markov perfect equilibrium: player i chooses action a_i if, for all a_i':
v_i(a_i, s) + ε_i(a_i) ≥ v_i(a_i', s) + ε_i(a_i')

27 Dynamic Games: First Stage
[Bajari-Benkard-Levin '07]
Let P_i(a_i | s) be the probability of playing action a_i conditional on state s. Suppose the ε_i are extreme value and normalize v_i(0, s) = 0. Then:
log P_i(a_i | s) − log P_i(0 | s) = v_i(a_i, s)
- Non-parametrically estimate P_i(a_i | s)
- Invert to get the estimate v̂_i(a_i, s) = log P̂_i(a_i | s) − log P̂_i(0 | s)
- This gives a non-parametric first-stage estimate of the policy function: σ̂_i(s, ε_i) = argmax_{a_i ∈ A_i} v̂_i(a_i, s) + ε_i(a_i)
- Combine with a non-parametric estimate of the state transition probabilities
- Compute a non-parametric estimate of the discounted payoff for each (policy, state, parameter) tuple, V̂_i(σ, s; θ), by forward simulation
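The inversion step can be sketched in a toy single-state example (the action values v are assumed for illustration): with extreme-value shocks, choice probabilities are multinomial logit in v_i(a, s), so log-probability differences recover v_i(a, s) under the normalization v_i(0, s) = 0:

```python
import numpy as np

rng = np.random.default_rng(6)
v_true = np.array([0.0, 1.0, -0.5])            # v(a, s) for actions 0, 1, 2
P = np.exp(v_true) / np.exp(v_true).sum()      # logit choice probabilities

# Observed play: estimate P non-parametrically by choice frequencies
actions = rng.choice(3, size=100_000, p=P)
P_hat = np.bincount(actions, minlength=3) / actions.size

# Hotz-Miller inversion: v_hat(a, s) = log P_hat(a|s) - log P_hat(0|s)
v_hat = np.log(P_hat) - np.log(P_hat[0])
assert np.allclose(v_hat, v_true, atol=0.05)
```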

28 Dynamic Games: First Stage
[Bajari-Benkard-Levin '07]
If the payoff is linear in the parameters:
π_i(a, s, ε_i; θ) = Ψ_i(a, s, ε_i) · θ
then:
V_i(σ, s; θ) = W_i(σ, s) · θ
So it suffices to run the forward simulation once for each (policy, state) pair, and not for each parameter value, to get the first-stage estimates Ŵ_i(σ, s).
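A minimal forward-simulation sketch of this trick. The two-state chain, feature matrix Ψ, and discount factor are assumed for illustration; in the actual estimator they come from the first-stage estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
beta_disc = 0.9
T, paths = 100, 1000
trans = np.array([[0.7, 0.3], [0.4, 0.6]])   # state transition matrix (assumed)
Psi = np.array([[1.0, 0.0], [0.5, 2.0]])     # feature vector Psi(s) per state (assumed)

def simulate_W(s0):
    # W(s0) = E[sum_t beta^t Psi(s_t) | s_0 = s0], averaged over sample paths
    W = np.zeros(2)
    for _ in range(paths):
        s, disc = s0, 1.0
        for _ in range(T):
            W += disc * Psi[s]
            disc *= beta_disc
            s = int(rng.random() < trans[s, 1])   # draw the next state
    return W / paths

W0 = simulate_W(0)

# V(s0; theta) = W(s0) . theta for ANY theta, with no further simulation
for theta in (np.array([1.0, -1.0]), np.array([0.2, 3.0])):
    V = W0 @ theta

# Check against the exact solution of the linear system W = Psi + beta * trans @ W
W_exact = np.linalg.solve(np.eye(2) - beta_disc * trans, Psi)[0]
assert np.allclose(W0, W_exact, rtol=0.1)
```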

29 Dynamic Games: Second Stage
[Bajari-Benkard-Levin '07]
By equilibrium we know that, for every player i, state s and deviation σ_i':
g(i, s, σ_i'; θ_0) = V_i(σ, s; θ_0) − V_i((σ_i', σ_{−i}), s; θ_0) ≥ 0
We can use an extremum estimator:
- Define a probability distribution over (player, state, deviation) triplets
- Compute the expected squared violation of the deviation inequalities under that distribution: Q(θ) = E[min{g(i, s, σ_i'; θ), 0}²]
- By equilibrium, Q(θ_0) = 0 = min_θ Q(θ)
- Use the empirical analogue with the estimate ĝ(i, s, σ_i'; θ) = V̂_i(σ̂, s; θ) − V̂_i((σ_i', σ̂_{−i}), s; θ) coming from the first-stage estimates
Two sources of error:
- Error in σ̂ and P̂(s' | s, a): √n-consistent, asymptotically normal for discrete actions/states
- Simulation error: can be made arbitrarily small by taking as many sample paths as you want
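The shape of this extremum objective can be illustrated on a toy linear specification (the gain function g and its coefficients are made up for illustration): Q is zero wherever all sampled equilibrium inequalities hold, and positive where some inequality is violated:

```python
import numpy as np

theta0 = 1.5
rng = np.random.default_rng(8)
# Hypothetical per-deviation gains, linear in theta: g_d(theta) = a_d*theta + b_d,
# constructed so that g_d(theta0) lies in [0, 1], i.e. no inequality is
# violated at the true parameter.
a = rng.normal(size=300)
b = rng.uniform(0, 1, size=300) - a * theta0

def Q_hat(theta):
    # empirical analogue of Q(theta) = E[min{g, 0}^2]
    g = a * theta + b
    return np.mean(np.minimum(g, 0.0) ** 2)

assert Q_hat(theta0) == 0.0          # no violated inequalities at theta_0
assert Q_hat(theta0 + 1.0) > 0.0     # violations appear away from theta_0
assert Q_hat(theta0 - 1.0) > 0.0
```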

30 Recap of the Main Idea
At equilibrium, agents hold beliefs about the other players' actions and best respond to those beliefs. If the econometrician observes the same information about opponents as each player does, then:
1. Estimate these beliefs from the data in a first stage
2. Apply the best-response inequalities to these estimated beliefs in a second stage, and infer the parameters of the utility

31 References
Primer on Econometric Theory
- Newey-McFadden, 1994: Large sample estimation and hypothesis testing, Chapter 36, Handbook of Econometrics
- Amemiya, 1985: Advanced Econometrics, Harvard University Press
- Hong, 2012: Stanford University, Dept. of Economics, course ECO276, Limited Dependent Variables
Static Games of Incomplete Information
- Bajari-Hong-Krainer-Nekipelov, 2006: Estimating static models of strategic interactions, Journal of Business and Economic Statistics
- Berry-Tamer, 2006: Identification in models of oligopoly entry, Advances in Economics and Econometrics
Dynamic Games of Incomplete Information
- Hotz-Miller, 1993: Conditional choice probabilities and the estimation of dynamic models, Review of Economic Studies
- Pesendorfer-Schmidt-Dengler, 2003: Identification and estimation of dynamic games
- Bajari-Benkard-Levin, 2007: Estimating dynamic models of imperfect competition, Econometrica
- Aguirregabiria-Mira, 2007: Sequential estimation of dynamic discrete games, Econometrica
- Pakes-Ostrovsky-Berry, 2007: Simple estimators for the parameters of discrete dynamic games (with entry/exit examples), RAND Journal of Economics
- Bajari-Chernozhukov-Hong-Nekipelov, 2009: Non-parametric and semi-parametric analysis of a dynamic game model
Surveys on Econometric Theory for Games
- Ackerberg-Benkard-Berry-Pakes, 2006: Econometric tools for analyzing market outcomes, Handbook of Econometrics
- Bajari-Hong-Nekipelov, 2010: Game theory and econometrics: a survey of some recent research, NBER
Semi-Parametric Two-Stage Estimation and √n-Consistency
- Robinson, 1988: Root-n-consistent semiparametric regression, Econometrica
- Newey, 1990: Semiparametric efficiency bounds, Journal of Applied Econometrics
- Newey, 1994: The asymptotic variance of semiparametric estimators, Econometrica
- Ai-Chen, 2003: Efficient estimation of models with conditional moment restrictions containing unknown functions, Econometrica
- Chen, 2008: Large sample sieve estimation of semi-nonparametric models, Chapter 76, Handbook of Econometrics
- Hong, 2012: ECO276, Lecture 5: Basic asymptotics for √n-consistent semiparametric estimation
- Chernozhukov-Chetverikov-Demirer-Duflo-Hansen-Newey, 2016: Double Machine Learning for Treatment and Causal Parameters