SSRN Author: Drew Fudenberg (SSRN Content)
https://privwww.ssrn.com/author=20257
https://privwww.ssrn.com/rss/en-us
Tue, 21 Jul 2020 01:31:14 GMT | editor@ssrn.com (Editor) | webmaster@ssrn.com (WebMaster)
SSRN RSS Generator 1.0

REVISION: How Flexible is that Functional Form? Quantifying the Restrictiveness of Theories
We propose a new way to quantify the restrictiveness of an economic model, based on how well the model fits simulated, hypothetical data sets. The data sets are drawn at random from a distribution that satisfies some application-dependent content restrictions (such as that people prefer more money to less). Models that can fit almost all hypothetical data well are not restrictive.

To illustrate our approach, we evaluate the restrictiveness of two widely-used behavioral models, Cumulative Prospect Theory and the Poisson Cognitive Hierarchy Model, and explain how restrictiveness reveals new insights about them.
https://privwww.ssrn.com/abstract=3580408
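As a rough illustration of the procedure this abstract describes, here is a minimal Python sketch: draw hypothetical data sets at random subject to a content restriction (utility increasing in money), fit a parametric model to each, and average the fitting error. The power-utility family, the monotone-data sampler, and the constant-prediction normalization are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of money amounts (normalized to (0, 1]).
x = np.linspace(0.05, 1.0, 20)

def random_monotone_data():
    """Draw a hypothetical data set satisfying the content restriction
    that utility is increasing in money (more money preferred to less)."""
    u = np.cumsum(rng.random(x.size))
    return u / u[-1]  # strictly increasing, normalized to end at 1

def model(a):
    """Illustrative one-parameter model: power utility u(x) = x**a."""
    return x ** a

def best_fit_error(u):
    """Grid-search the parameter; return the smallest mean squared error."""
    grid = np.linspace(0.1, 5.0, 200)
    return min(np.mean((model(a) - u) ** 2) for a in grid)

# Restrictiveness (sketch): average inability of the model to fit random
# permissible data sets, normalized by a constant-prediction benchmark.
# Near 0: the model fits almost any permissible data (flexible).
datasets = [random_monotone_data() for _ in range(200)]
model_err = np.mean([best_fit_error(u) for u in datasets])
naive_err = np.mean([np.mean((u - u.mean()) ** 2) for u in datasets])
restrictiveness = model_err / naive_err
print(round(float(restrictiveness), 3))
```

The low value a run produces reflects that a one-parameter power family can approximate most increasing, normalized data on this grid; richer restrictions or sparser models would push the measure up.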
https://privwww.ssrn.com/1922591.html
Mon, 20 Jul 2020 08:42:21 GMT

REVISION: How Flexible is that Functional Form? Quantifying the Restrictiveness of Theories
We propose an algorithm for quantifying the restrictiveness of economic models. Our restrictiveness measure is evaluated on simulated, hypothetical data sets that are drawn at random from a distribution that satisfies some application-dependent content restrictions, such as that people should prefer more money to less. We measure how well the model fits each of these data sets. Models that can fit almost all data well are not restrictive. We illustrate our approach by evaluating the restrictiveness of two widely-used behavioral models: Cumulative Prospect Theory and the Poisson Cognitive Hierarchy Model.
https://privwww.ssrn.com/abstract=3580408
https://privwww.ssrn.com/1920148.html
Mon, 13 Jul 2020 09:14:54 GMT

REVISION: Quantifying the Restrictiveness of Theories
We propose an algorithm for quantifying the restrictiveness of economic models. Our restrictiveness measure is evaluated on simulated, hypothetical data sets that are drawn at random from a distribution that satisfies some application-dependent content restrictions, such as that people should prefer more money to less. We measure how well the model fits each of these data sets. Models that can fit almost all data well are not restrictive.

We illustrate our approach by evaluating the restrictiveness of two widely-used behavioral models: Cumulative Prospect Theory and the Poisson Cognitive Hierarchy Model.
https://privwww.ssrn.com/abstract=3580408
https://privwww.ssrn.com/1918373.html
Wed, 08 Jul 2020 11:12:45 GMT

REVISION: Limit Points of Endogenous Misspecified Learning
We study how a misspecified agent learns from endogenous data when their prior belief does not impose restrictions on the distribution of outcomes, but can assign probability 0 to a neighborhood of the true model. We characterize the stable actions, which have a very high probability of being the long-run outcome for some initial beliefs, and the positively attracting actions, which have a positive probability of being the long-run outcome for any initial full support belief. A Berk-Nash equilibrium is uniformly strict if the equilibrium action is a strict best response to all the outcome distributions that minimize the Kullback-Leibler divergence from the truth, and uniform if the action is a best response to all those distributions. Uniform Berk-Nash equilibria are the unique possible limit actions under a myopic policy. All uniformly strict Berk-Nash equilibria are stable. They are positively attractive under causation neglect, where the agent believes that their action does not ...
https://privwww.ssrn.com/abstract=3553363
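The Kullback-Leibler minimization that defines Berk-Nash equilibria in this abstract can be made concrete with a toy computation: given a true outcome distribution and a misspecified one-parameter family that excludes it, find the parameter values whose implied distributions minimize KL divergence from the truth. The three-outcome setup and the particular family below are invented for illustration only.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# True outcome distribution over three outcomes.
truth = np.array([0.5, 0.35, 0.15])

# Agent's misspecified family: theta splits mass between outcome 1 and
# outcomes {2, 3} in a fixed 60/40 ratio, so no theta matches the truth.
thetas = np.linspace(0.01, 0.99, 99)
def candidate(theta):
    return np.array([theta, (1 - theta) * 0.6, (1 - theta) * 0.4])

# A misspecified learner's beliefs concentrate on the KL minimizers
# (Berk's theorem); an action is a uniform Berk-Nash equilibrium if it
# is a best response to every distribution in this minimizing set.
divs = np.array([kl(truth, candidate(t)) for t in thetas])
minimizers = thetas[np.isclose(divs, divs.min())]
print(minimizers)
```

Here the minimizing set is the single distribution with theta = 0.5, which matches the truth's mass on outcome 1 but still misallocates mass between outcomes 2 and 3; when the set is not a singleton, the uniform/uniformly-strict distinction in the abstract becomes substantive.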
https://privwww.ssrn.com/1901293.html
Tue, 26 May 2020 14:29:28 GMT

REVISION: Limit Points of Endogenous Misspecified Learning
We study how a misspecified agent learns from endogenous data when their prior belief does not impose restrictions on the distribution of outcomes, but can assign probability 0 to a neighborhood of the true model. We characterize the stable actions, which have a very high probability of being the long-run outcome for some initial beliefs, and the positively attracting actions, which have a positive probability of being the long-run outcome for any initial full support belief. A Berk-Nash equilibrium is uniformly strict if the equilibrium action is a strict best response to all the outcome distributions that minimize the Kullback-Leibler divergence from the truth, and uniform if the action is a best response to all those distributions. Uniform Berk-Nash equilibria are the unique possible limit actions under a myopic policy. All uniformly strict Berk-Nash equilibria are stable. They are positively attractive under causation neglect, where the agent believes that their action does not ...
https://privwww.ssrn.com/abstract=3553363
https://privwww.ssrn.com/1898570.html
Mon, 18 May 2020 16:19:44 GMT

REVISION: Limit Points of Endogenous Misspecified Learning
We study how a misspecified agent learns from endogenous data when their prior belief does not impose restrictions on the distribution of outcomes, but can assign probability 0 to a neighborhood of the true model. We characterize the stable actions, which have a very high probability of being the long-run outcome for some initial beliefs, and the positively attracting actions, which have a positive probability of being the long-run outcome for any initial full support belief. A Berk-Nash equilibrium is uniformly strict if the equilibrium action is a strict best response to all the outcome distributions that minimize the Kullback-Leibler divergence from the truth, and uniform if the action is a best response to all those distributions. Uniform Berk-Nash equilibria are the unique possible limit actions under a myopic policy. All uniformly strict Berk-Nash equilibria are stable. They are positively attractive under causation neglect, where the agent believes that their action does ...
https://privwww.ssrn.com/abstract=3553363
https://privwww.ssrn.com/1888102.html
Tue, 21 Apr 2020 09:37:16 GMT

REVISION: Limit Points of Endogenous Misspecified Learning
We study how a misspecified agent learns from endogenous data when their prior belief does not impose restrictions on the distribution of outcomes, but can assign probability 0 to a neighborhood of the true model. We characterize the stable actions, which have a very high probability of being the long-run outcome for some initial beliefs, and the positively attracting actions, which have a positive probability of being the long-run outcome for any initial full support belief. A Berk-Nash equilibrium is uniformly strict if the equilibrium action is a strict best response to all the outcome distributions that minimize the Kullback-Leibler divergence from the truth, and uniform if the action is a best response to all those distributions. Uniform Berk-Nash equilibria are the unique possible limit actions under a myopic policy. All uniformly strict Berk-Nash equilibria are stable. They are positively attractive under causation neglect, where the agent believes that their action does ...
https://privwww.ssrn.com/abstract=3553363
https://privwww.ssrn.com/1882153.html
Fri, 03 Apr 2020 17:34:47 GMT

REVISION: Measuring the Completeness of Theories
To evaluate how well economic models predict behavior it is important to have a measure of how well any theory could be expected to perform. We provide a measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We evaluate the completeness of leading theories in three applications (assigning certainty equivalents to lotteries, initial play in games, and human generation of random sequences) and show that this approach reveals new insights. We also illustrate how and why our completeness measure varies with the experiments considered, for example with the choice of lotteries used to evaluate risk preferences, and explain how our completeness measure can help guide the development of new theories.
https://privwww.ssrn.com/abstract=3018785
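The completeness idea in this abstract, the share of the predictable variation a theory captures, can be sketched numerically: compare a theory's prediction error against a naive benchmark (unconditional mean) and against an estimate of the best achievable error. The synthetic task, the linear "theory," and the conditional-mean ceiling below are illustrative assumptions; the papers estimate the ceiling with machine learning on real experimental data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic task: predict y from a discrete feature x with noise.
x = rng.integers(0, 5, size=2000)
y = x ** 2 + rng.normal(0.0, 1.0, size=2000)  # true signal plus noise

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# Naive benchmark: always predict the unconditional mean.
naive_err = mse(np.full_like(y, y.mean()), y)

# "Best achievable" error, approximated here by the conditional mean of
# y given each value of x (the irreducible-noise floor).
table = {v: y[x == v].mean() for v in np.unique(x)}
best_err = mse(np.array([table[v] for v in x]), y)

# A candidate theory: the linear model y = a + b*x, fit by least squares.
b, a = np.polyfit(x, y, 1)
model_err = mse(a + b * x, y)

# Completeness: share of the predictable variation the theory captures,
# 0 for the naive benchmark, 1 for the best achievable predictor.
completeness = (naive_err - model_err) / (naive_err - best_err)
print(round(completeness, 3))
```

In this toy setup the linear theory is quite complete despite being misspecified for a quadratic signal, which mirrors the abstract's point: completeness depends on both the theory and the data-generating environment it is evaluated on.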
https://privwww.ssrn.com/1861003.html
Mon, 27 Jan 2020 17:27:55 GMT

REVISION: Measuring the Completeness of Theories
We use machine learning to provide a tractable measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We apply this measure to three problems: assigning certainty equivalents to lotteries, initial play in games, and human generation of random sequences. We discover considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will improve predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role played by the choice of which experiments to run.
https://privwww.ssrn.com/abstract=3018785
https://privwww.ssrn.com/1828472.html
Sat, 28 Sep 2019 00:53:02 GMT

REVISION: The Theory Is Predictive, but Is It Complete?
We show how methods based on machine learning can provide tractable measures of how much of the predictable variation in the data a theory captures, which we call its "completeness." We apply this measure to three domains: the evaluation of risk, initial play in games, and human attempts to generate random signals. We discover considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will aid predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role played by the choice of which experiments to run.
https://privwww.ssrn.com/abstract=3018785
https://privwww.ssrn.com/1822420.html
Mon, 09 Sep 2019 10:23:18 GMT