SSRN Author: Gregor Reich (SSRN Content)
https://privwww.ssrn.com/author=1776931
https://privwww.ssrn.com/rss/en-us
Wed, 24 Mar 2021 01:01:11 GMT
editor@ssrn.com (Editor)
webmaster@ssrn.com (WebMaster)
SSRN RSS Generator 1.0

REVISION: Structural Estimation by Homotopy Continuation with an Application to Discount Factor Estimation
We develop a method to robustly estimate parameters of structural economic models with potential identification issues. Using homotopy path continuation applied to the MPEC formulation of the estimation problem (Su and Judd, 2012), we trace the parameter estimates and their confidence intervals as a function of a controlled parameter. As the discount factor is commonly assumed to be poorly identified in DDCMs, we trace the parameter estimates of the bus engine replacement model by Rust (1987) as a function of the discount factor β. Applying methods developed for undiscounted dynamic programming, we find that β is well identified and statistically significantly larger than 1. We establish an economically reasonable qualitative link between the decision-maker's discounting and the real interest rates: in an extended model with an unanticipated structural break in β, the decrease in β qualitatively agrees with the macroeconomic regime change in the real interest rates during the ...
https://privwww.ssrn.com/abstract=3303999
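The continuation idea described above (fix the parameter of interest on a grid, re-solve the estimation problem at each grid point, and warm-start each solve from the previous solution so the traced path stays on one solution branch) can be sketched on a toy criterion. The quadratic pseudo-likelihood and all names below are illustrative assumptions, not the paper's actual MPEC formulation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, beta, data):
    # toy criterion whose optimizer theta_hat(beta) = mean(data) / beta
    # drifts smoothly with the controlled parameter beta
    return np.sum((data - theta * beta) ** 2)

def trace_path(betas, data, theta0=0.0):
    """Trace theta_hat(beta) along a grid of fixed betas, warm-starting
    each solve from the previous solution (homotopy continuation)."""
    path = []
    theta = np.atleast_1d(theta0)
    for beta in betas:
        res = minimize(neg_log_like, theta, args=(beta, data))
        theta = res.x  # warm start for the next grid point
        path.append(theta[0])
    return np.array(path)

data = np.full(10, 2.0)
betas = np.linspace(0.5, 1.5, 11)
path = trace_path(betas, data)
# for this toy criterion, theta_hat(beta) = 2 / beta
```

Warm-starting is what keeps the traced path continuous even when the full joint problem is poorly identified; in the paper's setting, confidence intervals would be traced alongside the point estimates.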
https://privwww.ssrn.com/2006195.html
Tue, 23 Mar 2021 12:39:42 GMT

New: Tests of Return Predictability: A Comparative Study
Evidence of return predictability based on dividend yields remains inconclusive. Predictive regressions are subject to subtle econometric issues rendering standard inference inappropriate. As a consequence, there now exists a broad range of statistical tests. We review the tests from Stambaugh (1999), Lewellen (2004), Campbell and Yogo (2006), and Cochrane (2007) and complement our analysis with standard maximum likelihood as well as restricted maximum likelihood estimation as discussed by Chen and Deo (2009). When applied to two samples from the literature, we find that p-values range from almost 0% to more than 20%. We reconcile the conflicting evidence and identify two main sources: First, we attribute part of it to the two different sample periods. Second, going beyond the p-value, we find that some of the tests are severely size-distorted, whereas others lack power. To demonstrate this, we use an extensive simulation study to assess the size (falsely rejecting a true ...
https://privwww.ssrn.com/abstract=3765046
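The size distortion of naive inference in this setting is easy to reproduce by simulation. The data-generating process below (persistent predictor, innovations negatively correlated with returns, true slope of zero) is a generic textbook Stambaugh-type setup; all parameter values are chosen for illustration and are not taken from the paper:

```python
import numpy as np

def simulate_rejection_rate(T=60, rho=0.98, corr_uv=-0.9, n_sim=2000, seed=0):
    """Monte Carlo size of the naive OLS t-test of b = 0 in the predictive
    regression y_t = a + b*x_{t-1} + u_t, where x is highly persistent and
    its innovations are correlated with u (the classic problem case)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(np.array([[1.0, corr_uv], [corr_uv, 1.0]]))
    rejections = 0
    for _ in range(n_sim):
        e = rng.standard_normal((T + 1, 2)) @ chol.T   # correlated (u, v)
        x = np.zeros(T + 1)
        for t in range(1, T + 1):
            x[t] = rho * x[t - 1] + e[t, 1]
        y = e[1:, 0]                                   # true slope b = 0
        X = np.column_stack([np.ones(T), x[:-1]])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        s2 = resid @ resid / (T - 2)
        se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        if abs(coef[1] / se_b) > 1.96:                 # nominal 5% test
            rejections += 1
    return rejections / n_sim

rate = simulate_rejection_rate()
```

With values like these, the empirical rejection rate of the nominal 5% test typically lands well above 5%, which is exactly the kind of size distortion the comparative study documents.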
https://privwww.ssrn.com/1997970.html
Thu, 04 Mar 2021 02:00:27 GMT

REVISION: 'Small Data': Efficient Inference with Occasionally Observed States
We study the estimation of dynamic economic models if some of the state variables are observed only occasionally by the econometrician, a common problem in many fields, ranging from industrial organization and marketing to finance. If such occasional state observations are serially correlated, the likelihood function of the model becomes a potentially high-dimensional integral over a non-standard domain. We propose a method that generalizes the recursive likelihood function integration procedure (RLI; Reich, 2018) to numerically approximate this integral and demonstrate its statistical efficiency in several well-understood examples from finance and industrial organization. Further, we compare the performance of our approach to a recently suggested method of simulated moments in extensive Monte Carlo studies. In all our demonstrations, we can consistently and efficiently identify all model parameters, and we find that the additional variance of our estimator when going from full ...
https://privwww.ssrn.com/abstract=3638618
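The core computational object here, the likelihood as an integral (in the discrete case: a sum) over unobserved state paths between occasional observations, can be illustrated with a forward recursion on a small Markov chain. This is a generic hidden-state recursion written for illustration only; it is not the RLI algorithm of Reich (2018), which handles continuous states:

```python
import numpy as np

def likelihood_occasional(P, obs, T):
    """Likelihood of occasionally observed states of a Markov chain with
    transition matrix P.  `obs` maps time -> observed state; at unobserved
    times the recursion sums (integrates) over all possible states."""
    n = P.shape[0]
    alpha = np.zeros(n)
    if 0 in obs:
        alpha[obs[0]] = 1.0
    else:
        alpha[:] = 1.0 / n          # uniform initial belief (an assumption)
    like = 1.0
    for t in range(1, T):
        alpha = alpha @ P           # propagate belief over the hidden step
        if t in obs:
            like *= alpha[obs[t]]   # condition on the observation
            alpha = np.zeros(n)
            alpha[obs[t]] = 1.0
    return like

P = np.array([[0.9, 0.1],
              [0.0, 1.0]])
# state observed only at t = 0 and t = 3; t = 1, 2 are integrated out
L = likelihood_occasional(P, {0: 0, 3: 1}, T=4)
```

For this chain the value equals P(X3 = 1 | X0 = 0) = 1 - 0.9**3 = 0.271; with every period observed, the recursion reduces to the usual product of transition probabilities.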
https://privwww.ssrn.com/1994842.html
Wed, 24 Feb 2021 13:36:41 GMT

REVISION: A Note on the Non-Proportionality of Winning Probabilities in Bitcoin
It is widely assumed that the selection process in a blockchain is based on proportional winning probabilities. The reliability and security of any blockchain are based upon this assumption. However, making an analogy between the Bitcoin protocol and the classical statistical urn problem, we argue that, at least on a theoretical level, the selection process in several blockchains is based on nonproportional winning probabilities. This reveals a misconception regarding the incentive structure of many blockchain protocols. We develop an empirical approach to testing for nonproportional winning probabilities in any blockchain, and offer a solution to this problem.
https://privwww.ssrn.com/abstract=3399742
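The urn analogy can be made concrete with a small simulation: a miner searching its own nonce space without replacement sees the conditional success probability of each next draw rise after every failure. The race below (two miners, hash rates 2:1, toy search-space sizes) is purely illustrative and is not the empirical test proposed in the note:

```python
import numpy as np

def mine_blocks(hash_rates, N=60, w=1, n_blocks=4000, seed=1):
    """Race between miners drawing nonces WITHOUT replacement from their
    own search space of N nonces, w of which are valid.  Each period every
    miner makes hash_rates[i] draws; the first valid draw wins the block.
    Returns each miner's empirical share of blocks won."""
    rng = np.random.default_rng(seed)
    wins = np.zeros(len(hash_rates), dtype=int)
    for _ in range(n_blocks):
        remaining = np.full(len(hash_rates), float(N))
        valid = np.full(len(hash_rates), float(w))
        winner = None
        while winner is None:
            for i, h in enumerate(hash_rates):
                for _ in range(h):
                    # success probability rises as the space is exhausted
                    if rng.random() < valid[i] / remaining[i]:
                        winner = i
                        break
                    remaining[i] -= 1
                if winner is not None:
                    break
        wins[winner] += 1
    return wins / n_blocks

shares = mine_blocks([2, 1])
# under proportionality the long-run shares would be exactly [2/3, 1/3]
```

Comparing the simulated shares against the hash-rate shares is the simplest way to see whether a selection mechanism is proportional; the note's empirical approach targets the same discrepancy in real chain data.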
https://privwww.ssrn.com/1992942.html
Fri, 19 Feb 2021 09:20:58 GMT

REVISION: Learning About Learning in Blockchain Protocols
The key differential characteristic of a blockchain is its protocol, an economic mechanism that incentivizes the agents participating in the system to act in a decentralized manner. To work properly, blockchain protocols must not offer incentives for the miners to centralize their resources. The viability and security of blockchains are based upon this feature. Making an analogy between the bitcoin protocol (the most prominent consensus mechanism used in the context of blockchain) and the classical statistical urn problem, we find that, in the long run, this protocol induces within-block learning and therefore provides incentives for the miners to centralize their resources. This puts systems based upon this and other similar protocols at high risk. We develop an empirical approach to test our theoretical findings using an empirical counterpart of a proof by contradiction. This approach can be applied to the data emerging from any kind of blockchain protocol to ...
https://privwww.ssrn.com/abstract=3399742
https://privwww.ssrn.com/1979422.html
Mon, 11 Jan 2021 16:48:32 GMT

REVISION: Efficient Likelihood Ratio Confidence Intervals using Constrained Optimization
Using constrained optimization, we develop a simple, efficient approach (applicable in both unconstrained and constrained maximum-likelihood estimation problems) to computing profile-likelihood confidence intervals. In contrast to Wald-type or score-based inference, likelihood ratio confidence intervals use all the information encoded in the likelihood function concerning the parameters, which leads to improved statistical properties. In addition, the method does not suffer from the computational burdens inherent in the bootstrap. In an application to Rust's (1987) bus-engine replacement problem, our approach does better than either the Wald or the bootstrap methods, delivering very accurate estimates of the confidence intervals quickly and efficiently. An extensive Monte Carlo study reveals that in small samples, only likelihood ratio confidence intervals yield reasonable coverage properties, while at the same time discriminating implausible values.
https://privwww.ssrn.com/abstract=3455484
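The constrained-optimization formulation can be demonstrated on a model where the answer is known in closed form: for an N(mu, 1) sample, the likelihood-ratio interval for mu coincides with the Wald interval mu_hat ± 1.96/sqrt(n). The sketch below uses that toy likelihood rather than the dynamic-model likelihoods of the paper; each endpoint is found by minimizing (or maximizing) mu subject to the log-likelihood staying above the chi-squared cutoff:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint
from scipy.stats import chi2

def loglike(mu, data):
    # N(mu, 1) log-likelihood with constants dropped
    return -0.5 * np.sum((data - mu) ** 2)

def profile_ci(data, level=0.95):
    """Likelihood-ratio CI for mu: optimize mu subject to
    logL(mu) >= logL(mu_hat) - chi2_{1,level}/2, via SLSQP."""
    mu_hat = data.mean()
    cutoff = loglike(mu_hat, data) - 0.5 * chi2.ppf(level, df=1)
    con = NonlinearConstraint(lambda m: loglike(m[0], data), cutoff, np.inf)
    lo = minimize(lambda m: m[0], [mu_hat],
                  constraints=[con], method="SLSQP").x[0]
    hi = minimize(lambda m: -m[0], [mu_hat],
                  constraints=[con], method="SLSQP").x[0]
    return lo, hi

rng = np.random.default_rng(0)
data = rng.standard_normal(50) + 1.0
lo, hi = profile_ci(data)
# for this Gaussian toy model: lo, hi = mu_hat -/+ 1.96 / sqrt(50)
```

In non-Gaussian models the two optimizations no longer give a symmetric interval, which is precisely where the likelihood-ratio interval improves on the Wald interval while remaining far cheaper than the bootstrap.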
https://privwww.ssrn.com/1970433.html
Thu, 10 Dec 2020 09:32:22 GMT

REVISION: 'Small Data': Efficient Inference with Occasionally Observed States
We study the estimation of controlled Markov processes if the states are only occasionally observed by the econometrician. We propose an extension to the recursive likelihood integration method of Reich (2018), into which we incorporate such occasional state observations in a numerically efficient and accurate way. To evaluate the performance of the proposed method, we assess the computational feasibility as well as the statistical efficiency by applying it to a counterfactual scenario of the widely known bus engine replacement model of Rust (1987): we assume that the mileage state is observed only at replacement, but unobserved in between. We demonstrate that, despite reducing the number of mileage observations to only about 2% of the original data set, the distribution of the cost parameter estimator under the occasional observation regime is almost indistinguishable from its distribution using all mileage observations; hence there is no (additional) bias, and the variance is comparable.
https://privwww.ssrn.com/abstract=3638618
https://privwww.ssrn.com/1958961.html
Thu, 05 Nov 2020 16:54:06 GMT

Update: Urns Filled with Bitcoins: New Perspectives on Proof-of-Work Mining
The probability of a miner finding a valid block in the bitcoin blockchain is assumed to follow the Poisson distribution. However, simple descriptive statistical analysis reveals that blocks requiring a lot of time to find (long blocks) are won only by miners with a relatively higher hash power per second. This suggests that relatively bigger miners might have an advantage with regard to winning long blocks, which can be understood as a sort of "within-block learning". Modelling the bitcoin mining problem as a race, and by means of a multinomial logit model, we can reject the hypothesis that the time spent mining a particular block does not affect the probability of a miner finding a valid version of this block in a manner that is proportional to her size. Further, we postulate that the probability of a miner finding a valid block is governed by the negative hypergeometric distribution. This would explain the descriptive statistics that emerge from the data and be aligned with the technical ...
(The paper was removed.)
https://privwww.ssrn.com/abstract=3399742
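The distributional claim, that sampling nonces without replacement makes the number of draws to the first valid block negative hypergeometric rather than geometric (and hence block times non-Poisson), can be checked by direct simulation. The parameter values below are toy choices for illustration:

```python
import numpy as np

def draws_to_success_without_replacement(N, w, rng):
    """Number of draws until the first valid nonce when sampling WITHOUT
    replacement from N nonces of which w are valid; this count follows a
    negative hypergeometric distribution."""
    remaining, valid = float(N), float(w)
    draws = 0
    while True:
        draws += 1
        if rng.random() < valid / remaining:
            return draws
        remaining -= 1

rng = np.random.default_rng(42)
N, w, n = 1000, 10, 10000
nhg = np.array([draws_to_success_without_replacement(N, w, rng)
                for _ in range(n)])
geo = rng.geometric(w / N, size=n)   # with-replacement benchmark
# without replacement, success arrives sooner on average:
# E[draws] = (N + 1) / (w + 1) versus N / w for the geometric case,
# and the draw count is bounded above by N - w + 1
```

The bounded support and lower mean are what distinguish the without-replacement model from the memoryless one, matching the descriptive pattern that long blocks behave differently from short ones.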
https://privwww.ssrn.com/1951898.html
Fri, 16 Oct 2020 05:22:01 GMT

REVISION: Adaptive Grids for the Estimation of Dynamic Models
This paper develops a method to flexibly adapt interpolation grids of value function approximations in the estimation of dynamic models using either NFXP (Rust, 1987) or MPEC (Su and Judd, 2012). Since MPEC requires the grid structure for the value function approximation to be hard-coded into the constraints, one cannot apply iterative node insertion for grid refinement; for NFXP, grid adaptation by (iteratively) inserting new grid nodes will generally lead to discontinuous likelihood functions. Therefore, we show how to continuously adapt the grid by moving the nodes, a technique referred to as r-adaption. We demonstrate how to obtain optimal grids based on the balanced error principle, and implement this approach by including additional constraints in the likelihood maximization problem. The method is applied to two models: (i) the bus engine replacement model (Rust, 1987), modified to feature a continuous mileage state, and (ii) a dynamic model of content consumption using ...
https://privwww.ssrn.com/abstract=2650994
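The r-adaption idea, improving the approximation by relocating a fixed number of nodes rather than inserting new ones, can be illustrated in one dimension with piecewise-linear interpolation and a simple equidistribution rule. The density function and all numbers below are illustrative assumptions, not the balanced-error construction of the paper:

```python
import numpy as np

def r_adapt(a, b, n, density):
    """Relocate n interpolation nodes (fixed count, fixed endpoints) so
    that they equidistribute the given error-density function: the grid
    is refined by MOVING nodes, never by inserting new ones."""
    x = np.linspace(a, b, 2001)
    w = density(x)
    cdf = np.concatenate([[0.0],
                          np.cumsum((w[1:] + w[:-1]) / 2 * np.diff(x))])
    cdf /= cdf[-1]
    # invert the cumulative density at n equally spaced levels
    return np.interp(np.linspace(0.0, 1.0, n), cdf, x)

f = lambda x: np.tanh(20 * (x - 0.5))              # steep layer at x = 0.5
density = lambda x: 1.0 + 20 / np.cosh(20 * (x - 0.5)) ** 2  # ~ 1 + |f'|

n = 15
uniform = np.linspace(0.0, 1.0, n)
adapted = r_adapt(0.0, 1.0, n, density)

xx = np.linspace(0.0, 1.0, 5001)
err_uniform = np.max(np.abs(np.interp(xx, uniform, f(uniform)) - f(xx)))
err_adapted = np.max(np.abs(np.interp(xx, adapted, f(adapted)) - f(xx)))
# the adapted grid concentrates nodes in the layer and cuts the max error
```

Because the node count and endpoints never change, the parameterization of the approximation stays fixed, which is what makes this style of adaption compatible with hard-coded MPEC constraints and with smooth NFXP likelihoods.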
https://privwww.ssrn.com/1943191.html
Mon, 21 Sep 2020 08:54:53 GMT

REVISION: Adaptive Grids for the Estimation of Dynamic Models
https://privwww.ssrn.com/abstract=2650994
https://privwww.ssrn.com/1939058.html
Tue, 08 Sep 2020 09:20:02 GMT

REVISION: 'Small Data': Efficient Inference with Occasionally Observed States
https://privwww.ssrn.com/abstract=3638618
https://privwww.ssrn.com/1926916.html
Fri, 31 Jul 2020 08:36:07 GMT

REVISION: 'Small Data': Efficient Inference with Occasionally Observed States
https://privwww.ssrn.com/abstract=3638618
https://privwww.ssrn.com/1923938.html
Wed, 22 Jul 2020 09:54:37 GMT