[ECM] Econometrics: working papers (RePEc, 26/10/2010)

Source: NEP (New Economics Papers) | RePEc

  • Estimation and Inference with Weak, Semi-strong, and Strong Identification
Date: 2010-10
By: Donald W.K. Andrews (Cowles Foundation, Yale University)
Xu Cheng (Dept. Economics, University of Pennsylvania)
URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1773&r=ecm
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS’s) in a class of models in which the parameters are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS’s robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS’s, including maximum likelihood (ML), least squares (LS), quantile, generalized method of moments (GMM), generalized empirical likelihood (GEL), minimum distance (MD), and semi-parametric estimators. The consistency/lack-of-consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic size (in a uniform sense) of standard tests and CS’s is established. The results are applied to the ML estimator of an ARMA(1, 1) model and to the LS estimator of a nonlinear regression model.
Keywords: Asymptotic size, Confidence set, Estimator, Identification, Nonlinear models, Strong identification, Test, Weak identification
JEL: C12
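To fix ideas, here is a stylized version of the identification problem studied in the paper; the notation is illustrative rather than the authors' own:

$$ y_t = \beta\, h(x_t, \pi) + u_t, \qquad t = 1, \dots, n . $$

When $\beta = 0$, the parameter $\pi$ drops out of the model and is unidentified; under drifting sequences $\beta_n = b/\sqrt{n}$ it is only weakly identified, and standard asymptotics for $(\hat\beta, \hat\pi)$ break down. The ARMA(1,1) application is of the same type: in $y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$, the parameters are weakly identified when $\phi + \theta$ is close to zero, because the autoregressive and moving-average roots nearly cancel.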
  • Selection of weak VARMA models by modified Akaike’s information criteria
Date: 2010-06-21
By: Boubacar Mainassara, Yacouba
URL: http://d.repec.org/n?u=RePEc:pra:mprapa:24981&r=ecm
This article considers the problem of order selection for vector autoregressive moving-average models, and for the sub-class of vector autoregressive models, under the assumption that the errors are uncorrelated but not necessarily independent. We propose a modified version of the AIC (Akaike information criterion). This criterion requires the estimation of the matrices involved in the asymptotic variance of the quasi-maximum likelihood estimator of these models (a generic sketch of such a correction is given below). Monte Carlo experiments show that the proposed modified criterion estimates the model orders more accurately than the standard AIC and AICc (corrected AIC) in large samples, and often in small samples.
Keywords: AIC, discrepancy, identification, Kullback-Leibler information, model selection, QMLE, order selection, weak VARMA models.
JEL: C52
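For orientation, a generic sketch of the kind of correction involved (this is the Takeuchi-type form that such criteria take when the usual i.i.d. assumptions are relaxed, not necessarily the exact expression derived in the paper): with $\hat\theta$ the QMLE, $\hat J$ an estimate of minus the expected Hessian of the log-likelihood and $\hat I$ an estimate of the (long-run) variance of the score,

$$ \mathrm{AIC}_{\mathrm{mod}} = -2\,\log L_n(\hat\theta) + 2\,\operatorname{tr}\!\big(\hat J^{-1}\hat I\big), $$

which reduces to the standard penalty $2k$ (with $k$ the number of parameters) when the errors are i.i.d. and the model is correctly specified, since then $I = J$.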
  • Robust estimation of mean and dispersion functions in extended generalized additive models.
Date: 2010-09
By: Croux, Christophe
Gijbels, Irène
Prosdocimi, Ilaria
URL: http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/277060&r=ecm
Generalized Linear Models are a widely used method to obtain parametric estimates for the mean function. They have been further extended to allow the relationship between the mean function and the covariates to be more flexible via Generalized Additive Models. However the fixed variance structure can in many cases be too restrictive. The Extended Quasi-Likelihood (EQL) framework allows for estimation of both the mean and the dispersion/variance as functions of covariates. As for other maximum likelihood methods though, EQL estimates are not resistant to outliers: we need methods to obtain robust estimates for both the mean and the dispersion function. In this paper we obtain functional estimates for the mean and the dispersion that are both robust and smooth. The performance of the proposed method is illustrated via a simulation study and some real data examples.
Keywords: Dispersion; Generalized additive modelling; Mean regression function; M-estimation; P-splines; Robust estimation;
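Schematically, the setting is the double (mean and dispersion) generalized additive model; the notation below is illustrative:

$$ g\{\mu(x)\} = \sum_j f_j(x_j), \qquad h\{\gamma(x)\} = \sum_j s_j(x_j), \qquad \mathrm{Var}(Y \mid x) = \gamma(x)\, V\{\mu(x)\}, $$

with $V(\cdot)$ the variance function of the underlying exponential-family model. The paper replaces the extended quasi-likelihood fitting of the smooth components $f_j$ and $s_j$ by a robust, P-spline-based M-type alternative.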
  • Reality checks and nested forecast model comparisons
Date: 2010
By: Todd E. Clark
Michael W. McCracken
URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2010-032&r=ecm
This paper develops a novel and effective bootstrap method for simulating asymptotic critical values for tests of equal forecast accuracy and encompassing among many nested models. The bootstrap, which combines elements of fixed regressor and wild bootstrap methods, is simple to use. We first derive the asymptotic distributions of tests of equal forecast accuracy and encompassing applied to forecasts from multiple models that nest the benchmark model – that is, reality check tests applied to nested models. We then prove the validity of the bootstrap for these tests. Monte Carlo experiments indicate that our proposed bootstrap has better finite-sample size and power than other methods designed for comparison of non-nested models. We conclude with empirical applications to multiple-model forecasts of commodity prices and GDP growth.
Keywords: Economic forecasting
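A minimal sketch of the resampling idea, under assumed names and with heavy simplification; the actual procedure recomputes the full set of out-of-sample equal-accuracy and encompassing statistics on every artificial sample and reads off percentile critical values:

```python
import numpy as np

def wild_fixed_regressor_samples(y, X, n_boot=999, seed=0):
    """Generate artificial samples in the spirit of a wild fixed-regressor
    bootstrap: fit the benchmark (restricted) model once on the full sample,
    keep the regressors fixed, and rescale the residuals by independent
    standard-normal draws so the data are generated under the null."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # benchmark OLS fit
    fitted = X @ beta
    resid = y - fitted
    for _ in range(n_boot):
        eta = rng.standard_normal(len(y))          # wild multipliers
        yield fitted + eta * resid                 # one bootstrap series
```

Each generated series would then be pushed through the same recursive forecasting exercise as the original data to build the bootstrap distribution of the test statistics.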
  • Likelihood-Related Estimation Methods and Non-Gaussian GARCH Processes
Date: 2010-07
By: Christophe Chorro (CES – Centre d’économie de la Sorbonne – CNRS: UMR8174 – Université Panthéon-Sorbonne – Paris I)
Dominique Guegan (CES – Centre d’économie de la Sorbonne – CNRS: UMR8174 – Université Panthéon-Sorbonne – Paris I; EEP-PSE – Ecole d’Économie de Paris – Paris School of Economics)
Florian Ielpo (CES – Centre d’économie de la Sorbonne – CNRS: UMR8174 – Université Panthéon-Sorbonne – Paris I; Pictet Asset Management)
URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00523371_v1&r=ecm
This article discusses the finite-sample properties of three likelihood-based estimation strategies for GARCH processes with non-Gaussian conditional distributions: (1) the maximum likelihood approach; (2) the quasi-maximum likelihood approach; (3) a multi-step recursive estimation approach (REC). We first run a Monte Carlo experiment which shows that the recursive method may be the most relevant approach for estimation purposes. We then turn to a sample of SP500 returns and confirm that the REC estimates statistically dominate the parameters estimated by the two competing methods. Regardless of the selected model, the REC estimates deliver the most stable results.
Keywords: Maximum likelihood method, related-GARCH process, recursive estimation method, mixture of Gaussian distributions, Generalized Hyperbolic distributions, SP500.
  • Exponential conditional volatility models
Date: 2010-09
By: Andrew Harvey
URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws103620&r=ecm
The asymptotic distribution of maximum likelihood estimators is derived for a class of exponential generalized autoregressive conditional heteroskedasticity (EGARCH) models. The result carries over to models for duration and realised volatility that use an exponential link function. A key feature of the model formulation is that the dynamics are driven by the score.
Keywords: Duration models, Gamma distribution, General error distribution, Heteroskedasticity, Leverage, Score, Student’s t
JEL: C22
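A well-known member of this class is the Beta-t-EGARCH specification, written here for illustration only; the paper treats a broader family, including duration and realised-volatility models with an exponential link:

$$ y_t = \varepsilon_t\, e^{\lambda_{t|t-1}}, \quad \varepsilon_t \sim t_\nu, \qquad \lambda_{t+1|t} = \omega(1-\phi) + \phi\,\lambda_{t|t-1} + \kappa\, u_t, \qquad u_t = \frac{(\nu+1)\,y_t^2}{\nu\, e^{2\lambda_{t|t-1}} + y_t^2} - 1, $$

where $u_t$ is the conditional score of the log-density with respect to $\lambda_{t|t-1}$. Because the score is bounded for the Student-t case, extreme observations receive limited weight, which is part of what makes the asymptotic distribution of the ML estimator tractable.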
  • Robust forecasting of non-stationary time series.
Date: 2010-09
By: Croux, Christophe
Fried, R.
Gijbels, Irène
Mahieu, Koen
URL: http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/277099&r=ecm
This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estimator. An additional advantage of the MM-estimator is that it provides a robust estimate of the local variability of the time series.
Keywords: Heteroscedasticity; Non-parametric regression; Prediction; Outliers; Robustness;
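Schematically, the local fit at time $T$ solves a kernel-weighted robust regression problem of the form below (illustrative notation; the MM-estimator additionally combines a high-breakdown initial scale estimate with an efficient M-step):

$$ \hat m(T) = \arg\min_{m} \sum_{t=1}^{T} K\!\left(\frac{T - t}{h}\right) \rho\!\left(\frac{y_t - m}{\hat\sigma_t}\right), $$

where $K$ is a kernel with bandwidth $h$, $\rho$ a bounded loss function (e.g. Tukey's biweight) and $\hat\sigma_t$ a robust local scale estimate; the forecast is obtained by extrapolating the local fit.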
  • On model selection and model misspecification in causal inference.
Date: 2010-09
By: Vansteelandt, Stijn
Bekaert, Maarten
Claeskens, Gerda
URL: http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/277522&r=ecm
Standard variable-selection procedures, primarily developed for the construction of outcome prediction models, are routinely applied when assessing exposure effects in observational studies. We argue that this tradition is sub-optimal and prone to yield bias in exposure effect estimates as well as their corresponding uncertainty estimates. We weigh the pros and cons of confounder-selection procedures and propose a procedure directly targeting the quality of the exposure effect estimator. We further demonstrate that certain strategies for inferring causal effects have the desirable features (a) of producing (approximately) valid confidence intervals, even when the confounder-selection process is ignored, and (b) of being robust against certain forms of misspecification of the association of confounders with both exposure and outcome.
Keywords: Causal inference; Confounder selection; Double robustness; Influential weights; Model selection; Model uncertainty; Propensity score;
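As background on the "double robustness" referred to in the keywords, the standard augmented inverse-probability-weighted (AIPW) estimator of the mean outcome under exposure $A = 1$ is

$$ \hat\mu_1 = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{A_i Y_i}{\hat\pi(X_i)} - \frac{A_i - \hat\pi(X_i)}{\hat\pi(X_i)}\,\hat m_1(X_i)\right], $$

which is consistent if either the propensity-score model $\hat\pi(X)$ or the outcome model $\hat m_1(X) = \hat E(Y \mid A = 1, X)$ is correctly specified; this is the kind of protection against confounder-model misspecification that the strategies discussed in the paper exploit.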
  • Testing for unconditional predictive ability
Date: 2010
By: Todd E. Clark
Michael W. McCracken
URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2010-031&r=ecm
This chapter provides an overview of pseudo-out-of-sample tests of unconditional predictive ability. We begin by providing an overview of the literature, including both empirical applications and theoretical contributions. We then delineate two distinct methodologies for conducting inference: one based on the analytics in West (1996) and the other based on those in Giacomini and White (2006). These two approaches are then carefully described in the context of pairwise tests of equal forecast accuracy between two models. We consider both non-nested and nested comparisons. Monte Carlo evidence provides some guidance as to when the two forms of analytics are most appropriate, in a nested model context.
Keywords: Economic forecasting
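As a minimal illustration of an unconditional pairwise test of equal forecast accuracy, here is a Diebold-Mariano-type statistic; the chapter's treatment of nested comparisons and of the West (1996) adjustment for parameter-estimation error goes well beyond this sketch:

```python
import numpy as np

def dm_statistic(e1, e2, h=1):
    """Diebold-Mariano-type statistic for equal mean squared forecast error
    between two competing forecasts, using a simple HAC variance with h-1
    lags for h-step-ahead forecasts."""
    e1, e2 = np.asarray(e1, dtype=float), np.asarray(e2, dtype=float)
    d = e1 ** 2 - e2 ** 2              # loss differential
    P = len(d)
    dbar = d.mean()
    dc = d - dbar
    lrv = np.mean(dc ** 2)             # lag-0 autocovariance
    for k in range(1, h):
        lrv += 2.0 * np.mean(dc[k:] * dc[:-k])   # rectangular kernel up to lag h-1
    return dbar / np.sqrt(lrv / P)     # compare with standard normal quantiles
```

A significantly negative statistic favours the first forecast, a significantly positive one the second.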
  • Asymmetry and Long Memory in Volatility Modelling
Date: 2010-10-01
By: Manabu Asai
Michael McAleer (University of Canterbury)
Marcelo C. Medeiros
URL: http://d.repec.org/n?u=RePEc:cbt:econwp:10/60&r=ecm
A wide variety of conditional and stochastic variance models has been used to estimate latent volatility (or risk). In this paper, we propose a new long memory asymmetric volatility model which captures more flexible asymmetric patterns as compared with existing models. We extend the new specification to realized volatility by taking account of measurement errors, and use the Efficient Importance Sampling technique to estimate the model. As an empirical example, we apply the new model to the realized volatility of Standard and Poor’s 500 Composite Index to show that the new specification of asymmetry significantly improves the goodness of fit, and that the out-of-sample forecasts and Value-at-Risk (VaR) thresholds are satisfactory. Overall, the results of the out-of-sample forecasts show the adequacy of the new asymmetric and long memory volatility model for the period including the global financial crisis.
Keywords: Asymmetric volatility; long memory; realized volatility; measurement errors; efficient importance sampling
  • Long-run Identification in a Fractionally Integrated System
Date: 2010-09
By: Tschernig, Rolf
Weber, Enzo
Weigand, Roland
URL: http://d.repec.org/n?u=RePEc:bay:rdwiwi:16901&r=ecm
We propose an extension of structural fractionally integrated vector autoregressive models that avoids certain undesirable effects on impulse responses when long-run identification restrictions are imposed. We derive its Granger representation, investigate the effects of long-run restrictions and clarify their relation to finite-horizon schemes. Asymptotic analysis and simulations illustrate that enforcing integer integration orders can have severe consequences for impulse responses. In a system of US real output and aggregate prices, the effects of structural shocks depend strongly on the specification of the integration orders. In the statistically preferred fractional model, the long-run restricted shock has only a very short-lived influence on GDP.
Keywords: Long memory; structural VAR; misspecification; GDP; price level
JEL: C32
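For reference, in the integer-order benchmark that the paper generalizes, long-run identification works as follows (Blanchard-Quah-type scheme, illustrative notation): for a stationary structural VAR $A(L)\,y_t = B\,u_t$ with orthonormal structural shocks $u_t$ and reduced-form error covariance $\Sigma = BB'$, the long-run impact matrix is

$$ \Xi = A(1)^{-1} B , $$

and identification is achieved by restricting elements of $\Xi$ to zero, for instance taking $\Xi$ lower triangular as the Cholesky factor of the long-run covariance $A(1)^{-1}\Sigma\,A(1)^{-1\prime}$. The paper studies how such restrictions interact with fractional integration orders and why forcing integer orders can distort the implied impulse responses.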
  • Duopoly in the Japanese Airline Market: Bayesian Estimation for the Entry Game
Date: 2010-10
By: Shinya Sugawara (Graduate School of Economics, University of Tokyo)
Yasuhiro Omori (Faculty of Economics, University of Tokyo)
URL: http://d.repec.org/n?u=RePEc:tky:fseres:2010cf763&r=ecm
This paper provides an econometric analysis of a duopoly game in the Japanese domestic airline market. We establish a novel Bayesian estimation approach for the entry game, which is free from the conventional identification problem (sketched below) and thus allows the incorporation of flexible inference techniques. We find asymmetric strategic interactions between Japanese firms, which implies that competition is still influenced by the former regulatory regime. Furthermore, our prediction analysis indicates that the new Shizuoka airport will suffer from a lack of demand.
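The identification problem alluded to can be sketched with the textbook two-firm entry game (illustrative notation, not the authors' exact specification): firm $i \in \{1,2\}$ enters ($a_i = 1$) if and only if its latent profit is non-negative,

$$ \pi_i = x_i'\beta_i + \delta_i\, a_{-i} + \varepsilon_i \ge 0, \qquad \delta_i < 0, $$

so that for some realizations of $(\varepsilon_1, \varepsilon_2)$ both "only firm 1 enters" and "only firm 2 enters" are Nash equilibria. Without further assumptions the likelihood of the observed outcome is then not uniquely determined, which is the multiplicity problem the proposed Bayesian approach is designed to circumvent.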
  • Independence Tests based on Symbolic Dynamics
Date: 2010-09-15
By: Helmut Elsinger (Economic Studies Division, Oesterreichische Nationalbank)
URL: http://d.repec.org/n?u=RePEc:onb:oenbwp:165&r=ecm
New methods to test whether a time series is i.i.d. have been proposed in a recent series of papers (Matilla-García [2007], Matilla-García and Marín [2008], Matilla-García and Marín [2009], and Matilla-García et al. [2010]). The main idea is to map m-histories of a time series onto elements of the symmetric group. The observed frequencies of the different elements are then used to detect dependencies in the original series. The author demonstrates that the results presented in these papers do not hold in the stated generality. Moreover, simulation results indicate that the performance of the original tests is not as good as claimed.
Keywords: Independence Tests, Symbolic Dynamics, Permutation Entropy
JEL: C12, C52
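A minimal sketch of the symbolization step underlying these tests, in a deliberately simplified form (non-overlapping m-histories and a plain multinomial goodness-of-fit statistic); the published tests use related but more elaborate statistics, which is precisely where the paper locates the problems:

```python
import numpy as np
from itertools import permutations
from math import factorial
from scipy.stats import chi2

def ordinal_pattern_counts(x, m=3):
    """Map non-overlapping m-histories of the series onto ordinal patterns
    (elements of the symmetric group S_m) and count each pattern."""
    x = np.asarray(x)
    counts = {p: 0 for p in permutations(range(m))}
    for b in range(len(x) // m):
        block = x[b * m:(b + 1) * m]
        counts[tuple(int(i) for i in np.argsort(block))] += 1
    return counts, len(x) // m

def iid_pattern_test(x, m=3):
    """Under the i.i.d. null all m! patterns are equally likely, so the counts
    can be compared with a uniform multinomial: chi-square with m! - 1 df."""
    counts, n_blocks = ordinal_pattern_counts(x, m)
    expected = n_blocks / factorial(m)
    stat = sum((c - expected) ** 2 / expected for c in counts.values())
    return stat, chi2.sf(stat, df=factorial(m) - 1)

# An i.i.d. series should not be rejected; a strongly dependent one should.
rng = np.random.default_rng(1)
print(iid_pattern_test(rng.standard_normal(3000), m=3))
```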
  • Forecasting with many predictors – Is boosting a viable alternative?
Date: 2010-09-06
By: Buchen, Teresa
Wohlrabe, Klaus
URL: http://d.repec.org/n?u=RePEc:lmu:muenec:11788&r=ecm
This paper evaluates the forecast performance of boosting, a variable selection device, and compares it with the forecast combination schemes and dynamic factor models presented in Stock and Watson (2006). Using the same data set and comparison methodology, we find that boosting is a serious competitor for forecasting US industrial production growth in the short run and that it performs best in the longer run.
Keywords: Forecasting; Boosting; Cross-validation
JEL: C53
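A minimal sketch of componentwise L2-boosting for a forecasting regression, with an assumed learning rate nu and number of steps; this is the generic algorithm, not necessarily the exact variant or tuning used in the paper, which selects the stopping point by cross-validation:

```python
import numpy as np

def l2_boost(X, y, n_steps=200, nu=0.1):
    """Componentwise L2-boosting: at each step, regress the current residuals
    on every candidate predictor separately, keep the best-fitting one, and
    move a small step (learning rate nu) towards that simple fit."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean = X.mean(axis=0)
    Xc = X - x_mean                      # centred predictors
    coef = np.zeros(X.shape[1])
    intercept = y.mean()
    resid = y - intercept
    for _ in range(n_steps):
        slopes = Xc.T @ resid / (Xc ** 2).sum(axis=0)        # LS slope per predictor
        ssr = ((resid[:, None] - Xc * slopes) ** 2).sum(axis=0)
        j = int(np.argmin(ssr))                              # best single predictor
        coef[j] += nu * slopes[j]
        resid -= nu * slopes[j] * Xc[:, j]
    return intercept, coef, x_mean

# Forecast for new observations X_new: intercept + (X_new - x_mean) @ coef
```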
