December 5th, 2018
Dr. Tijana Levajković, TU Wien.
The Stochastic Linear Quadratic Regulator Problem.

Abstract: We consider the stochastic linear quadratic regulator (SLQR) problem in infinite dimensions. This is an optimal control problem governed by a linear stochastic equation, with a quadratic cost functional that has to be minimized. Such problems arise naturally in science and engineering. We show that the optimal control is given in feedback form in terms of a Riccati equation. We also present a detailed study of the numerical approximation of this problem, in particular the convergence of Riccati operators. In addition, we provide a novel framework for solving this problem using a polynomial chaos expansion approach in the framework of white noise analysis. Finally, we present some numerical experiments.
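For reference only, a minimal finite-dimensional sketch of the Riccati-based feedback law mentioned in the abstract can be written with SciPy's algebraic Riccati solver; the infinite-dimensional stochastic setting and the chaos-expansion framework of the talk are of course not captured by this toy example, and the matrices are arbitrary.

```python
# Minimal finite-dimensional LQR sketch: the optimal control is u = -K x with
# K obtained from the algebraic Riccati equation. Toy matrices, illustration only.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # toy system dx = (A x + B u) dt
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state weight in the quadratic cost
R = np.array([[1.0]])                      # control weight

P = solve_continuous_are(A, B, Q, R)       # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # feedback gain: u = -K x
print("Riccati solution P:\n", P)
print("feedback gain K:", K)
```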

November 28th, 2018
Prof. Siegfried Hörmann, TU Graz.
Large Sample Distribution for Fully Functional Periodicity Tests.

Abstract: Periodicity is one of the most important characteristics of time series, and tests for periodicity go back to the very origins of the field. We consider the two situations where the potential period of a functional time series (FTS) is known and where it is unknown. For both problems we develop fully functional tests and work out the asymptotic distributions. When the period is known we allow for dependent noise and show that our test statistic is equivalent to the functional ANOVA statistic. The limiting distribution has an interesting form and can be written as a sum of independent hypoexponential variables whose parameters are eigenvalues of the spectral density operator of the FTS. 

When the period is unknown, our test statistic is based on the maximal norm of the functional periodogram over fundamental frequencies. The limiting distribution of this object is rather delicate: it requires a central limit theorem for vectors of functional data in which the number of components increases in proportion to the sample size.

The talk is based on joint work with Piotr Kokoszka (Colorado State University), Gilles Nisol (ULB), and Clément Cerovecki (ULB).
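As a rough numerical illustration of the unknown-period statistic described above, the following sketch computes the maximal norm of a discretised functional periodogram over the Fourier frequencies for a simulated functional time series; discretisation, scaling and data are my own choices and need not match the talk's exact statistic.

```python
# Toy functional time series (n curves on p grid points) with period 7, and the
# maximal L2-norm of its discretised periodogram over fundamental frequencies.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
t = np.linspace(0, 1, p)
period = 7
signal = np.sin(2 * np.pi * np.arange(n)[:, None] / period) * np.sin(np.pi * t)
X = signal + rng.normal(size=(n, p))            # functional time series, shape (n, p)

X = X - X.mean(axis=0)                          # centre the curves
dft = np.fft.fft(X, axis=0)                     # DFT over time, per grid point
freqs = np.arange(1, n // 2 + 1)                # fundamental frequencies 2*pi*j/n
periodogram = (np.abs(dft[freqs]) ** 2).mean(axis=1) / (2 * np.pi * n)  # grid-averaged norm
j_max = freqs[np.argmax(periodogram)]
print("max periodogram norm:", periodogram.max(),
      "at frequency index", j_max, "implied period ~", n / j_max)
```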

November 14th, 2018
Prof. Sylvia Frühwirth-Schnatter, WU Wien.
Sparse Finite Mixtures for Model-Based Clustering.

Abstract: The talk reviews the concept of sparse finite mixture modelling and its application to model-based clustering of data. Sparse finite mixture models operate within a Bayesian framework and rely on a shrinkage prior on the weight distribution that removes all redundant components automatically. This leads to a trans-dimensional approach for selecting the number of clusters that is easily implemented via Markov chain Monte Carlo (MCMC) methods. The talk first discusses sparse finite mixtures of Gaussian distributions as a special case. This concept is then extended in several directions, including sparse finite mixtures in which the component densities themselves are estimated semi-parametrically through mixtures of Gaussian distributions. Illustrative examples show that the framework of sparse finite mixtures also works for non-Gaussian components such as skew distributions or latent class models. The talk concludes with a comparison between sparse finite mixtures and Dirichlet process mixtures.
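Not the MCMC approach of the talk, but the emptying-out effect of a small symmetric Dirichlet concentration on the weights can be seen quickly with scikit-learn's variational BayesianGaussianMixture; the data, the deliberately overfitted number of components, and the prior value below are arbitrary.

```python
# Variational analogue of a sparse finite mixture: a small Dirichlet weight
# concentration empties redundant components, leaving roughly the true clusters.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (200, 2)),
               rng.normal(3, 1, (200, 2)),
               rng.normal([3, -3], 1, (200, 2))])        # 3 true clusters

bgm = BayesianGaussianMixture(
    n_components=10,                                     # deliberately overfitting K
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=0.01,                     # strong shrinkage of the weights
    max_iter=500, random_state=0).fit(X)

labels = bgm.predict(X)
print("non-empty clusters:", np.unique(labels).size)
print("fitted weights:", np.round(bgm.weights_, 3))
```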

The talk is based on recent work with Gertraud Malsiner-Walli and Bettina Grün (2016, 2017, 2018):

Malsiner-Walli, Gertraud, Sylvia Frühwirth-Schnatter and Bettina Grün (2016): Model-based clustering based on sparse finite Gaussian mixtures, Statistics and Computing, 26, 303-324.

Malsiner-Walli, Gertraud, Sylvia Frühwirth-Schnatter and Bettina Grün (2017): Identifying mixtures of mixtures using Bayesian estimation, Journal of Computational and Graphical Statistics, 26, 285-295.

Frühwirth-Schnatter, Sylvia and Gertraud Malsiner-Walli (2018): From here to infinity - sparse finite versus Dirichlet process mixtures in model-based clustering, Advances in Data Analysis and Classification, forthcoming (arXiv:1706.07194v2).

November 13th, 2018
Prof. Marc Hallin, Université libre de Bruxelles.
Center-Outward Distribution Functions, Quantiles, Ranks, and Signs in R^d: A Measure Transportation Approach.

Abstract: Unlike the real line, the d-dimensional space R^d, for d ≥ 2, is not canonically ordered. As a consequence, such fundamental and strongly order-related univariate concepts as quantile and distribution functions, and their empirical counterparts, involving ranks and signs, do not canonically extend to the multivariate context. Palliating that lack of a canonical ordering has remained an open problem for more than half a century, and has generated an abundant literature, motivating, among others, the development of statistical depth and copula-based methods. We show that, unlike the many definitions that have been proposed in the literature, the measure transportation-based ones introduced in Chernozhukov, Galichon, Hallin and Henry (2017) enjoy all the properties (distribution-freeness and the maximal invariance property that entails preservation of semiparametric efficiency) that make univariate quantiles and ranks successful tools for semiparametric statistical inference. We therefore propose a new center-outward definition of multivariate distribution and quantile functions, along with their empirical counterparts, for which we establish a Glivenko-Cantelli result, the quintessential property of all distribution functions. Our approach, based on results by McCann (1995), is geometric rather than analytical and, contrary to the Monge-Kantorovich one in Chernozhukov et al. (2017) (which assumes compact supports, hence finite moments of all orders), does not require any moment assumptions. The resulting ranks and signs are shown to be strictly distribution-free, and maximal invariant under the action of a data-driven class of (order-preserving) transformations generating the family of absolutely continuous distributions; that maximal invariance, in view of a general result by Hallin and Werker (2003), is the theoretical foundation of the semiparametric efficiency preservation property of ranks. The corresponding quantiles are equivariant under the same transformations.
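For intuition only, a rough two-dimensional sketch of the empirical center-outward construction can be obtained by optimally coupling the sample to a regular grid on the unit disk; the grid below and the factorisation of the sample size are my own simplifications, not the paper's exact recipe.

```python
# Couple n data points to n grid points on the unit disk by minimising total
# squared transport cost; the assigned grid point plays the role of the
# empirical center-outward distribution function, its radius the rank, its
# direction the sign.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
nR, nS = 10, 20                                   # rings and directions, n = nR * nS
n = nR * nS
X = rng.multivariate_normal([0, 0], [[2, 1], [1, 1]], size=n)

radii = np.arange(1, nR + 1) / (nR + 1)
angles = 2 * np.pi * np.arange(nS) / nS
grid = np.array([[r * np.cos(a), r * np.sin(a)] for r in radii for a in angles])

cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)   # squared distances
rows, cols = linear_sum_assignment(cost)                       # optimal coupling
F_hat = grid[cols]                                 # empirical center-outward d.f. at X
ranks = np.linalg.norm(F_hat, axis=1)              # center-outward ranks (radii)
signs = F_hat / ranks[:, None]                     # center-outward signs (directions)
print("first five ranks:", np.round(ranks[:5], 3))
```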

November 2nd, 2018
Prof. Dimitris Politis, University of California, San Diego.
Predictive Inference for Locally Stationary Time Series.

Abstract: The Model-free Prediction Principle of Politis (2015) has been successfully applied to general regression problems, as well as to problems involving stationary time series. However, with long time series, e.g. annual temperature measurements spanning over 100 years or daily financial returns spanning several years, it may be unrealistic to assume stationarity throughout the span of the dataset. In the paper at hand, we show how Model-free Prediction can be applied to handle time series that are only locally stationary, i.e., that can be assumed to be stationary only over short time windows. Surprisingly, there is little literature on point prediction for general locally stationary time series even in model-based setups, and there is no literature on the construction of prediction intervals for locally stationary time series. We attempt to fill this gap here as well. Both one-step-ahead point predictors and prediction intervals are constructed, and the performance of model-free prediction is compared to that of model-based prediction using models that incorporate a trend and/or heteroscedasticity. Both aspects, model-free and model-based, are novel in the context of time series that are locally (but not globally) stationary.

Joint work with Srinjoy Das.
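For contrast only, the following naive local-window sketch shows what "using only a short recent window" means for a one-step-ahead point prediction and a crude prediction interval; it is not the Model-free Prediction method of the talk, and the data-generating process and window length are arbitrary.

```python
# Locally stationary toy series (slowly varying AR(1) coefficient) and a naive
# local-window one-step-ahead predictor with an empirical residual interval.
import numpy as np

rng = np.random.default_rng(3)
n, b = 1000, 100                                   # series length, local window
phi = np.linspace(0.2, 0.8, n)                     # slowly varying AR(1) coefficient
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi[t] * y[t - 1] + rng.normal()

yw = y[-b:]                                        # use only the last b observations
phi_hat = np.dot(yw[1:], yw[:-1]) / np.dot(yw[:-1], yw[:-1])   # local AR(1) fit
point = phi_hat * y[-1]
resid = yw[1:] - phi_hat * yw[:-1]                 # local one-step residuals
lo, hi = point + np.quantile(resid, [0.05, 0.95])  # crude 90% prediction interval
print(f"point prediction {point:.3f}, interval [{lo:.3f}, {hi:.3f}]")
```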

October 24th, 2018
Prof. Klaus Neusser, University of Bern.
Time-Varying Rational Expectations Models.

Abstract: While rational expectations models with time-varying (random) coefficients have gained some attention, the understanding of their dynamic properties is still in its infancy. The paper adapts results from the theory of random dynamical systems to solve and analyze the stability of rational expectations models with time-varying coefficients. Based on the Multiplicative Ergodic Theorem, it develops a "linear algebra" in terms of Lyapunov exponents, defined as the asymptotic growth rates of trajectories. These replace the eigenvalue analysis used in constant-coefficient models and allow the construction of solutions in the spirit of Blanchard and Kahn (1980). The usefulness of these methods and their numerical implementation is illustrated using a canonical New Keynesian model with a time-varying policy rule.
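The role of the Lyapunov exponent can be illustrated with a toy computation: the top exponent of a product of random coefficient matrices, estimated by averaging log growth rates along a trajectory as suggested by the Multiplicative Ergodic Theorem. The matrices below are arbitrary and have nothing to do with the New Keynesian model of the talk.

```python
# Estimate the top Lyapunov exponent of a random matrix product by iterating a
# unit vector and averaging the log of the growth factor (renormalising each step).
import numpy as np

rng = np.random.default_rng(4)
T, d = 50_000, 2
v = np.ones(d) / np.sqrt(d)
log_growth = 0.0
for _ in range(T):
    A = np.array([[0.9, 0.1], [0.0, 0.5]]) + 0.1 * rng.normal(size=(d, d))  # random coefficients
    v = A @ v
    norm = np.linalg.norm(v)
    log_growth += np.log(norm)
    v /= norm                                   # renormalise to avoid overflow
print("estimated top Lyapunov exponent:", log_growth / T)   # negative => stable
```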

June 20th, 2018
Dr. David Preinerstorfer, Université libre de Bruxelles.
Power in High-Dimensional Testing Problems.

Abstract: Fan et al. (2015) recently introduced a remarkable method for increasing the asymptotic power of tests in high-dimensional testing problems. If applicable to a given test, their power enhancement principle leads to an improved test that has the same asymptotic size, uniformly non-inferior asymptotic power, and is consistent against a strictly broader range of alternatives than the initially given test. We study under which conditions this method can be applied and show the following: In asymptotic regimes where the dimensionality of the parameter space is fixed as the sample size increases, there often exist tests that cannot be further improved by the power enhancement principle. When the dimensionality can increase with the sample size, however, there typically is a range of "slowly" diverging rates for which every test with asymptotic size smaller than one can be improved by the power enhancement principle. While the latter statement in general does not extend to all rates at which the dimensionality increases with the sample size, we give sufficient conditions under which this is the case.

(https://arxiv.org/abs/1709.04418) Joint work with Anders Bredahl Kock, University of Oxford.
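A stylised sketch of the power enhancement construction in the spirit of Fan, Liao and Yao (2015): add to a base statistic a nonnegative screening term that vanishes with probability tending to one under the null but diverges under sparse strong alternatives. The threshold and the choice of statistics below are illustrative only, not those of the talk.

```python
# Power-enhanced statistic = base quadratic-form statistic + screening component.
import numpy as np

def power_enhanced_statistic(z):
    """z: vector of (approximately standard normal) test statistics, one per coordinate."""
    p = z.size
    base = z @ z                                             # base quadratic-form statistic
    delta = np.sqrt(2.0 * np.log(p) * np.log(np.log(p)))     # high screening threshold
    screen = np.sum(z ** 2 * (np.abs(z) > delta))            # zero w.h.p. under the null
    return base + screen

rng = np.random.default_rng(5)
z_null = rng.normal(size=500)
z_alt = z_null.copy()
z_alt[:3] += 8.0                                             # sparse, strong alternative
print("enhanced statistic under the null:       ", round(float(power_enhanced_statistic(z_null)), 1))
print("enhanced statistic under the alternative:", round(float(power_enhanced_statistic(z_alt)), 1))
```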

May 23rd, 2018
Dr. Gregor Kastner, WU Wien.
Sparse Bayesian Vector Autoregressions in Huge Dimensions.

Abstract: We develop a Bayesian vector autoregressive (VAR) model with multivariate stochastic volatility that is capable of handling vast dimensional information sets. Three features are introduced to permit reliable estimation of the model. First, we assume that the reduced-form errors in the VAR feature a factor stochastic volatility structure, allowing for conditional equation-by-equation estimation. Second, we apply recently developed global-local shrinkage priors to the VAR coefficients to cure the curse of dimensionality. Third, we utilize recent innovations to efficiently sample from high-dimensional multivariate Gaussian distributions. This makes simulation-based fully Bayesian inference feasible when the dimensionality is large but the time series length is moderate. We demonstrate the merits of our approach in an extensive simulation study and apply the model to US macroeconomic data to evaluate its forecasting capabilities.

(joint with Florian Huber, Department of Economics, WU)
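One example of the kind of trick used to sample high-dimensional Gaussians in conjugate regression steps is the algorithm of Bhattacharya, Chakraborty and Mallick (2016, Biometrika); whether this is the exact variant used in the paper is my assumption, and the sketch below treats a single equation with unit error variance and a fixed diagonal prior standing in for the global-local shrinkage prior.

```python
# Sample beta ~ N(A^{-1} Phi' y, A^{-1}) with A = Phi'Phi + D^{-1} at O(n^2 p)
# cost (n << p) instead of O(p^3), by solving an n x n system.
import numpy as np

def fast_gaussian_sample(Phi, y, d, rng):
    """Phi: (n, p) regressors, y: (n,) response, d: (p,) prior variances (diagonal D)."""
    n, p = Phi.shape
    u = rng.normal(size=p) * np.sqrt(d)
    delta = rng.normal(size=n)
    v = Phi @ u + delta
    M = (Phi * d) @ Phi.T + np.eye(n)      # Phi D Phi' + I_n  (small n x n matrix)
    w = np.linalg.solve(M, y - v)
    return u + d * (Phi.T @ w)

rng = np.random.default_rng(6)
n, p = 50, 2000
Phi = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:5] = 2.0
y = Phi @ beta_true + rng.normal(size=n)
d = np.full(p, 0.1)                         # stand-in for global-local prior variances
draw = fast_gaussian_sample(Phi, y, d, rng)
print("posterior draw, first 8 coefficients:", np.round(draw[:8], 2))
```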

May 18th, 2018
Prof. Francis X. Diebold, University of Pennsylvania.
Egalitarian LASSO for Combining Economic Forecasts.

Abstract: Despite the clear success of forecast combination in many economic environments, several important issues remain incompletely resolved. The issues relate to the selection of the set of forecasts to combine, and to whether some form of additional regularization (e.g., shrinkage) is desirable. Against this background, and also considering the frequently-found good performance of simple-average combinations, we propose a LASSO-based procedure that sets some combining weights to zero and shrinks the survivors toward equality. An ex-post calibration reveals that the optimal solution has a strikingly simple form: the vast majority of forecasters should simply be discarded, and the remainder should be averaged. We therefore propose and explore direct subset-averaging procedures motivated by the structure of egalitarian LASSO and the lessons learned, which, unlike LASSO, do not require the choice of a tuning parameter. Intriguingly, in an application to the European Central Bank Survey of Professional Forecasters, our procedures outperform simple averages and perform approximately as well as the ex-post best forecaster.
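The "shrink the combining weights toward equality" idea can be sketched with a simple reparameterisation, w = 1/K + delta with an L1 penalty on delta, so that zeroed entries of delta correspond to exact equal weighting. This toy version is not the partially-egalitarian procedure of the talk, and the simulated forecasts and penalty level are arbitrary.

```python
# Egalitarian-style shrinkage via reparameterisation: regress the residual of the
# simple average on the forecasts with a plain LASSO (no intercept).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
T, K = 200, 12                                     # periods, forecasters
target = rng.normal(size=T)
F = target[:, None] + rng.normal(scale=np.linspace(0.5, 3.0, K), size=(T, K))  # forecasts

y_tilde = target - F.mean(axis=1)                  # residual of the simple average
delta = Lasso(alpha=0.05, fit_intercept=False).fit(F, y_tilde).coef_
w = 1.0 / K + delta                                # combining weights
print("weights differing from 1/K:", int(np.sum(np.abs(delta) > 1e-8)))
print(np.round(w, 3))
```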

May 9th, 2018
Prof. Bernard Hanzon, University College Cork.
State Space Cointegration Models: Error Correction Specification, Maximum Likelihood Estimation and Empirics.

Abstract: The concept of cointegration is a standard one for financial and economic time series. It was first developed by Granger and Engle (Nobel prize). Cointegration occurs when two or more scalar time series are each nonstationary but a static linear combination exists which is stationary.  This is interpreted as the effect of economic forces which steers the variables in the direction of an equilibrium relation. Estimation of cointegrated models is by now standard for VAR (vector autoregressive) models, due to Johansen i.a. However, VAR models have some drawbacks. In case the maximum time lag is fixed (say a week) and the frequency of the observations is increased then the number of parameters to be estimated increases quadratically. Therefore at some frequency the number of parameters will be larger than the number of observations and ordinary regression techniques for estimation will break down. An alternative is available in the form of the linear state space model. Here we report on joint work with Dr Th. Ribarits and M. Alqurashi on (1) specification of a cointegrated state space model in so-called error-correction form; (2) partial analytical solution of the resulting maximum likelihood estimation (MLE) problem; (3) parametrization and numerical solution of the remaining likelihood optimization problem. If time permits we hope to give an example of a (daily) data set that exhibits cointegration if modelled by the state space model, but for which the VAR model fails to do so.
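For reference, the standard VAR/VECM route mentioned in the abstract (Johansen's test) applied to a simulated cointegrated pair looks as follows; the talk's state-space error-correction approach is not implemented here, and the data are a toy example.

```python
# Simulate two series sharing a stochastic trend and run Johansen's trace test.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(8)
n = 500
common = np.cumsum(rng.normal(size=n))             # shared stochastic trend
y1 = common + rng.normal(scale=0.5, size=n)
y2 = 0.8 * common + rng.normal(scale=0.5, size=n)  # y2 - 0.8*y1 is stationary
data = np.column_stack([y1, y2])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(res.lr1, 2))
print("5% critical values:", res.cvt[:, 1])        # cvt columns: 90%, 95%, 99%
```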

March 22nd, 2018
Prof. David E. Tyler, Rutgers University.
Lassoing Eigenvalues.

Abstract: Penalized likelihood approaches for estimating covariance matrices are studied. The properties of such penalized approaches depend on the particular choice of the penalty function. In this talk, we introduce a class of non-smooth penalty functions for covariance matrices, and demonstrate how the corresponding penalized likelihood method leads to a grouping of the eigenvalues. We refer to this method as lassoing eigenvalues, or as the elasso. A particularly promising member of this class of non-smooth penalties arises from an application of the Marchenko-Pastur law.

The elasso in itself is not robust since it is based on the sample covariance matrix. Two possible approaches to making the elasso more robust are considered. The first approach is simply to use a robust plug-in method, derived by replacing the sample covariance matrix with a robust estimate of scatter. The pluses and minuses of such a plug-in method are discussed. The second approach is to use penalized M-estimators of covariance matrices. Both the M-estimators and the elasso penalty function have the property of being geodesically convex, and hence the corresponding penalized M-estimators have unique solutions. Finally, we present a simple re-weighted algorithm for computing the penalized M-estimators which always converges to the correct solution.

This work is joint with Mengxi Yi, a graduate student at Rutgers University.
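In generic notation of my own (the talk's specific non-smooth, eigenvalue-grouping penalty class is not reproduced), the penalised Gaussian likelihood problem behind such estimators can be written as follows.

```latex
% Generic penalised Gaussian likelihood for a covariance matrix \Sigma, with
% sample covariance S, penalty P and tuning parameter \lambda; notation mine.
\[
  \widehat{\Sigma}_\lambda
  \;=\; \arg\min_{\Sigma \succ 0}\;
  \Big\{ \log\det\Sigma \;+\; \operatorname{tr}\!\big(S\,\Sigma^{-1}\big)
         \;+\; \lambda\, P(\Sigma) \Big\}.
\]
% For penalties that depend on \Sigma only through its eigenvalues, the problem
% typically reduces to one over the eigenvalues of S alone, which is where the
% grouping effect of a non-smooth penalty acts.
```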

January 17th, 2018
Prof. Piercesare Secchi, Politecnico di Milano.
Random Domain Decomposition for Kriging Non-Stationary Object Data.

Abstract: The analysis of complex data distributed over large or highly textured regions poses new challenges for spatial statistics. Available methods usually rely on global assumptions about the stationarity of the field generating the data and are unsuitable for large, textured or convoluted spatial domains with holes or barriers. We here propose a novel approach to spatial prediction which copes with the complexities of both the data and the domain through iterative random domain decompositions. The method is general and apt for the analysis of different types of object data. A case study on the analysis and spatial prediction of density data relevant to the study of dissolved oxygen depletion in the Chesapeake Bay (US) will illustrate the potential of the novel approach. This is joint work with Giorgia Gaetani and Alessandra Menafoglio.
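A very rough sketch of the random-domain-decomposition idea: repeatedly partition the domain at random, fit a local predictor in each cell, and average predictions across repetitions. An off-the-shelf Gaussian process stands in for the object-data kriging of the talk, and all tuning choices are arbitrary.

```python
# Iterated random decompositions via nearest random centres, local GP fits per
# cell, and averaging of the resulting predictions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)
sites = rng.uniform(0, 10, size=(300, 2))                    # observation locations
values = np.sin(sites[:, 0]) + 0.1 * rng.normal(size=300)    # toy scalar field
targets = rng.uniform(0, 10, size=(50, 2))                   # prediction locations

B, n_cells = 20, 5
pred, counts = np.zeros(len(targets)), np.zeros(len(targets))
for _ in range(B):
    centres = rng.uniform(0, 10, size=(n_cells, 2))          # one random decomposition
    nearest = lambda x: np.argmin(((x[:, None, :] - centres) ** 2).sum(-1), axis=1)
    obs_cell, tgt_cell = nearest(sites), nearest(targets)
    for c in range(n_cells):
        obs_in, tgt_in = obs_cell == c, tgt_cell == c
        if obs_in.sum() < 5 or tgt_in.sum() == 0:
            continue                                         # skip degenerate cells
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
        gp.fit(sites[obs_in], values[obs_in])
        pred[tgt_in] += gp.predict(targets[tgt_in])
        counts[tgt_in] += 1
pred /= np.maximum(counts, 1)                                # average over decompositions
print("first five averaged predictions:", np.round(pred[:5], 3))
```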

January 10th, 2018
Prof. Marco Lippi, Einaudi Institute for Economics and Finance, Rome.
A Survey of Dynamic Factor Models and Applications: Forecasting, Structural Models, Aggregation.

Abstract: High-Dimensional Dynamic Factor Models are usually motivated by their use in forecasting. The presentation will emphasize different motivations and applications:
(i) structural analysis of macroeconomic time series,
(ii) aggregation and macroeconomic relationships.
Time permitting, I will also review the results obtained in the literature on High-Dimensional Dynamic Factor Models over the last fifteen years: representation theorems and estimation theory, both with and without the finite-dimension assumption.
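As background for the estimation theory mentioned above, the textbook principal-components estimator of a static approximate factor model, the workhorse of this literature, can be sketched as follows; the general dynamic case requires spectral-density-based methods not shown here, and the simulated panel is a toy example.

```python
# Principal-components estimation of an approximate static factor model X = F L' + e.
import numpy as np

rng = np.random.default_rng(10)
T, N, r = 300, 100, 2                                # time, cross-section, factors
F = rng.normal(size=(T, r))                          # true factors
L = rng.normal(size=(N, r))                          # true loadings
X = F @ L.T + rng.normal(scale=0.5, size=(T, N))     # high-dimensional panel

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]                        # estimated factors (up to rotation)
L_hat = Xc.T @ F_hat / T                             # estimated loadings
common = F_hat @ L_hat.T                             # estimated common component
print("R^2 of the fitted common component:",
      round(1 - ((Xc - common) ** 2).sum() / (Xc ** 2).sum(), 3))
```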