Abstract: Out-of-sample tests are widely used to evaluate and select among competing models' forecasts in economics and finance. These tests, however, typically rest on the assumption that the relative performance of competing models is constant over time, an assumption that is invalid in many practical applications. We propose a new two-step methodology designed specifically for forecast evaluation and selection in a world of changing relative performance. In the first step, we estimate the time-varying mean and variance of the series of forecast loss differences; in the second step, we use these estimates to compare and rank models in a changing world. We show that our tests have high power against a variety of fixed and local alternatives.
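The two-step idea can be sketched in code. The snippet below is a minimal illustration, not the paper's estimator: it assumes a simple rolling-window estimate of the time-varying mean and variance of the loss differences (step 1) and a pointwise standardized comparison to rank the two models over time (step 2). The window length, the rolling estimator, and the 1.96 threshold are all assumptions chosen for illustration.

```python
import numpy as np

def rolling_loss_diff_stats(loss_a, loss_b, window=20):
    """Step 1 (illustrative): rolling-window estimates of the time-varying
    mean and variance of the loss differences d_t = L_A(t) - L_B(t).
    A simple equal-weight rolling window is an assumption, not the
    paper's estimator."""
    d = np.asarray(loss_a) - np.asarray(loss_b)
    n = len(d)
    means = np.full(n, np.nan)
    variances = np.full(n, np.nan)
    for t in range(window - 1, n):
        seg = d[t - window + 1 : t + 1]
        means[t] = seg.mean()
        variances[t] = seg.var(ddof=1)
    return d, means, variances

def rank_models(means, variances, window):
    """Step 2 (illustrative): standardize each rolling mean and flag which
    model is favoured at each date. Returns -1 where model A has
    significantly lower loss, +1 where model B does, 0 otherwise."""
    t_stats = means / np.sqrt(variances / window)
    return np.where(t_stats > 1.96, 1, np.where(t_stats < -1.96, -1, 0))

# Hypothetical usage: model B is better in the first regime, model A in the second.
rng = np.random.default_rng(0)
loss_a = np.concatenate([np.full(100, 2.0), np.full(100, 0.5)]) + 0.01 * rng.standard_normal(200)
loss_b = np.full(200, 1.0) + 0.01 * rng.standard_normal(200)
d, m, v = rolling_loss_diff_stats(loss_a, loss_b, window=20)
ranks = rank_models(m, v, window=20)
```

A full-sample test would average over the whole series and conclude the models are roughly comparable; the rolling ranking instead detects that the preferred model switches across regimes, which is exactly the situation the proposed methodology targets.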