Here, an extended form of the Lorenz ’96 idealized model atmosphere is used to test whether more accurate forecasts could be produced by lowering numerical precision more at smaller spatial scales in order to increase the model resolution. Both a scale-dependent mixture of single- and half-precision – where numbers are represented with fewer bits of information on smaller spatial scales – and ‘stochastic processors’ – where random ‘bit-flips’ are allowed for small-scale variables – are emulated on conventional hardware. It is found that high-resolution parametrized models with scale-selective reduced precision yield better short-term and climatological forecasts than lower resolution parametrized models with conventional precision for a relatively small increase in computational cost. This suggests that a similar approach in real-world models could lead to more accurate and efficient weather and climate forecasts.
http://onlinelibrary.wiley.com/doi/10.1002/qj.2974/abstract
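The core trick behind emulating reduced precision on conventional hardware is simple enough to sketch: zero out trailing significand bits of IEEE double-precision numbers, keeping more bits for the large-scale variables than for the small-scale ones. The Python sketch below is only an illustration of that idea, not the code used in the study; the `reduce_precision` helper, the variable shapes and the bit counts are assumptions chosen for clarity (truncation is used instead of round-to-nearest, and the exponent range stays that of double precision).

```python
import numpy as np

def reduce_precision(x, sbits):
    """Emulate a float with `sbits` explicit significand bits by zeroing
    the trailing bits of IEEE double-precision numbers (truncation, not
    round-to-nearest; the exponent range is left untouched)."""
    bits = np.asarray(x, dtype=np.float64).view(np.uint64)
    mask = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - sbits)
    return (bits & mask).view(np.float64)

# Scale-selective precision: keep more significand bits for the
# large-scale variables X than for the small-scale variables Y.
# Shapes and bit counts below are illustrative, not those of the paper.
X = np.random.randn(8)        # large-scale variables
Y = np.random.randn(8, 32)    # small-scale variables
X_lp = reduce_precision(X, sbits=23)   # roughly single-precision significand
Y_lp = reduce_precision(Y, sbits=10)   # roughly half-precision significand
print(np.max(np.abs(X - X_lp)), np.max(np.abs(Y - Y_lp)))
```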
Single Precision in Weather Forecasting Models: An Evaluation with the IFS - http://journals.ametsoc.org/doi/abs/10.1175/MWR-D-16-0228.1
Earth’s climate is a nonlinear dynamical system with scale-dependent Lyapunov exponents. As such, an important theoretical question for modeling weather and climate is how much real information is carried in a model’s physical variables as a function of scale and variable type. Answering this question is of crucial practical importance given that the development of weather and climate models is strongly constrained by available supercomputer power. As a starting point for answering this question, the impact of limiting almost all real-number variables in the forecast mode of the ECMWF Integrated Forecast System (IFS) from 64 to 32 bits is investigated. Results for annual integrations and medium-range ensemble forecasts indicate no noticeable reduction in accuracy, and an average gain in computational efficiency of approximately 40%. This study provides the motivation for more scale-selective reductions in numerical precision.
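To get a rough feel for what the 64-to-32-bit change means for a model state, one can simply cast a field to 32-bit floats: the memory halves exactly, while the roughly 40% efficiency gain quoted above depends on the full IFS code and hardware and is not something this toy comparison reproduces. The grid size below is a placeholder.

```python
import numpy as np

# A toy "model state": one 3-D field on an arbitrary illustrative grid.
field64 = np.random.randn(50, 256, 512)      # double precision
field32 = field64.astype(np.float32)         # single precision

print(field64.nbytes / 1e6, "MB at 64 bits")
print(field32.nbytes / 1e6, "MB at 32 bits")  # exactly half the memory

# The round-trip error of the cast is at the level of single-precision
# machine epsilon (~1e-7) relative to the field values.
rel_err = np.max(np.abs(field64 - field32.astype(np.float64))
                 / np.maximum(np.abs(field64), 1e-30))
print("max relative cast error:", rel_err)
```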
Stochastic parameterization of subgrid-scale processes: A review of recent physically-based approaches - https://arxiv.org/abs/1701.04742
We review some recent methods of subgrid-scale parameterization used in the context of climate modeling. These methods are developed to take into account (subgrid) processes that play an important role in the correct representation of atmospheric and climate variability. We illustrate these methods on a simple stochastic triad system relevant for atmospheric and climate dynamics, and we show in particular that the stability properties of the underlying dynamics of the subgrid processes have a considerable impact on their performance.
Stochastic parameterization of subgrid-scale processes in coupled ocean-atmosphere systems: Benefits and limitations of response theory - https://arxiv.org/abs/1605.00461
A stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini [2012], is tested in the context of a low-order coupled ocean-atmosphere model for which part of the atmospheric modes are considered as unresolved. A natural separation of the phase space into an invariant set and its complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation and long-memory terms. In this case, the fluctuation term is an additive stochastic noise. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained, provided that the coupling is sufficiently weak. This approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts.
Multi-level Dynamical Systems: Connecting the Ruelle Response Theory and the Mori-Zwanzig Approach - https://arxiv.org/abs/1208.3080
In this paper we consider the problem of deriving approximate autonomous dynamics for a number of variables of a dynamical system which are weakly coupled to the remaining variables. In a previous paper we have used the Ruelle response theory on such a weakly coupled system to construct a surrogate dynamics, such that the expectation value of any observable agrees, up to second order in the coupling strength, with its expectation evaluated on the full dynamics. We show here that such surrogate dynamics agree up to second order with an expansion of the Mori-Zwanzig projected dynamics. This implies that the parametrizations of unresolved processes suited for prediction and for the representation of long-term statistical properties are closely related, if one takes into account, in addition to the widely adopted stochastic forcing, the often neglected memory effects.
Stochastic Climate Theory and Modelling - https://arxiv.org/abs/1409.0423
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations as well as for model error representation, uncertainty quantification, data assimilation and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modelling. In this review we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
Simulating weather regimes: impact of stochastic and perturbed parameter schemes in a simple atmospheric model - http://link.springer.com/article/10.1007%2Fs00382-014-2239-9
Representing model uncertainty is important for both numerical weather and climate prediction. Stochastic parametrisation schemes are commonly used for this purpose in weather prediction, while perturbed parameter approaches are widely used in the climate community. The performance of these two representations of model uncertainty is considered in the context of the idealised Lorenz ’96 system, in terms of their ability to capture the observed regime behaviour of the system. These results are applicable to the atmosphere, where evidence points to the existence of persistent weather regimes, and where it is desirable that climate models capture this regime behaviour. The stochastic parametrisation schemes considerably improve the representation of regimes when compared to a deterministic model: both the structure and persistence of the regimes are found to improve. The stochastic parametrisation scheme represents the small scale variability present in the full system, which enables the system to explore a larger portion of the system’s attractor, improving the simulated regime behaviour. It is important that temporally correlated noise is used in the stochastic parametrisation—white noise schemes performed similarly to the deterministic model. In contrast, the perturbed parameter ensemble was unable to capture the regime structure of the attractor, with many individual members exploring only one regime. This poor performance was not evident in other climate diagnostics. Finally, a ‘climate change’ experiment was performed, where a change in external forcing resulted in changes to the regime structure of the attractor. The temporally correlated stochastic schemes captured these changes well.
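As a rough illustration of the kind of setup compared in this study, the sketch below integrates the single-scale Lorenz ’96 equations with a simple polynomial closure for the unresolved scales plus temporally correlated (AR(1)) additive noise. The forcing, closure coefficients and noise parameters are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

K, F, dt = 8, 20.0, 0.005        # illustrative settings, not the paper's
b0, b1 = 0.3, -0.01              # toy closure coefficients (assumed)

def l96_tendency(x):
    """Single-scale Lorenz '96: dX_k/dt = X_{k-1}(X_{k+1} - X_{k-2}) - X_k + F."""
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def rhs(x, e):
    """Resolved tendency minus a toy closure U(X) = b0*X + b1*X**2 + red noise e."""
    return l96_tendency(x) - (b0 * x + b1 * x**2 + e)

def run(nsteps, phi=0.99, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = F * (1.0 + 0.01 * rng.standard_normal(K))
    e = np.zeros(K)
    traj = np.empty((nsteps, K))
    for n in range(nsteps):
        # RK4 step with the stochastic term held fixed over the step
        k1 = rhs(x, e)
        k2 = rhs(x + 0.5 * dt * k1, e)
        k3 = rhs(x + 0.5 * dt * k2, e)
        k4 = rhs(x + dt * k3, e)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        # AR(1) update of the additive noise in the closure
        e = phi * e + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(K)
        traj[n] = x
    return traj

traj = run(20000)
print("climatological mean / std:", traj.mean(), traj.std())
```

Setting phi = 0 recovers a white-noise scheme, and sigma = 0 the deterministic parametrisation, so the same sketch can be used to reproduce the qualitative comparison described above.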
Stochastic parametrization and model uncertainty - http://www.ecmwf.int/en/elibrary/11577-stochastic-parametrization-and-model-uncertainty
Stochastic parametrization provides a methodology for representing model uncertainty in ensemble forecasts, and also has the capability of reducing systematic error through the concept of nonlinear noise-induced rectification. The stochastically perturbed parametrization tendencies scheme and the stochastic backscatter scheme are described and their impact on medium-range forecast skill is discussed. The impact of these schemes on ensemble data assimilation and in seasonal forecasting is also considered. In all cases, the results are positive. Validation of the form of these stochastic parametrizations can be obtained from coarse-grained budgets of high-resolution (e.g. cloud-resolving) models; some results are shown. Stochastic parametrization has been pioneered at ECMWF over the last decade, and most operational centres now use stochastic parametrization in their operational ensemble prediction systems; these are briefly discussed. The seamless prediction paradigm implies that serious consideration should now be given to the use of stochastic parametrization in next-generation Earth System Models.
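The stochastically perturbed parametrization tendencies (SPPT) idea can be caricatured in a few lines: multiply the net parametrized tendency by (1 + r), where r is a random pattern correlated in time (and, in the operational scheme, in space). In the sketch below a scalar AR(1) process stands in for the spectral pattern generator, and all parameter values and array shapes are assumptions for illustration only.

```python
import numpy as np

def ar1_pattern(nsteps, tau=6.0, dt=0.5, sigma=0.5, seed=1):
    """Temporally correlated perturbation factor r_n with decorrelation
    time tau (same units as dt); a scalar stand-in for the spectral
    pattern used operationally."""
    phi = np.exp(-dt / tau)
    rng = np.random.default_rng(seed)
    r = np.zeros(nsteps)
    for n in range(1, nsteps):
        r[n] = phi * r[n - 1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
    return np.clip(r, -0.99, 0.99)   # keep the factor 1 + r positive

# Perturb a (fake) column of parametrized tendencies at each time step.
nsteps, nlev = 100, 10
r = ar1_pattern(nsteps)
tend_param = np.random.randn(nsteps, nlev)        # placeholder physics tendencies
tend_perturbed = (1.0 + r)[:, None] * tend_param  # SPPT-style multiplicative noise
```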
Rounding errors may be beneficial for simulations of atmospheric flow: results from the forced 1D Burgers equation - http://link.springer.com/article/10.1007%2Fs00162-015-0355-8
Inexact hardware can reduce computational cost, due to a reduced energy demand and an increase in performance, and can therefore allow higher-resolution simulations of the atmosphere within the same budget for computation. We investigate the use of emulated inexact hardware for a model of the randomly forced 1D Burgers equation with stochastic sub-grid-scale parametrisation. Results show that numerical precision can be reduced to only 12 bits in the significand of floating-point numbers—instead of 52 bits for double precision—with no serious degradation in results for all diagnostics considered. Simulations that use inexact hardware on a grid with higher spatial resolution show results that are significantly better compared to simulations in double precision on a coarser grid at similar estimated computing cost. In the second half of the paper, we compare the forcing due to rounding errors to the stochastic forcing of the stochastic parametrisation scheme that is used to represent sub-grid-scale variability in the standard model setup. We argue that stochastic forcings of stochastic parametrisation schemes can provide a first guess for the upper limit of the magnitude of rounding errors of inexact hardware that can be tolerated by model simulations and suggest that rounding errors can be hidden in the distribution of the stochastic forcing. We present an idealised model setup that replaces the expensive stochastic forcing of the stochastic parametrisation scheme with an engineered rounding error forcing and provides results of similar quality. The engineered rounding error forcing can be used to create a forecast ensemble of similar spread compared to an ensemble based on the stochastic forcing. We conclude that rounding errors are not necessarily degrading the quality of model simulations. Instead, they can be beneficial for the representation of sub-grid-scale variability.
On the use of programmable hardware and reduced numerical precision in earth-system modeling - https://www2.physics.ox.ac.uk/contacts/people/dueben/publications/568683
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
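A minimal version of the "use short runs to bracket usable precision" idea, under assumptions that differ from the study above (Lorenz '63 rather than the two-scale Lorenz '95 model, Euler time stepping, 50 steps, truncation rather than rounding), might look like the following: run a short integration at several emulated significand widths and compare each against the full-precision reference.

```python
import numpy as np

def truncate(x, sbits):
    """Emulate `sbits` explicit significand bits by zeroing trailing bits
    of double-precision numbers (same trick as in the earlier sketch)."""
    bits = np.asarray(x, dtype=np.float64).view(np.uint64)
    mask = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - sbits)
    return (bits & mask).view(np.float64)

def lorenz63_step(state, dt=0.01, sbits=52):
    """One Euler step of Lorenz '63, truncating the stored state."""
    x, y, z = state
    dx, dy, dz = 10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z
    new = np.array([x + dt * dx, y + dt * dy, z + dt * dz])
    return truncate(new, sbits)

def short_run_error(sbits, nsteps=50):
    ref = state = np.array([1.0, 1.0, 1.0])
    for _ in range(nsteps):
        ref = lorenz63_step(ref, sbits=52)          # full-precision reference
        state = lorenz63_step(state, sbits=sbits)   # reduced-precision run
    return np.max(np.abs(ref - state))

for sbits in (52, 23, 16, 12, 8, 6):
    print(sbits, "significand bits -> error after 50 steps:", short_run_error(sbits))
```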
Ten Years of Building Broken Chips: The Physics and Engineering of Inexact Computing - http://dl.acm.org/citation.cfm?id=2465789
Well over a decade ago, many believed that an engine of growth driving the semiconductor and computing industries---captured nicely by Gordon Moore’s remarkable prophecy (Moore’s law)---was speeding towards a dangerous cliff-edge. Ranging from expressions of concern to doomsday scenarios, the exact time when serious hurdles would beset us varied quite a bit, with some of the more optimistic warnings giving Moore’s law somewhat longer. Needless to say, a lot of people have spent time and effort with great success to find ways of substantially extending the time when we would encounter the dreaded cliff-edge, if not avoiding it altogether. Faced with this issue, we started approaching it in a decidedly different manner---one which suggested falling off the metaphorical cliff as a design choice, but in a controlled way. This resulted in devices that could switch and produce bits that are correct, namely of having the intended value, only with a probabilistic guarantee. The results could therefore in fact be incorrect. Such devices and associated circuits and computing structures are now broadly referred to as inexact designs, circuits, and architectures. In this article, we will crystallize the essence of inexactness dating back to 2002 through two key principles that we developed: (i) that of admitting error in a design in return for resource savings, and subsequently (ii) making resource investments in the elements of a hardware platform proportional to the value of the information they compute. We will also give a broad overview of a range of inexact designs and hardware concepts that our group and other groups around the world have been developing since, based on these two principles. Despite not being deterministically precise, inexact designs can be significantly more efficient in the energy they consume, their speed of execution, and their area needs, which makes them attractive in application contexts that are resilient to error. Significantly, our development of inexactness will be contrasted against the rich backdrop of traditional approaches aimed at realizing reliable computing from unreliable elements, starting with von Neumann’s influential lectures and further developed by Shannon-Weaver and others.
Inexactness and a future of computing - http://rsta.royalsocietypublishing.org/content/372/2018/20130281
As pressures, notably from energy consumption, start impeding the growth and scale of computing systems, inevitably, designers and users are increasingly considering the prospect of trading accuracy or exactness. This paper is a perspective on the progress in embracing this somewhat unusual philosophy of innovating computing systems that are designed to be inexact or approximate, in the interests of realizing extreme efficiencies. With our own experience in designing inexact physical systems including hardware as a backdrop, we speculate on the rich potential for considering inexactness as a broad emerging theme if not an entire domain for investigation for exciting research and innovation. If this emerging trend to pursuing inexactness persists and grows, then we anticipate an increasing need to consider system co-design where application domain characteristics and technology features interplay in an active manner. A noteworthy early example of this approach is our own excursion into tailoring and hence co-designing floating point arithmetic units guided by the needs of stochastic climate models. This approach requires a unified effort between software and hardware designers that does away with the normal clean abstraction layers between the two.
rpe - An emulator for reduced floating-point precision written in Fortran.
https://github.com/aopp-pred/rpe
https://github.com/aopp-pred/rpe-examples