Making complex systems computable


The development of climate models and of the numerical methods themselves is of crucial importance. For example, Hasselmann developed the fingerprint method to wed climate models and observations; the idea is to assess the statistical significance of observed trends by incorporating them into a model. This is a specific approach to what we nowadays call data assimilation, which is, in all of its facets, the key to using observations. The challenge here is that we only have observations up to the present; as Niels Bohr once said, “prediction is very difficult, especially if it’s about the future!” Then there is the problem with the models themselves: there is a lot of inter-model variability, the models are hugely expensive to run, and it is not obvious which models provide the best solution. There are also processes that remain unresolved but are essential for prediction. For instance, we still cannot resolve turbulent motions: we don’t know which clouds are going to form and where, and we cannot predict the mixing of water masses in the ocean that changes properties such as nutrients, salinity and temperature.
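To make the fingerprint idea concrete, here is a minimal, self-contained sketch in Python: project an observed field onto a model-derived pattern of forced change and ask whether the resulting amplitude could plausibly arise from internal variability alone. Everything here is synthetic and invented for illustration (the pattern, noise level and grid size are assumptions); it is a toy stand-in for the reasoning described above, not Hasselmann’s actual formulation.

```python
# Toy sketch of the fingerprint idea: project an observed field onto a
# model-derived pattern of forced change and compare the resulting amplitude
# with what internal variability alone produces. All data are synthetic.

import numpy as np

rng = np.random.default_rng(0)

n_grid = 200                                   # hypothetical spatial grid points
fingerprint = rng.normal(size=n_grid)
fingerprint /= np.linalg.norm(fingerprint)     # model-derived forced-response pattern

# Synthetic "observations": forced signal of amplitude 0.5 plus internal variability
obs = 0.5 * fingerprint + rng.normal(scale=0.3, size=n_grid)

# Detection statistic: projection of the observations onto the fingerprint
amplitude_obs = fingerprint @ obs

# Null distribution: amplitudes from noise-only realizations, standing in
# for an unforced control-run ensemble
null = np.array([fingerprint @ rng.normal(scale=0.3, size=n_grid)
                 for _ in range(5000)])

p_value = np.mean(np.abs(null) >= np.abs(amplitude_obs))
print(f"projected amplitude = {amplitude_obs:.3f}, "
      f"p-value vs. internal variability = {p_value:.4f}")
```

In practice the projection is weighted by the covariance of internal variability (optimal fingerprinting), and the null distribution comes from model control runs rather than white noise; the sketch keeps only the bare logic.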

So, where should the computational emphasis be placed? The key is data. My contention is that we are not (yet!) using data to its fullest potential. The satellite era began around 1979 and has offered very high-resolution, daily observations of all sorts of things. If we go back in time and look at longer timescales, we have a lot of proxy data of various types with reasonably high resolution in space and time; we also have paleoclimate data on very long timescales, which is coarsely resolved in time and space. The question, then, is how to use this confluence of sparse and finely resolved data in a rational manner that is linked to the underlying physics. For example, it is common practice to look at paleoclimate data by just comparing the timing of various wiggles. However, there are very powerful mathematical methods from non-equilibrium statistical physics that we and others use, which I think are really valuable but not yet common in the field. Going forward, we need to inform models with up-to-date data in a robust and self-consistent way, which I think will involve science, software and hardware: an interdisciplinary collaboration. This sounds like a cliché, but it really doesn’t work if people don’t speak each other’s languages.
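As a cartoon of one way to combine data streams of very different resolution and uncertainty, the sketch below merges a dense, low-noise “satellite-like” series with a sparse, noisy “proxy-like” series by inverse-variance weighting. The series, noise levels and sample sizes are invented; the statistical-physics-based methods alluded to above are considerably more sophisticated than this simple weighting.

```python
# Minimal sketch: combine two estimates of the same quantity, one from a
# dense low-noise record and one from a sparse high-noise record, by
# weighting each with the inverse of its estimated variance. Synthetic data.

import numpy as np

rng = np.random.default_rng(1)
true_value = 1.0                                            # hypothetical mean anomaly

satellite = true_value + rng.normal(scale=0.1, size=500)    # dense, low-noise record
proxy = true_value + rng.normal(scale=0.8, size=12)         # sparse, high-noise record

def mean_and_variance(x):
    """Sample mean and the estimated variance of that mean."""
    return x.mean(), x.var(ddof=1) / x.size

estimates = [mean_and_variance(satellite), mean_and_variance(proxy)]

# Inverse-variance (precision) weighting of the two estimates
weights = np.array([1.0 / v for _, v in estimates])
combined = sum(w * m for w, (m, _) in zip(weights, estimates)) / weights.sum()
combined_var = 1.0 / weights.sum()

print(f"satellite: {estimates[0][0]:.3f}, proxy: {estimates[1][0]:.3f}")
print(f"combined:  {combined:.3f} ± {np.sqrt(combined_var):.3f}")
```

The design point is simply that the sparse record is not discarded: it contributes in proportion to its precision, which is the same principle that more elaborate data-assimilation and Bayesian schemes formalize.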

Recently, we have had more extreme weather events, such as hurricanes. For example, work by Kerry Emanuel shows evidence of an increase in hurricane intensity, but not frequency, that is attributable to greenhouse gases. However, not only do climate models struggle to capture hurricanes, but practically speaking, the havoc they wreak on humanity depends on both their intensity and frequency. Clearly, there is a great need to combine data and models to optimize the utility of both.
