Building a Better Model
It’s abundantly clear that Arctic sea ice is melting, and doing so at an increasing rate. But the data that put this change in temporal context stretch back only 40 years. Amid harsh Arctic conditions, rugged terrain and the polar night, observations simply weren’t feasible before satellites became an option.
Even with data now streaming in from above, conventional models remain climate scientists’ most powerful tools for understanding and predicting the changing Arctic. But current sea ice models struggle to account for the variability and decline we’ve seen in recent years.
That’s why Qinghua Ding, of UC Santa Barbara’s Earth Research Institute, plans to evaluate climate models against observations to determine which work well, which fare poorly and, most importantly, why. His proposal has received a $200,000 grant from the National Oceanic and Atmospheric Administration (NOAA) Climate Program Office as part of the Modeling, Analysis, Predictions, and Projections (MAPP) Program.
“We need a quantitative way to efficiently tell people ‘your model has limitations, your model performs well’ and give them a rank,” said Ding, an assistant professor in the geography department. Beyond the ranking itself, he wants to understand why some models succeed while others fail. The goal is a rubric that not only ranks climate models but also explains the differences in their performance, an insight that will help improve the next generation of climate models.
Computer modeling presents researchers with a tradeoff: They want to capture as many factors influencing their systems as possible, but the more factors they include, the more complex and inscrutable the models become.
What’s more, different models serve different purposes. A meteorologist curious about storms and cloud formation will likely use a different set of environmental models than will an oceanographer who studies sea ice and ocean currents. It’s not that one set is better than the other; they’re simply designed with different aims in mind.
Ding plans to focus on Arctic sea ice models and how they interact with other model components. He’s tracking down the factors that may account for the gaps between observations and model predictions. “People used to think Arctic warming is due to anthropogenic factors,” he said. “That’s right, but there’s another component, natural variability, that’s a really significant part.
“Without a good model we cannot resolve this,” he added.
Different models can, however, produce similar results through spurious mechanisms. There may be many ways to melt sea ice in a simulation, for instance, only a couple of which reflect real-world processes, Ding explained. It’s akin to an algebra problem that yields multiple answers, only a few of which make sense in context.
As a result, it is critical to understand why each model works. “That’s what we’re trying to emphasize: a good model should not only get the right results but also achieve them in the right way,” said Rui Luo, a visiting doctoral student from Fudan University in Shanghai and one of two students who will work with Ding.
“We only have 40 years of sea ice observations,” Luo added, “so if we want to predict what we’re going to see in the future, we have to rely on models. But before we use them, we need to make sure these models can replicate reality.”