If an experiment is not reproducible, it is not science.
If a hypothesis is not falsifiable, it is not science.
Finally, our two guiding principles regarding models:
"The map is not the territory"
-Alfred Korzybski
"A Non-Aristotelian System and its Necessity for Rigour in Mathematics and Physics"
presented before the American Mathematical Society December 28, 1931
"All models are wrong, but some are useful"
-George E.P. Box
Section heading, page 2 of Box's paper, "Robustness in the Strategy of Scientific Model Building"
(May 1979)
From Pannell Discussions:
HT: The Big Picture

Mick Keogh, from the Australian Farm Institute, recently argued that “much greater caution is required when considering policy responses for issues where the main science available is based on modelled outcomes”. I broadly agree with that conclusion, although there were some points in the article that didn’t gel with me.
In a recent feature article in Farm Institute Insights, the Institute’s Executive Director Mick Keogh identified increasing reliance on modelling as a problem in policy, particularly policy related to the environment and natural resources. He observed that “there is an increasing reliance on modelling, rather than actual science”. He discussed modelling by the National Land and Water Resources Audit (NLWRA) to predict salinity risk, modelling to establish benchmark river condition for the Murray-Darling Rivers, and modelling to predict future climate. He expressed concern that the modelling was based on inadequate data (salinity, river condition) or used poor methods (salinity) and that the modelling results are “unverifiable” and “not able to be scrutinised” (all three). He claimed that the reliance on modelling rather than “actual science” was contributing to poor policy outcomes.
While I’m fully on Mick’s side regarding the need for policy to be based on the best evidence, I do have some problems with some of his arguments in this article.
Firstly, there is the premise that “science and modelling are not the same”. The reality is nowhere near as black-and-white as that. Modelling of various types is ubiquitous throughout science, including in what might be considered the hard sciences. Every time a scientist conducts a statistical test using hard data, she or he is applying a numerical model. In a sense, all scientific conclusions are based on models.
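[An aside from us, not from Pannell's post: the point that an ordinary statistical test is itself a model can be made concrete with a toy sketch. The two-sample t-test below rests on a data-generating model (normally distributed errors and, in one variant, equal variances); relax an assumption and the answer changes. All data and numbers are invented for illustration.]

```python
# Minimal sketch (our illustration, not Pannell's): a routine statistical
# test embeds a model, and the p-value is only as good as that model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical field-trial data
treated = rng.normal(loc=11.0, scale=2.0, size=30)

# Classic two-sample t-test: assumes normal errors and equal variances.
t_stat, p_value = stats.ttest_ind(control, treated, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Weaken one model assumption (Welch's t-test drops equal variances)
# and the same data can return a different answer.
t_w, p_w = stats.ttest_ind(control, treated, equal_var=False)
print(f"Welch t = {t_w:.2f}, p = {p_w:.3f}")
```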
I think what Mick really has in mind is a particular type of model: a synthesis or integrated model that pulls together data and relationships from a variety of sources (often of varying levels of quality) to make inferences or draw conclusions that cannot be tested by observation, usually because the issue is too complex. This is the type of model I’m often involved in building.
I agree that these models do require particular care, both by the modeller and by decision makers who wish to use results. In my view, integrated modellers are often too confident about the results of a model that they have worked hard to construct. If such models are actually to be used for decision making, it is crucial for integrated modellers to test the robustness of their conclusions (e.g. Pannell, 1997), and to communicate clearly the realistic level of confidence that decision makers can have in the results. In my view, modellers often don’t do this well enough.
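[Another aside from us rather than Pannell: Pannell (1997) is about sensitivity analysis, and the hypothetical sketch below shows one simple form of "testing the robustness of conclusions". It scans plausible low/central/high values of a toy integrated model's uncertain inputs and checks whether the headline conclusion survives. Every function name and number here is invented.]

```python
# Hypothetical robustness check: scan all combinations of low/central/high
# values for each uncertain input of a toy "integrated model" and count how
# often the conclusion (option A beats option B) is reversed.
import itertools

def net_benefit(adoption_rate, yield_gain, price, cost):
    """Toy integrated model: annual net benefit ($/ha) of a land-management option."""
    return adoption_rate * yield_gain * price - cost

# Plausible low / central / high values for each uncertain input (all invented).
ranges = {
    "adoption_rate": [0.2, 0.5, 0.8],
    "yield_gain":    [0.5, 1.0, 1.5],   # t/ha
    "price":         [150, 250, 350],   # $/t
    "cost":          [40, 60, 80],      # $/ha
}

baseline_benefit = 45.0  # net benefit of the competing option B, also invented

flips = []
for combo in itertools.product(*ranges.values()):
    params = dict(zip(ranges, combo))
    if net_benefit(**params) < baseline_benefit:
        flips.append(params)

print(f"{len(flips)} of {3 ** len(ranges)} tested combinations reverse the conclusion")
if flips:
    print("example of a combination that flips it:", flips[0])
```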
But even in cases where they do, policy makers and policy advisors often tend to look for the simple message in model results, and to treat that message as if it was pretty much a fact. The salinity work that Mick criticises is a great example of this. While I agree with Mick that aspects of that work were seriously flawed, the way it was interpreted by policy makers was not consistent with caveats provided by the modellers. In particular, the report was widely interpreted as predicting that there would be 17 million hectares of salinity, whereas it actually said that there would be 17 million hectares with high “risk” or “hazard” of going saline. Of that area, only a proportion was ever expected to actually go saline. That proportion was never stated, but the researchers knew that the final result would be much less than 17 million. They probably should have been clearer and more explicit about that, but it wasn’t a secret.
The next concern expressed in the article was that models “are often not able to be scrutinised to the same extent as ‘normal’ science”. It’s not clear to me exactly what this means. Perhaps it means that the models are not available for others to scrutinise. To the extent that that’s true (and it is true sometimes), I agree that this is a serious problem. I’ve built and used enough models to know how easy it is for them to contain serious undetected bugs. For that reason, I think that when a model is used (or is expected to be used) in policy, the model should be freely available for others to check. It should be a requirement that all model code and data used in policy is made publicly available. If the modeller is not prepared to make it public, the results should not be used. Without this, we can’t have confidence that the information being used to drive decisions is reliable.
Once the model is made available, if the issue is important enough, somebody will check it, and any flaws can be discovered. Or if the time frame for decision making is too tight for that, government may need to commission its own checking process....MORE
Some of our prior posts on models:
The Financial Modelers' Manifesto
After the Crash: How Software Models Doomed the Markets
Airspace Closure Was Exacerbated by Too Much Modeling, Too Little Research
UPDATED: The Bogus Hurricane Models that Cost Florida Billions
Re-thinking Risk Management: Why the Mindset Matters More Than the Model
Insurance: Is the industry too reliant on models? (BRK.B)
Insurance: "CEO FORUM: Gen Re's Tad Montross on model dependency" (BRK.A)
Lombard Street On Computer Models Versus Looking At The Facts
The computer model that once explained the British economy (and the new one that explains the world)
How Models Caused the Credit Crisis
Quants Lose that Old Black (Box) Magic
Finance: "Blame the models"
Climate Models Overheat Antarctica, New Study Finds
Climate modeling to require new breed of supercomputer
Computer Models: Climate scientists call for their own 'Manhattan Project'
Computer Models: " Misuse of Models" and "No model for policymaking"
Climate prediction: No model for success
Climate Models and Modeling
Based on Our Proprietary "What's on T.V." Timing Model...
How many Nobel Laureates Does it Take to Make Change...And: End of the Universe Puts
The New Math (Quant Funds)
Modeling*: The Map is Not the Territory
Inside Wall Street's Black Hole
Computer Models: Models’ Projections for Flu Miss Mark by Wide Margin
Market Indicators: Which Way Are the Model's Nipples Pointing?
Obama: Swedish Model Would Be Impossible Here