 take falsifying observations not as evidence that a particular model ought to be abandoned, but just that it ought to be refined. This should be taken not as a criticism of mainstream scientific modeling, but rather as an argument that computational modeling is not (at least in this respect) as distinct from more standardly acceptable cases of scientific modeling as DMS might suggest. The legitimacy of CGCMs, from this perspective, stands or falls with the legitimacy of models in the rest of science. Sociological worries about theory-dependence in model design, while not trivial, are at least well-explored in the philosophy of science. There's no sense in holding computational models to a higher standard than other scientific models. Alan Turing's seminal 1950 paper on artificial intelligence made a similar observation when considering popular objections to the notion of thinking machines: it is unreasonable to hold a novel proposal to higher standards than already accepted proposals are held to.

We might do better, then, to focus our attention on the respects in which computational models differ from more standard models. Simons and Boschetti (2012) point out that computational models are unusual (in part) in virtue of being irreversible: “Computational models can generally arrive at the same state via many possible sequences of previous states.” Just by knowing the output of a particular computational model, in other words, we can’t say for sure what the initial conditions of the model were. This is partially a feature of the predictive horizon discussed in Chapter Five: if model outputs are interpreted in ensemble (and thus seen as “predicting” a range of possible futures), then it’s necessarily true that they’ll be irreversible--at least in an epistemic sense. That’s true in just the same sense that thermodynamic models provide predictions that are “irreversible” to the precise microconditions
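The point Simons and Boschetti make can be illustrated with a toy example (my own, not theirs): any model whose update rule is many-to-one will have trajectories from distinct initial states that merge, so the initial condition cannot be recovered from the output. The sketch below uses integer halving as a deliberately simple stand-in for such a rule.

```python
# A toy irreversible model (illustrative only): the update rule is
# many-to-one, so distinct initial states converge on the same output.

def step(state: int) -> int:
    # Integer halving: both 2k and 2k+1 map to k, so the rule is
    # non-injective -- the formal source of irreversibility.
    return state // 2

def run(initial: int, n_steps: int) -> int:
    state = initial
    for _ in range(n_steps):
        state = step(state)
    return state

# Eight distinct initial conditions all reach the same state after
# three steps; from the final state alone, none can be singled out.
finals = {run(x, 3) for x in range(8, 16)}
print(finals)  # a single final state, reached from eight starting points
```

Knowing only that the model ended in that final state, an observer can do no better than name the whole set of initial conditions compatible with it, which is just the epistemic sense of irreversibility at issue.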