Model-independent comparison of simulation output
Computational models of complex systems are usually elaborate and sensitive to implementation details, characteristics which often complicate their verification and validation. Model replication is a possible solution to this issue. It avoids biases associated with the language or toolkit used to develop the original model, not only promoting its verification and validation, but also fostering the credibility of the underlying conceptual model. However, different model implementations must be compared to assess their equivalence. The problem is: given two or more implementations of a stochastic model, how can we show that they display similar behavior? In this paper, we present a model comparison technique which uses principal component analysis to convert simulation output into a set of linearly uncorrelated statistical measures, analyzable in a consistent, model-independent fashion. It is appropriate for ascertaining the distributional equivalence of a model replication with its original implementation. Besides model independence, this technique has three other desirable properties: a) it automatically selects the output features that best explain implementation differences; b) it does not depend on the distributional properties of simulation output; and c) it can be used directly on simulation outputs, simplifying the modelers' work. The proposed technique is shown to produce results similar to the manual or empirical selection of output features when applied to a well-studied reference model.
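The following is a minimal sketch of the general idea described above, not the paper's exact procedure. It assumes that each implementation produces a matrix of replicate runs over the same output features, pools both matrices, projects them onto the leading principal components, and then applies a nonparametric two-sample test per component; all function and parameter names (e.g., compare_outputs, var_explained) are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def compare_outputs(runs_a, runs_b, var_explained=0.9):
    """Compare two sets of simulation outputs (each n_runs x n_features)
    by projecting onto principal components and testing each component."""
    # Pool the outputs of both implementations and standardize features.
    X = np.vstack([runs_a, runs_b])
    X = StandardScaler().fit_transform(X)

    # Fit PCA on the pooled data and keep the components that together
    # explain the requested share of total variance.
    pca = PCA().fit(X)
    n_pc = np.searchsorted(np.cumsum(pca.explained_variance_ratio_),
                           var_explained) + 1
    scores = pca.transform(X)[:, :n_pc]
    scores_a, scores_b = scores[:len(runs_a)], scores[len(runs_a):]

    # Nonparametric two-sample test per component: no assumptions about
    # the distributional properties of the simulation output.
    p_values = [stats.mannwhitneyu(scores_a[:, i], scores_b[:, i]).pvalue
                for i in range(n_pc)]
    return p_values  # small p-values flag components where the implementations differ
```

In this sketch, components with very small p-values would indicate output dimensions along which the replication and the original implementation are not distributionally equivalent; the choice of test and variance threshold are assumptions, not prescriptions from the paper.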