Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference. To address this problem, the author proposes a new paradigm for inference-oriented model selection that evaluates models on the basis of a trade-off between model fit and model plausibility. By comparing the fits of sequentially nested models, it is possible to derive an empirical lower bound on the subjective plausibility of assumptions. To demonstrate the effectiveness of this approach, the author applies the method to models of the relationship between cultural tastes and network composition.
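The core move described above, comparing the fits of nested models to gauge how much evidence a restricting assumption must overcome, can be sketched informally. The sketch below is an illustrative assumption on our part, not the author's derivation: it uses simulated data, ordinary least squares with Gaussian errors, and a BIC comparison between a full model and a nested model that assumes one predictor away.

```python
import numpy as np

# Hypothetical data: outcome depends on two predictors; the restricted
# model embodies the assumption that x2 is irrelevant.
rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def gaussian_bic(X, y):
    """BIC of an OLS model with Gaussian errors (smaller is better)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = float(resid @ resid)
    n_obs, k = X.shape
    loglik = -0.5 * n_obs * (np.log(2 * np.pi * rss / n_obs) + 1)
    return k * np.log(n_obs) - 2 * loglik

ones = np.ones(n)
bic_full = gaussian_bic(np.column_stack([ones, x1, x2]), y)    # no restriction
bic_restricted = gaussian_bic(np.column_stack([ones, x1]), y)  # assumes x2 away

# The fit gap quantifies what the restriction sacrifices: loosely, a lower
# bound on how plausible the assumption must be before the restricted
# (more theory-laden) model is preferable to the better-fitting one.
fit_gap = bic_restricted - bic_full
```

A positive `fit_gap` means the assumption costs fit, so it must carry enough prior plausibility to compensate; a near-zero or negative gap means the data impose little penalty on holding it.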