Lawyers keep the gates of public justice institutions, particularly through their roles in formal procedures like hearings and trials. Yet, it is not clear what lawyers do in such quintessentially legal settings: conclusions from past research are bedeviled by a lack of clear theory and inconsistencies in research design. Conceptualizing litigation work in terms of professional expertise, I conduct a theoretically grounded synthesis of the findings of extant studies of lawyers’ impact on civil case outcomes.
ASA speaks with sociologist Doug Hartmann at the 2016 ASA Annual Meeting, held in August 2016 in Seattle, WA. Hartmann discusses what it means to “do sociology,” how he uses sociology in his work, highlights from his work in the field, the relevance of sociological work to society, and his advice to students interested in entering the field.
The meaning of objectivity in any specific setting reflects historically situated understandings of both science and self. Recently, various scientific fields have confronted growing doubts about the replicability of findings, and statistical techniques have been deployed to articulate a “crisis of false positives.” In response, epistemic activists have invoked a decidedly economic understanding of scientists’ selves. This has prompted a scientific social movement of proposed reforms, including regulating disclosure of “backstage” research details and enhancing incentives for replication.
Researchers studying income inequality, economic segregation, and other subjects must often rely on grouped data—that is, data in which thousands or millions of observations have been reduced to counts of units by specified income brackets.
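As a toy illustration of what working with such grouped data involves, the sketch below estimates a mean income from bracket counts using the common midpoint convention. The brackets, counts, and the top-bracket multiplier are hypothetical assumptions for illustration, not a method taken from the article.

```python
# Estimating a mean income from grouped (bracketed) data.
# Brackets and counts are hypothetical; real data would come from
# a census or survey tabulation.
brackets = [(0, 25_000), (25_000, 50_000), (50_000, 100_000),
            (100_000, 200_000), (200_000, None)]  # None = open-ended top bracket
counts = [120, 340, 410, 90, 40]

def bracket_midpoint(lo, hi, top_multiplier=1.5):
    # For the open-ended top bracket, one common convention is
    # lower bound times a fixed multiplier (an assumption here).
    return lo * top_multiplier if hi is None else (lo + hi) / 2

total = sum(counts)
mean = sum(c * bracket_midpoint(lo, hi)
           for (lo, hi), c in zip(brackets, counts)) / total
print(f"estimated mean income: {mean:,.0f}")
```

Richer quantities (Gini coefficients, percentiles) require stronger within-bracket distributional assumptions, which is precisely why methods for grouped data are an active topic.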
Autoregressive latent trajectory (ALT) models combine features of latent growth curve models and autoregressive models into a single modeling framework. The development of ALT models has focused primarily on models with linear growth components, but some social processes follow nonlinear trajectories. Although it is straightforward to extend ALT models to allow for some forms of nonlinear trajectories, the identification status of such models, approaches to comparing them with alternative models, and the interpretation of parameters have not been systematically assessed.
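To make the model structure concrete, here is a minimal simulation of a linear ALT process, combining a random-coefficient growth component with a first-order autoregressive term. The notation and parameter values are my illustrative assumptions, not specifications from the article.

```python
import numpy as np

# Simulate a simple linear ALT process:
#   y[i, t] = alpha_i + beta_i * t + rho * y[i, t-1] + eps[i, t]
rng = np.random.default_rng(0)
n, T, rho = 500, 6, 0.3
alpha = rng.normal(2.0, 1.0, n)        # random intercepts
beta = rng.normal(0.5, 0.2, n)         # random linear growth slopes
y = np.empty((n, T))
y[:, 0] = alpha + rng.normal(0, 1, n)  # initial status (often treated as predetermined)
for t in range(1, T):
    y[:, t] = alpha + beta * t + rho * y[:, t - 1] + rng.normal(0, 1, n)

# Even with a linear growth component, the autoregressive term makes the
# implied trajectory means deviate from a straight line in t.
print(y.mean(axis=0).round(2))
```

Allowing the growth component itself to be nonlinear (e.g., quadratic or freed loadings) is the extension whose identification and interpretation the article examines.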
As Michael Schultz notes in his very interesting paper (this volume, pp. 52–87), standard model selection criteria, such as the Akaike information criterion (AIC; Akaike 1974), the Bayesian information criterion (BIC; Schwarz 1978), and the minimum description length principle (MDL; Rissanen 1978), are purely empirical criteria in the sense that the score a model receives does not depend on how well the model coheres with background theory. This is unsatisfying because we would like our models to be theoretically plausible, not just empirically successful.
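The “purely empirical” character of these criteria is visible in their formulas, which score a model using only its likelihood and complexity. A minimal sketch, with a toy Gaussian comparison whose data and candidate models are my own illustrative assumptions:

```python
import numpy as np

def aic(loglik, k):
    # Akaike (1974): AIC = 2k - 2 ln L
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Schwarz (1978): BIC = k ln(n) - 2 ln L
    return k * np.log(n) - 2 * loglik

# Toy comparison: two Gaussian models for the same data, with and
# without an estimated mean. Neither score consults background theory.
rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, 200)
n = x.size
for label, mu, k in [("mean fixed at 0", 0.0, 1), ("mean estimated", x.mean(), 2)]:
    sigma2 = np.mean((x - mu) ** 2)  # MLE of the variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    print(label, f"AIC={aic(loglik, k):.1f}", f"BIC={bic(loglik, k, n):.1f}")
```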
Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.
The problem of model uncertainty is a fundamental applied challenge in quantitative sociology. The authors’ language of false positives is reminiscent of Bonferroni adjustments and the frequentist analysis of multiple independent comparisons, but the distinct problem of model uncertainty has been fully formalized from a Bayesian perspective.
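One standard Bayesian device for formalizing model uncertainty, sketched below, converts BIC scores into approximate posterior model probabilities under equal prior model probabilities. This is a common textbook approximation, not necessarily the authors’ exact formulation, and the BIC values shown are hypothetical.

```python
import numpy as np

# Approximate posterior model probabilities from BIC:
#   P(M_m | data) ~ exp(-BIC_m / 2) / sum_j exp(-BIC_j / 2)
bics = np.array([1012.4, 1009.8, 1015.1])   # hypothetical BICs for three models
rel = np.exp(-0.5 * (bics - bics.min()))    # shift by the minimum for numerical stability
post = rel / rel.sum()
for m, p in enumerate(post, 1):
    print(f"approx. P(model {m} | data) = {p:.3f}")
```

Unlike a Bonferroni adjustment, which corrects p-values across independent tests, these weights spread inferential credit across the candidate models themselves.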
False positive findings are a growing problem in many research literatures. We argue that excessive false positives often stem from model uncertainty. There are many plausible ways of specifying a regression model, but researchers typically report only a few preferred estimates. This raises the concern that such research reveals only a small fraction of the possible results and may easily lead to nonrobust, false positive conclusions. It is often unclear how much the results are driven by model specification and how much the results would change if a different plausible model were used.
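A specification-curve-style check makes this concern operational: fit every regression that includes the focal variable plus any subset of candidate controls, and inspect how the focal estimate varies. The sketch below illustrates the general idea on simulated data; the variable names, effect sizes, and procedure are my assumptions, not the authors’ method.

```python
import itertools
import numpy as np
import statsmodels.api as sm

# Fit all specifications containing the focal variable x plus any
# subset of candidate controls, and collect the estimate for x.
rng = np.random.default_rng(2)
n = 300
controls = {name: rng.normal(size=n) for name in ["z1", "z2", "z3"]}
x = rng.normal(size=n) + 0.5 * controls["z1"]
y = 0.2 * x + 0.6 * controls["z1"] + rng.normal(size=n)

names = list(controls)
for r in range(len(names) + 1):
    for subset in itertools.combinations(names, r):
        X = sm.add_constant(np.column_stack([x] + [controls[s] for s in subset]))
        b_x = sm.OLS(y, X).fit().params[1]  # coefficient on x
        print(f"controls={subset or '()'}: b_x={b_x:.3f}")
```

Reporting the full distribution of estimates, rather than one preferred specification, is what guards against nonrobust conclusions.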
Logit and probit models are widely used in empirical sociological research. However, the common practice of comparing the coefficients of a given variable across differently specified models fitted to the same sample does not warrant the same interpretation in logit and probit models as in linear regression. Unlike in linear models, the change in the coefficient of the variable of interest cannot be straightforwardly attributed to the inclusion of confounding variables, because the variance of the underlying latent variable is not identified and will differ between models.
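A short simulation makes the rescaling problem visible. Here z is generated independently of x, so it is not a confounder, yet adding it to a logit changes the coefficient on x because the latent scale is renormalized. The parameter values and setup are my illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# z is independent of x by construction, so any change in the
# coefficient on x reflects rescaling of the latent variable,
# not the removal of confounding.
rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=n)
z = rng.normal(size=n)
ystar = 1.0 * x + 1.0 * z + rng.logistic(size=n)  # latent variable
y = (ystar > 0).astype(int)

b_short = sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]
b_long = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0).params[1]
print(f"without z: {b_short:.2f}, with z: {b_long:.2f}")  # attenuated vs. ~1.0
```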