Lawyers keep the gates of public justice institutions, particularly through their roles in formal procedures like hearings and trials. Yet, it is not clear what lawyers do in such quintessentially legal settings: conclusions from past research are bedeviled by a lack of clear theory and inconsistencies in research design. Conceptualizing litigation work in terms of professional expertise, I conduct a theoretically grounded synthesis of the findings of extant studies of lawyers’ impact on civil case outcomes.
ASA speaks with sociologist Doug Hartmann at the 2016 ASA Annual Meeting, held in August 2016 in Seattle, WA. Hartmann talks about what it means to “do sociology,” how he uses sociology in his work, highlights of his work in the field, the relevance of sociological work to society, and his advice to students interested in entering the field.
The meaning of objectivity in any specific setting reflects historically situated understandings of both science and self. Recently, various scientific fields have confronted growing mistrust of the replicability of findings, and statistical techniques have been deployed to articulate a “crisis of false positives.” In response, epistemic activists have invoked a decidedly economic understanding of scientists’ selves. This has prompted a scientific social movement of proposed reforms, including regulating disclosure of “backstage” research details and enhancing incentives for replication.
Nuance is not a virtue of good sociological theory. Although often demanded and superficially attractive, nuance inhibits the abstraction on which good theory depends. I describe three “nuance traps” common in sociology and show why they should be avoided on grounds of principle, aesthetics, and strategy. The argument is made without prejudice to the substantive heterogeneity of the discipline.
The present essay will take readers through the bookshelf of this sociologist of diagnosis. It will survey the wide-ranging topics that I consider relevant to the sociologist who treats diagnosis as a social object and as a point of convergence where doctor and layperson encounter one another, where authority is exercised, health care is organized, political priorities are established, and conflict is enacted.
Researchers studying income inequality, economic segregation, and other subjects must often rely on grouped data—that is, data in which thousands or millions of observations have been reduced to counts of units by specified income brackets.
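As a minimal, hypothetical sketch of what such grouped data look like and one simple way to work with them, the example below approximates mean income from bracket counts using bracket midpoints. The brackets, counts, and the midpoint approximation itself are illustrative assumptions, not a method described in the article, and real data often add complications such as an open-ended top bracket that requires a distributional assumption.

```python
# Minimal sketch: approximating mean income from grouped (binned) data.
# Bracket boundaries and counts are hypothetical illustrations.
brackets = [        # (lower bound, upper bound) in dollars
    (0, 10_000),
    (10_000, 25_000),
    (25_000, 50_000),
    (50_000, 100_000),
    (100_000, 200_000),   # real data often leave the top bracket open-ended,
]                         # which would require an extra assumption (e.g., a Pareto tail)
counts = [120, 340, 510, 430, 150]   # number of households observed in each bracket

# Midpoint approximation: treat every unit in a bracket as earning the bracket midpoint.
midpoints = [(lo + hi) / 2 for lo, hi in brackets]
total = sum(counts)
approx_mean = sum(m * c for m, c in zip(midpoints, counts)) / total
print(f"Approximate mean income: {approx_mean:,.0f}")
```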
Randomized experiments on a network often involve interference between connected units, namely, a situation in which an individual’s treatment can affect the response of another individual. Current approaches to dealing with interference, in theory and in practice, often make restrictive assumptions on its structure—for instance, assuming that interference is local—even when using otherwise nonparametric inference strategies.
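To make the notion of local interference concrete, here is a minimal sketch, not the authors' method: one common way to operationalize a local-interference assumption is to summarize each unit's exposure as the fraction of its treated network neighbors. The graph, treatment assignment, and exposure definition below are all hypothetical.

```python
# Minimal sketch: under a local-interference assumption, a unit's exposure is often
# summarized by the share of its treated neighbors. Everything here is illustrative.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(n=20, p=0.2, seed=0)        # hypothetical social network
treatment = {v: random.random() < 0.5 for v in G}    # Bernoulli(0.5) randomization

def fraction_treated_neighbors(G, treatment, v):
    """Exposure of unit v: fraction of its neighbors assigned to treatment."""
    nbrs = list(G.neighbors(v))
    if not nbrs:
        return 0.0
    return sum(treatment[u] for u in nbrs) / len(nbrs)

exposure = {v: fraction_treated_neighbors(G, treatment, v) for v in G}
print(exposure)
```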
As Michael Schultz notes in his very interesting paper (this volume, pp. 52–87), standard model selection criteria, such as the Akaike information criterion (AIC; Akaike 1974), the Bayesian information criterion (BIC; Schwarz 1978), and the minimum description length principle (MDL; Rissanen 1978), are purely empirical criteria in the sense that the score a model receives does not depend on how well the model coheres with background theory. This is unsatisfying because we would like our models to be theoretically plausible, not just empirically successful.
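For readers who want the formulas behind these scores, a brief sketch follows: AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L, where L is the maximized likelihood, k the number of estimated parameters, and n the sample size. The code computes both for an ordinary least squares fit on simulated data; the data-generating process is purely illustrative, and conventions differ on whether k also counts the error variance.

```python
# Minimal sketch: computing AIC and BIC for a fitted model from its log-likelihood.
# AIC = 2k - 2*lnL ; BIC = k*ln(n) - 2*lnL.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)       # illustrative data-generating process

fit = sm.OLS(y, sm.add_constant(x)).fit()

k = int(fit.df_model) + 1                    # intercept + slope (statsmodels' convention;
                                             # some texts also count the error variance)
lnL = fit.llf                                # maximized log-likelihood
aic = 2 * k - 2 * lnL
bic = k * np.log(n) - 2 * lnL
print(f"AIC = {aic:.1f} (statsmodels reports {fit.aic:.1f})")
print(f"BIC = {bic:.1f} (statsmodels reports {fit.bic:.1f})")
```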
In his article, Michael Schultz examines the practice of model selection in sociological research. Model selection is often carried out by means of classical hypothesis tests. A fundamental problem with this practice is that these tests do not give a measure of evidence. For example, if we test the null hypothesis β = 0 against the alternative hypothesis β ≠ 0, what is the largest p value that can be regarded as strong evidence against the null hypothesis? What is the largest p value that can be regarded as any kind of evidence against the null hypothesis?
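A minimal, hypothetical illustration of why a p value by itself is not a calibrated measure of evidence: the sketch below tests β = 0 for the same small true effect at two sample sizes, and the identical procedure returns very different p values depending only on n. The effect size, noise level, and sample sizes are arbitrary choices made for illustration, not figures from the article.

```python
# Minimal sketch: the same true effect tested at two sample sizes yields very different
# p values, one reason a p value alone does not measure strength of evidence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
beta = 0.1                                   # small, fixed true effect (arbitrary)

for n in (100, 10_000):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(f"n = {n:>6}: p value for testing beta = 0 is {fit.pvalues[1]:.4f}")
```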
Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.