Lawyers keep the gates of public justice institutions, particularly through their roles in formal procedures like hearings and trials. Yet, it is not clear what lawyers do in such quintessentially legal settings: conclusions from past research are bedeviled by a lack of clear theory and inconsistencies in research design. Conceptualizing litigation work in terms of professional expertise, I conduct a theoretically grounded synthesis of the findings of extant studies of lawyers’ impact on civil case outcomes.
Previous work on conservative Protestant creationism fails to account for other creationists who are much less morally invested in opposition to evolution, raising the sociological question: What causes issues’ moral salience? Through ethnographic fieldwork in four creationist high schools in the New York City area (two Sunni Muslim and two conservative Protestant), I argue that evolution is more important to the Christian schools because it is dissonant with their key practices and boundaries.
ASA speaks with sociologist Doug Hartmann at the 2016 ASA Annual Meeting in August 2016 in Seattle, WA. Hartmann talks about what it means to “do sociology,” how he uses sociology in his work, highlights of his work in the field, the relevance of sociological work to society, and his advice to students interested in entering the field.
Using General Social Survey data, we examine perspectives on science and religion in the United States. Latent class analysis reveals three groups based on knowledge and attitudes about science, religiosity, and preferences for certain religious interpretations of the world. The traditional perspective (43 percent) is marked by a preference for religion compared to science; the modern perspective (36 percent) holds the opposite view. A third perspective, which we call post-secular (21 percent), views both science and religion favorably.
The meaning of objectivity in any specific setting reflects historically situated understandings of both science and self. Recently, various scientific fields have confronted growing mistrust about the replicability of findings, and statistical techniques have been deployed to articulate a “crisis of false positives.” In response, epistemic activists have invoked a decidedly economic understanding of scientists’ selves. This has prompted a scientific social movement of proposed reforms, including regulating disclosure of “backstage” research details and enhancing incentives for replication.
“The genetics revolution may be well underway,” write Dalton Conley and Jason Fletcher in The Genome Factor, “but the social genomics revolution is just getting started” (p. 11). They are not alone in their excitement for recent developments bringing together social science and genetic research. Decades from now, folks may well look back at this time as the start of a golden age for the field.
Researchers studying income inequality, economic segregation, and other subjects must often rely on grouped data—that is, data in which thousands or millions of observations have been reduced to counts of units by specified income brackets.
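A common workaround with such grouped data is to assign each observation its bracket midpoint and compute summary statistics from the resulting counts. The sketch below illustrates the idea with entirely hypothetical brackets and counts (not from any published dataset), estimating a mean and a crude Gini coefficient; midpoint assignment is only one of several interpolation rules and understates within-bracket dispersion.

```python
# Hypothetical income brackets (dollars) and household counts.
brackets = [(0, 10_000), (10_000, 25_000), (25_000, 50_000), (50_000, 100_000)]
counts = [120, 340, 410, 130]

# Assign every household its bracket midpoint (a common, if crude, rule).
midpoints = [(lo + hi) / 2 for lo, hi in brackets]
total = sum(counts)
mean_income = sum(m * c for m, c in zip(midpoints, counts)) / total

# Gini coefficient via the mean absolute difference across all group pairs.
mad = sum(counts[i] * counts[j] * abs(midpoints[i] - midpoints[j])
          for i in range(len(counts)) for j in range(len(counts)))
gini = mad / (2 * total**2 * mean_income)

print(round(mean_income, 2), round(gini, 3))
```

With these made-up counts the midpoint rule yields a mean of $31,675 and a Gini of about 0.33; both would shift under a different within-bracket assumption, which is exactly why the choice of interpolation method matters for this literature.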
As Michael Schultz notes in his very interesting paper (this volume, pp. 52–87), standard model selection criteria, such as the Akaike information criterion (AIC; Akaike 1974), the Bayesian information criterion (BIC; Schwarz 1978), and the minimum description length principle (MDL; Rissanen 1978), are purely empirical criteria in the sense that the score a model receives does not depend on how well the model coheres with background theory. This is unsatisfying because we would like our models to be theoretically plausible, not just empirically successful.
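The purely empirical character of these criteria is easy to see in their formulas: each scores a model from its log-likelihood, parameter count, and (for BIC) sample size alone, with no term for theoretical plausibility. A minimal sketch, with illustrative numbers of my own invention:

```python
import math

def aic(loglik, k):
    # Akaike (1974): -2 log-likelihood plus a penalty of 2 per parameter.
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    # Schwarz (1978): penalty grows with the log of the sample size.
    return -2 * loglik + k * math.log(n)

# Two hypothetical models fit to the same n = 100 observations:
# model A has 3 parameters; model B fits slightly better with 7.
aic_a, bic_a = aic(-210.0, 3), bic(-210.0, 3, 100)   # 426.0, ~433.8
aic_b, bic_b = aic(-205.0, 7), bic(-205.0, 7, 100)   # 424.0, ~442.2
print(aic_a, bic_a, aic_b, bic_b)
```

Here AIC prefers the larger model B while BIC prefers the sparser model A, yet neither score asks whether the four extra parameters are theoretically motivated, which is precisely the limitation at issue.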
Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.