American Sociological Association

Search results

  1. Neoliberalism

    Johanna Bockman unpacks a hefty term, neoliberalism. She traces its roots and its uses, decoding it as a description of a “bootstraps” ideology that trumpets individualism and opportunity but enforces conformity and ignores structural constraints.

  2. Why Liberals and Atheists Are More Intelligent

    The origin of values and preferences is an unresolved theoretical question in behavioral and social sciences.

  3. Estimating the Relationship between Time-varying Covariates and Trajectories: The Sequence Analysis Multistate Model Procedure

    The relationship between processes and time-varying covariates is of central theoretical interest in addressing many social science research questions. Event history analysis (EHA) has been the method of choice for studying these kinds of relationships when the outcomes can be meaningfully specified as simple instantaneous events or transitions.
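
    A minimal sketch of the event-history approach mentioned above, assuming the `lifelines` library and a hypothetical long-format data set (the column names `id`, `start`, `stop`, `employed`, and `event` are invented for illustration):

    ```python
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # Long format: one row per person-interval; the time-varying covariate
    # ("employed") can change value between a person's intervals.
    df = pd.DataFrame({
        "id":       [1, 1, 2, 2, 3, 3],
        "start":    [0, 12, 0, 6, 0, 10],
        "stop":     [12, 30, 6, 24, 10, 18],
        "employed": [1, 0, 0, 1, 1, 0],
        "event":    [0, 1, 0, 1, 0, 1],  # 1 = the transition of interest occurred
    })

    # Cox regression with time-varying covariates: the standard EHA tool when
    # the outcome is a simple instantaneous event or transition.
    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
    ctv.print_summary()
    ```
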
  4. Limitations of Design-based Causal Inference and A/B Testing under Arbitrary and Network Interference

    Randomized experiments on a network often involve interference between connected units, namely, a situation in which one individual’s treatment can affect another individual’s response. Current approaches to dealing with interference, in theory and in practice, often make restrictive assumptions about its structure—for instance, assuming that interference is local—even when using otherwise nonparametric inference strategies.
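
    A toy illustration of the issue, assuming only NumPy, with a cluster structure and effect sizes invented for the example: when a unit's response also depends on how many of its cluster-mates are treated, the naive difference in means recovers different quantities under different randomization designs.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_clusters, cluster_size = 200, 10
    n = n_clusters * cluster_size
    cluster = np.repeat(np.arange(n_clusters), cluster_size)

    direct, spill = 1.0, 2.0  # hypothetical direct and spillover effects

    def naive_estimate(z):
        # Exposure = fraction of *other* cluster members who are treated.
        treated = np.bincount(cluster, weights=z)
        exposure = (treated[cluster] - z) / (cluster_size - 1)
        y = direct * z + spill * exposure + rng.normal(size=n)
        return y[z == 1].mean() - y[z == 0].mean()  # naive A/B difference in means

    z_unit = rng.binomial(1, 0.5, size=n)                    # unit-level assignment
    z_clus = rng.binomial(1, 0.5, size=n_clusters)[cluster]  # cluster-level assignment

    print(f"unit randomization:    {naive_estimate(z_unit):.2f}  (close to direct = 1.0)")
    print(f"cluster randomization: {naive_estimate(z_clus):.2f}  (close to direct + spill = 3.0)")
    ```

    The same data-generating process yields different naive estimates under the two designs, which is the sense in which A/B-style inference is not design-agnostic once interference is present.
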
  5. Comment: The Inferential Information Criterion from a Bayesian Point of View

    As Michael Schultz notes in his very interesting paper (this volume, pp. 52–87), standard model selection criteria, such as the Akaike information criterion (AIC; Akaike 1974), the Bayesian information criterion (BIC; Schwarz 1978), and the minimum description length principle (MDL; Rissanen 1978), are purely empirical criteria in the sense that the score a model receives does not depend on how well the model coheres with background theory. This is unsatisfying because we would like our models to be theoretically plausible, not just empirically successful.
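
    For reference, the first two criteria are pure functions of fit and complexity. With k estimated parameters, n observations, and maximized likelihood \hat{L}, the standard definitions are:

    ```latex
    \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}
    ```

    Neither expression contains a term for coherence with background theory, which is exactly the gap the comment addresses.
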
  6. Comment: Evidence, Plausibility, and Model Selection

    In his article, Michael Schultz examines the practice of model selection in sociological research. Model selection is often carried out by means of classical hypothesis tests. A fundamental problem with this practice is that these tests do not give a measure of evidence. For example, if we test the null hypothesis β = 0 against the alternative hypothesis β ≠ 0, what is the largest p value that can be regarded as strong evidence against the null hypothesis? What is the largest p value that can be regarded as any kind of evidence against the null hypothesis?
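
    One standard quantitative answer is the calibration of Sellke, Bayarri, and Berger (2001), which bounds how much evidence against a point null a given p value can ever supply. A minimal sketch, assuming only the Python standard library:

    ```python
    import math

    def bf01_lower_bound(p: float) -> float:
        """Sellke-Bayarri-Berger lower bound on the Bayes factor for H0.

        For 0 < p < 1/e, BF01 >= -e * p * ln(p); the reciprocal of this
        bound caps the odds against the null that p can support.
        """
        if not 0.0 < p < 1.0 / math.e:
            raise ValueError("bound applies only for 0 < p < 1/e")
        return -math.e * p * math.log(p)

    for p in (0.05, 0.01, 0.001):
        print(f"p = {p:<5}  ->  odds against H0 at most {1 / bf01_lower_bound(p):.1f}:1")
    ```

    On this calibration, p = .05 corresponds to at most roughly 2.5:1 odds against the null, which is weak evidence by most standards.
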
  7. The Problem of Underdetermination in Model Selection

    Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.
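
    A small numerical illustration of underdetermination, assuming `numpy` and `statsmodels`, with data invented so that two substantively different specifications fit almost equally well:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 500
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.02, size=n)  # near-duplicate of x1 by construction
    y = 2.0 * x1 + rng.normal(size=n)         # the data are generated from x1 alone

    m1 = sm.OLS(y, sm.add_constant(x1)).fit()  # theory A: x1 drives y
    m2 = sm.OLS(y, sm.add_constant(x2)).fit()  # theory B: x2 drives y

    print(f"model A: R^2 = {m1.rsquared:.3f}, AIC = {m1.aic:.1f}")
    print(f"model B: R^2 = {m2.rsquared:.3f}, AIC = {m2.aic:.1f}")
    # Fit alone barely separates the two models, so the choice between them
    # must lean on the plausibility of their respective assumptions.
    ```
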
  8. Comment: Bayes, Model Uncertainty, and Learning from Data

    The problem of model uncertainty is a fundamental applied challenge in quantitative sociology. The authors’ language of false positives is reminiscent of Bonferroni adjustments and the frequentist analysis of multiple independent comparisons, but the distinct problem of model uncertainty has been fully formalized from a Bayesian perspective.
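
    One common Bayesian formalization approximates posterior model probabilities from BIC values under equal prior odds across candidate models. A sketch assuming `numpy` and `statsmodels`, with a hypothetical model space of three OLS specifications:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    x1, x2 = rng.normal(size=(2, n))
    y = 1.0 + 0.5 * x1 + rng.normal(size=n)  # x2 is irrelevant by construction

    designs = {  # hypothetical candidate specifications
        "intercept only": np.ones((n, 1)),
        "x1":             sm.add_constant(x1),
        "x1 + x2":        sm.add_constant(np.column_stack([x1, x2])),
    }
    bics = {name: sm.OLS(y, X).fit().bic for name, X in designs.items()}

    # P(M | data) is approximately proportional to exp(-BIC / 2) under equal
    # prior model probabilities; subtract the minimum BIC for stability.
    best = min(bics.values())
    weights = {m: np.exp(-0.5 * (b - best)) for m, b in bics.items()}
    total = sum(weights.values())
    for m in designs:
        print(f"{m:15s}  BIC = {bics[m]:7.1f}  P(M|data) = {weights[m] / total:.3f}")
    ```

    Model uncertainty then appears directly as a probability distribution over specifications rather than as a multiple-comparisons correction.
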
  9. Comment: Some Challenges When Estimating the Impact of Model Uncertainty on Coefficient Instability

    I once had a colleague who knew that inequality was related to an important dependent variable. This colleague knew many other things, but I focus on inequality as an example. It was difficult for my colleague to know just how to operationalize inequality. Should it be the percentage of income held by the top 10 percent, top 5 percent, or top 1 percent of the population? Should it be based on the ratio of median black income to median white income, or should it be the log of that ratio? Should it be based on the Gini index, or perhaps the Theil index would be better?
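
    All of the alternatives the colleague weighed are easy to compute side by side. A sketch assuming only NumPy, with simulated lognormal incomes standing in for real data:

    ```python
    import numpy as np

    def gini(income):
        """Gini index on ascending data: sum((2i - n - 1) * x_i) / (n * sum(x))."""
        x = np.sort(np.asarray(income, dtype=float))
        n = x.size
        i = np.arange(1, n + 1)
        return np.sum((2 * i - n - 1) * x) / (n * x.sum())

    def theil(income):
        """Theil T index: mean of (x / mu) * ln(x / mu)."""
        x = np.asarray(income, dtype=float)
        r = x / x.mean()
        return float(np.mean(r * np.log(r)))

    def top_share(income, pct):
        """Share of total income held by the top `pct` fraction of earners."""
        x = np.sort(np.asarray(income, dtype=float))[::-1]
        k = max(1, int(round(pct * x.size)))
        return x[:k].sum() / x.sum()

    incomes = np.random.default_rng(1).lognormal(mean=10, sigma=0.8, size=10_000)
    print(f"Gini = {gini(incomes):.3f}   Theil = {theil(incomes):.3f}   "
          f"top 10% share = {top_share(incomes, 0.10):.3f}   "
          f"top 1% share = {top_share(incomes, 0.01):.3f}")
    ```

    Because these indices weight different parts of the distribution differently, they can rank the same pair of distributions in opposite orders, which is exactly why the operationalization choice is consequential.
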
  10. Qualitative Comparative Analysis in Critical Perspective

    Qualitative comparative analysis (QCA) appears to offer a systematic means for case-oriented analysis. The method not only promises to provide a standardized procedure for qualitative research but also serves, to some, as an instantiation of deterministic methods. Others, however, contest QCA because of its deterministic lineage. Multiple other issues surrounding QCA, such as its handling of measurement error and its ability to ascertain asymmetric causality, are also matters of interest.
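
    At the center of QCA is the truth table: cases are reduced to configurations of binary (crisp-set) conditions, and each configuration is scored by how consistently it produces the outcome. A minimal sketch assuming `pandas`, with toy cases and condition names (A, B, C, outcome Y) invented for illustration:

    ```python
    import pandas as pd

    # Toy crisp-set data: binary causal conditions A, B, C and outcome Y.
    cases = pd.DataFrame({
        "A": [1, 1, 0, 0, 1, 1, 0, 1],
        "B": [1, 0, 1, 1, 0, 1, 0, 1],
        "C": [0, 0, 1, 0, 1, 0, 1, 0],
        "Y": [1, 0, 1, 1, 0, 1, 0, 0],
    })

    # Truth table: one row per observed configuration, with the number of
    # cases and the consistency of the outcome within that configuration.
    truth_table = (
        cases.groupby(["A", "B", "C"])["Y"]
             .agg(n="size", consistency="mean")
             .reset_index()
    )
    print(truth_table)
    ```

    Configurations with consistency strictly between 0 and 1 (here A=1, B=1, C=0) are where the determinism debate bites: a single deviant case turns a seemingly sufficient configuration into a contradictory one, which is also why sensitivity to measurement error is a live concern.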