American Sociological Association



Search results (141 found)

  1. Estimating Income Statistics from Grouped Data: Mean-constrained Integration over Brackets

    Researchers studying income inequality, economic segregation, and other subjects must often rely on grouped data—that is, data in which thousands or millions of observations have been reduced to counts of units by specified income brackets.
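    As a minimal illustration of the grouped-data problem (not the authors' mean-constrained integration method), the sketch below applies the naive midpoint estimator to hypothetical bracket counts: every unit is assumed to sit at its bracket midpoint, which is exactly the kind of coarse approximation methods like the paper's aim to improve on. All brackets and counts are invented for the example.

```python
# Hypothetical grouped income data: bracket bounds (dollars) and unit counts.
brackets = [(0, 25_000), (25_000, 50_000), (50_000, 100_000), (100_000, 200_000)]
counts = [120, 180, 150, 50]

# Naive midpoint estimator: assume every unit sits at its bracket midpoint.
total = sum(counts)
mean_est = sum(c * (lo + hi) / 2 for (lo, hi), c in zip(brackets, counts)) / total
print(mean_est)  # 54000.0
```

    Note that this estimator ignores the within-bracket distribution entirely, and open-ended top brackets ("$200,000 and over") have no midpoint at all, which is one reason more careful integration-based estimators are needed.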
  2. Deciding on the Starting Number of Classes of a Latent Class Tree

    In recent studies, latent class tree (LCT) modeling has been proposed as a convenient alternative to standard latent class (LC) analysis. Instead of using an estimation method in which all classes are formed simultaneously given the specified number of classes, in LCT analysis a hierarchical structure of mutually linked classes is obtained by sequentially splitting classes into two subclasses. The resulting tree structure gives a clear insight into how the classes are formed and how solutions with different numbers of classes are substantively linked to one another.
  3. Nonlinear Autoregressive Latent Trajectory Models

    Autoregressive latent trajectory (ALT) models combine features of latent growth curve models and autoregressive models into a single modeling framework. The development of ALT models has focused primarily on models with linear growth components, but some social processes follow nonlinear trajectories. Although it is straightforward to extend ALT models to allow for some forms of nonlinear trajectories, the identification status of such models, approaches to comparing them with alternative models, and the interpretation of parameters have not been systematically assessed.
  4. Causal Inference with Networked Treatment Diffusion

    Treatment interference (i.e., one unit’s potential outcomes depend on other units’ treatments) is prevalent in social settings. Ignoring treatment interference can lead to biased estimates of treatment effects and incorrect statistical inferences. Some recent studies have started to incorporate treatment interference into causal inference. But treatment interference is often assumed to follow a simple structure (e.g., treatment interference exists only within groups) or is measured in a simplistic way (e.g., based only on the number of treated friends).
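    The "number of treated friends" measure mentioned above can be made concrete with a short sketch. The network, treatment assignment, and variable names below are hypothetical, and this simple count is precisely the kind of simplistic exposure measure the abstract says richer diffusion-based approaches move beyond.

```python
# Hypothetical friendship network (adjacency list) and binary treatment assignment.
friends = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}
treated = {"a": 1, "b": 0, "c": 1, "d": 0}

# Simplistic exposure measure: each unit's number of treated friends.
exposure = {i: sum(treated[j] for j in nbrs) for i, nbrs in friends.items()}
print(exposure)  # {'a': 1, 'b': 2, 'c': 1, 'd': 0}
```

    A count like this treats all ties identically and ignores how treatment effects actually diffuse through the network, which motivates the more structured interference models discussed in the article.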
  5. Item Location, the Interviewer–Respondent Interaction, and Responses to Battery Questions in Telephone Surveys

    Survey researchers often ask a series of attitudinal questions with a common question stem and response options, known as battery questions. Interviewers have substantial latitude in deciding how to administer these items, including whether to reread the common question stem on items after the first one or to probe respondents’ answers. Despite the ubiquity of these items, there is virtually no research on whether respondent and interviewer behaviors on battery questions differ across items in a battery or whether interviewer behaviors are associated with answers to these questions.
  6. Limitations of Design-based Causal Inference and A/B Testing under Arbitrary and Network Interference

    Randomized experiments on a network often involve interference between connected units, namely, a situation in which an individual’s treatment can affect the response of another individual. Current approaches to deal with interference, in theory and in practice, often make restrictive assumptions on its structure—for instance, assuming that interference is local—even when using otherwise nonparametric inference strategies.
  7. Rejoinder: On the Assumptions of Inferential Model Selection—A Response to Vassend and Weakliem

    I am grateful to Professors Vassend and Weakliem for their comments on my paper (this volume, pp. 52–87) and its admittedly unusual approach to model selection and to the Sociological Methodology editors for the opportunity to respond. My goal here is not to defend the inferential information criterion (IIC) against all the points brought out by Vassend (this volume, pp. 91–97) and Weakliem (this volume, pp. 88–91). My paper aimed to (1) show how methodological assumptions interfere with inferences about theory and (2) develop a practical approach to minimize this interference.
  8. Comment: The Inferential Information Criterion from a Bayesian Point of View

    As Michael Schultz notes in his very interesting paper (this volume, pp. 52–87), standard model selection criteria, such as the Akaike information criterion (AIC; Akaike 1974), the Bayesian information criterion (BIC; Schwarz 1978), and the minimum description length principle (MDL; Rissanen 1978), are purely empirical criteria in the sense that the score a model receives does not depend on how well the model coheres with background theory. This is unsatisfying because we would like our models to be theoretically plausible, not just empirically successful.
  9. Comment: Evidence, Plausibility, and Model Selection

    In his article, Michael Schultz examines the practice of model selection in sociological research. Model selection is often carried out by means of classical hypothesis tests. A fundamental problem with this practice is that these tests do not give a measure of evidence. For example, if we test the null hypothesis β = 0 against the alternative hypothesis β ≠ 0, what is the largest p value that can be regarded as strong evidence against the null hypothesis? What is the largest p value that can be regarded as any kind of evidence against the null hypothesis?
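    To ground the question being raised, the sketch below computes a standard two-sided p value for a test of β = 0 under a normal approximation, using an invented estimate and standard error. The point of the comment stands out here: the p value is a tail probability computed assuming the null is true, not a measure of how strongly the data favor β ≠ 0 over β = 0.

```python
import math

# Hypothetical coefficient estimate and standard error.
beta_hat, se = 0.20, 0.10
z = beta_hat / se  # z statistic for H0: beta = 0

# Two-sided p value under a normal approximation: P(|Z| > z) = erfc(z / sqrt(2)).
p = math.erfc(abs(z) / math.sqrt(2))
print(round(p, 4))  # 0.0455
```

    A p value of about 0.05 is conventionally treated as "strong evidence" against the null, but nothing in the calculation itself justifies that reading, which is the gap the comment presses on.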
  10. The Problem of Underdetermination in Model Selection

    Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.