A Year in the Life of ASA’s Sociology of Education: A View from the Inside

by Karl Alexander,* Angela Estacion, Christopher Tracy and Yingyi Ma, The Johns Hopkins University

At Sociology of Education (SoE), only one in five submissions is accepted for publication, and then typically after one, two, or even three rounds of often extensive revisions. How does that sifting and sorting get done, and why should we care? The latter is easy. As intellectual and professional gatekeepers, journal editors help define (if tacitly) what constitutes good scholarship (Clemens et al., 1995), and reputations rise and fall on their decisions (Markovsky, 2000). But what of the process that leaves behind 80% of submissions? That part of journal gatekeeping is shrouded in mystery. To help clear the mystery, we report data on the characteristics of 85 of the 95 first-time submissions received during 2003, Alexander’s first year as SoE editor.1

The Submission Profile

Table 1 provides data on the pool of 2003 submissions. These are the manuscripts received for consideration. Most use primarily quantitative methods (66%), deal with social inequality (70%) and issues of academic achievement and/or attainment (74%), pertain to older youth (just 9% address pre-high school), and offer a decidedly U.S.-centric focus (74%).2

First authorship is reasonably balanced by gender (54% female; 46% male), and there is a slight preponderance of solo-authored submissions (58%). Larger differences are associated with authors’ institutional base. All but a handful of submissions are from academic institutions, ASA members account for 60% of first authors, and because first authors at foreign institutions account for only 20% of SoE submissions, U.S. academic sociology clearly dominates the pool. With 60% of submissions coming from schools outside U.S. News and World Report’s top tier of graduate programs (2005), and roughly a fourth authored by professionals in training (i.e., graduate students and post-doctoral fellows), the submission pool is not especially elitist or exclusionary.

The next section considers whether these manuscript (Table 2) and author (Table 3) characteristics distinguish successful from unsuccessful submissions.

Publication Prospects

Twenty-six percent of the 85 submissions were rejected at the editor’s discretion (Column 1),3 52% were rejected in light of outside reviews, and 22% were accepted for publication.4,5 These figures are the frame of reference for identifying disparities, but we caution that case coverage is small and small differences should not be over-interpreted.

According to Table 2, the papers most often rejected by the editor employ methodologies other than secondary analysis of quantitative data (including quantitative analysis of original data), do not fit into a life-stage framework (or center on higher education), focus on topics other than achievement or attainment (or try to address both), and are situated outside the United States. These kinds of papers also are a small fraction of the submission pool, so being “different” elevates risk of in-house rejection. That said, some uncommon topics and approaches fare reasonably well. Papers employing qualitative methods and those not focused on inequality are accepted at close to the overall rate, while papers about early schooling (a tiny number) do better still, with about half being accepted.

The most prominent consideration involving author characteristics (Table 3) is ASA membership, with 41% of non-member submissions rejected outright by the editor (compared to the editor’s 26% overall rejection rate). Papers from outside the United States and outside U.S. News and World Report’s favored circle of top-25 institutions also are subject to high levels of in-house rejection.

Half (52%) of the submissions are rejected based on the advice of external reviewers.6 The kinds of authors who fare best (and worst) parallel the pattern for in-office rejections, with the main distinctions being author characteristics that identify the paper as inside or outside the mainstream of U.S. academic sociology. Just 9% of submissions from authors who are not ASA members and 12% of submissions from authors located outside the country and at non-academic institutions are accepted (compared to the 22% overall acceptance rate). On the other hand, ASA membership, top-25 institutional standing, and graduate student/post-doc professional standing enhance publication prospects, with acceptance rates roughly 10 percentage points above the overall rate.

With a small pool of manuscripts, it is not possible to push very far in examining manuscript characteristics in combination. Even so, the exercise can be instructive. For example, of the 14 papers authored by non-ASA members and rejected outright by the editor, 36% do not focus on a particular life stage, 43% use some “other” type of methodology/data source, 36% are about neither achievement nor attainment, and 86% of first authors are not at top-ranked sociology programs. (These figures are not reported in the tables.) This accords with the “outsider” image of in-office rejections: these papers’ contents are atypical relative to the SoE “norm,” and just two first authors can claim high disciplinary institutional stature.

Another example: Nine of the 11 accepted first authors at top-ranked sociology programs are well positioned institutionally in that they are ASA members and reside in the United States. Eight use quantitative analysis of secondary data, the “industry standard” methodology. On the other hand, six examine multiple life stages (well above what the overall acceptance rate for such papers would lead one to expect), and three report original qualitative data. So perhaps operating from a secure institutional base affords license to do the unusual (or those so situated do the unusual uncommonly well).

Discussion

The present exercise suffers obvious limitations. Without an external standard of quality, the truly large questions cannot be resolved (e.g., whether deserving manuscripts are rejected simply because they are different). Additionally, just one year of one editor’s term no doubt harbors idiosyncrasies, the submission pool is too small to support fine-grained inspection, and a specialty journal with “education” in its name likely invites more out-of-field submissions than would a generalist journal. For these reasons, the patterns presented may lack generality, but they are informative nonetheless.

The submission “modalities” show topical and methodological skews, with secondary quantitative analyses, issues of inequality or stratification, and the older stages of the student career dominating the pool. In raw numbers, these submissions also make up the bulk of accepted papers, so as an inductive exercise their intersection might be said to constitute the core of mainstream sociology of education, at least as represented in SoE. Still, original qualitative submissions, papers that span multiple life stages or focus on the early period of schooling, and papers not about inequality all have average or above-average acceptance rates. This suggests that encouraging more “non-traditional” submissions would broaden SoE’s content, assuming that is desirable (see Lucas, 1999, for comment).

But submitted by whom? The authors who fare best are based in the United States at high-ranking universities and are members of the ASA. The journal’s editorial procedures thus appear to have a decidedly domestic-academic-professional sociology bias. ASA very likely intends the last of the three, but it is less clear what to make of biases favoring U.S. submissions and authors at top-ranked sociology programs. Do such authors know the standards better? Or are they simply more adept at packaging “what sells”? These are questions we would like to be able to answer, but cannot. On the other hand, that there is no discernible publication advantage or disadvantage by professional rank reflects well on the journal’s openness.

As McGinty (1999) notes, the scholarly journal ultimately rests on a foundation of trust. For advice, editors typically rely on those whom they know personally or by professional reputation. With “like advising like,” it is easy to see how deserving outsiders could be closed out. An editor’s integrity is the only real safeguard, but where trust is expected, there is a reciprocal obligation of openness. The annual editors’ reports published by ASA are informative, but not in the way the present article has tried to be. Our procedures were ad hoc and limited,7 but the data we coded on author and manuscript characteristics easily could be collected upon intake and later linked to disposition. Our one-year experiment establishes that a more probing kind of editorial accounting is feasible. Is it worth doing routinely?

* Karl Alexander is the immediate past editor of ASA’s journal Sociology of Education (SoE). Barbara Schneider, Michigan State University, is the current editor. SoE was first published in 1963 and is one of ASA’s 10 scholarly journals.

Notes

1 The other 10 were still open when Alexander’s term ended.
2 We also coded detailed substantive foci (e.g., tracking, teacher effects) using a classification culled from several sociology of education handbooks. Unfortunately, the separate categories were too sparsely populated to support meaningful comparisons.
3 With peer review as the norm in science (Clemens et al., 1995: 442), the 26% figure for “in-office” rejections seems high, a point noted in Alexander’s first two editorial reports (2004, 2005). Editors have considerable latitude in this regard (e.g., McGinty, 1999). At ASR, for example, Simon (1994) believed that as many as a third of submissions could properly have been decided by her alone.
4 This set of accepted manuscripts is not identifiable through the journal’s table of contents. Their publication schedule spans three volume years and they are intermingled with papers from other years.
5 Our approach of tracking a fixed pool of submissions yields a higher acceptance rate than ASA’s “official” accounting system. Figures in editors’ Annual Reports are calculated as the ratio of manuscripts accepted in a given year to the number of submissions received during the year, with resubmissions counted separately in the denominator; e.g., a paper received on January 1 and twice revised during the year counts as three papers in the base. England’s editorial report for ASR (1995) makes a like point: the journal’s 13.4% acceptance rate for 1994 increases to 19.7% when calculated as “# accepted manuscripts/(acceptances + rejections)”. (A sketch of the two calculations appears at the end of these notes.)
6 Exceptions are papers (1) using secondary quantitative analysis, (2) focusing on high schools, and/or (3) overlapping multiple stages of schooling. All are rejected at a rate higher than the overall rate, based on the advice of external reviewers.
7 For example, we could not think of a good way to ascertain author race/ethnicity.
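
To make the two accounting schemes in note 5 concrete, the arithmetic can be written out as follows (a sketch only; the symbols A, N, V, and R are ours, introduced for illustration, not ASA’s notation):

% A = manuscripts accepted during the year
% N = new submissions received during the year
% V = revised versions resubmitted during the year (each counted separately)
% R = manuscripts from the fixed pool ultimately rejected
\[
\text{official rate} = \frac{A}{N + V},
\qquad
\text{fixed-pool rate} = \frac{A}{A + R}.
\]
% England (1995) reports both figures for ASR in 1994: 13.4% under the
% first definition and 19.7% under the second.

The gap between the two widens as revision-and-resubmission becomes more common, since each additional round inflates only the official denominator.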

References

Alexander, Karl L. 2004. “Sociology of Education Editorial Report: 2003.” Footnotes 32(4).

———. 2005. “Sociology of Education Editorial Report: 2004.” Footnotes 33(4).

Clemens, Elisabeth S., Walter W. Powell, Kris McIlwaine, and Dina Okamoto. 1995. “Careers in Print: Books, Journals, and Scholarly Reputations.” American Journal of Sociology 101(2):433–94.

England, Paula. 1995. “Editors’ Reports: American Sociological Review.” Footnotes 23(4):13.

Lucas, Samuel R. 1999. “In Defense of Diversity.” Footnotes 27(7).

Markovsky, Barry. 2000. “Departments Ranked by Journal Publications.” Footnotes 28(2).

McGinty, Stephen. 1999. Gatekeepers of Knowledge: Journal Editors in the Sciences and the Social Sciences. Westport, CT: Bergin and Garvey.

Simon, Rita J. 1994. “An Effective Journal Editor: Insights Gained from Editing the American Sociological Review.” Pp. 33–44 in Editors as Gatekeepers: Getting Published in the Social Sciences, edited by R. J. Simon and J. J. Fyfe. Lanham, MD: Rowman and Littlefield Publishers, Inc.

U.S. News and World Report. 2005. “America’s Best Graduate Schools 2006.” Special issue of U.S. News and World Report.