In late September, the National Research Council (NRC) released its long-awaited assessment of the quality of U.S. doctoral programs, which includes data on more than 5,000 programs in 62 fields at 212 universities nationwide. According to the NRC, the Data-Based Assessment of Research-Doctorate Programs in the United States report was designed to help universities evaluate and improve the quality of their programs and to provide prospective students with information on the nation’s doctoral programs. For the full report, including the data and illustrative rankings and characteristics for all programs, see www.nap.edu/rdp/.
The $6.7 million assessment of doctoral programs includes an extensive report explaining the study committee's methodology and general findings about doctoral education. With appendices, the prepublication assessment of research-doctorate programs runs 280 pages. The report's spreadsheet includes five sets of illustrative ranges of rankings that show how the data can be used to compare programs according to specific program characteristics, depending on the interests of users (e.g., students, faculty, administrators, the general public). Additionally, PhDs.org, an independent website not affiliated with the National Research Council, incorporated data from the research-doctorate assessment into its Graduate School Guide. Users can choose the weights assigned to the program characteristics measured by the National Research Council and others, and rank graduate programs according to their own priorities. In addition, the report contains ranges of rankings for three dimensions of program quality: research activity, student support and outcomes, and diversity of the academic environment. This is a marked contrast to the traditional approach used by other organizations, which attempt to summarize all program characteristics in a single number.
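The weight-then-rank approach described above can be sketched in a few lines. This is a hypothetical illustration, not the NRC's or PhDs.org's actual computation: the program names, characteristics, and weights below are invented, and real use would require normalizing the measured characteristics to comparable scales first.

```python
def rank_programs(programs, weights):
    """Rank programs by the weighted sum of their characteristic values.

    programs: dict mapping program name -> dict of characteristic -> value
    weights:  dict mapping characteristic -> user-chosen weight
    Returns program names sorted from highest to lowest weighted score.
    """
    def score(measures):
        # Characteristics with no user-assigned weight contribute nothing.
        return sum(weights.get(c, 0) * v for c, v in measures.items())
    return sorted(programs, key=lambda name: score(programs[name]), reverse=True)

# Invented example data: three programs, two (already normalized) characteristics.
programs = {
    "Program A": {"publications_per_faculty": 2.0, "completion_rate": 0.40},
    "Program B": {"publications_per_faculty": 1.0, "completion_rate": 0.80},
    "Program C": {"publications_per_faculty": 1.5, "completion_rate": 0.60},
}

# A user who cares mostly about student outcomes weights completion heavily.
print(rank_programs(programs, {"publications_per_faculty": 0.2,
                               "completion_rate": 2.0}))
# ['Program B', 'Program C', 'Program A']
```

A different user, weighting faculty publications heavily instead, would see a different order from the same data, which is exactly the "choose your own priorities" behavior the Graduate School Guide offers.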
The assessment’s data cover 20 characteristics such as faculty publications, grants, and awards; student GRE scores, financial support, and employment outcomes; and program size, time to degree, and faculty composition. Measures of faculty and student diversity are also included. Two characteristics are intended to portray the overall quality of a program—one from a reputational survey, the R scale, and the other from a quantitative analysis, the S scale. The data set will allow universities to update key information on a regular basis.
"All those interested in graduate education can learn much from studying the data, comparing programs, and drawing lessons for how programs can be improved," said the presidents of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine in a foreword to the assessment.
The first assessment since the NRC's 1995 report, and three years behind schedule, the new NRC evaluation has sparked fierce debate among scholars and academic programs. One key critique centers on the age of the data: the survey responses were collected from questionnaires distributed to faculty, administrators, and students, as well as from public sources, for academic year 2005-06. According to an October 25 Inside Higher Ed article, another major source of frustration with the assessment is that the NRC used two methodologies, yielding sometimes divergent results, and that the results for each methodology were reported as ranges rather than a single ranking. This can lead to confusion over the actual rankings. An October 1 Science magazine article likened the NRC assessment to a Mr. Potato Head, which "can look quite different depending on your definition of 'best'." Divergence between results on the R and S scales can add to confusion over the program assessments, making it difficult for schools to draw conclusions about their departments' rankings.
National Research Council officials made a point of saying that the system was designed to allow departments to report two ranges to reflect their relative status. They qualified their data by stating in their press release, "The rankings are not endorsed by the Research Council as an authoritative conclusion about the relative quality of doctoral programs, but illustrate the kinds of valuations that can be generated from the data. . . . While the illustrative rankings should not be interpreted as definitive conclusions about the relative quality of doctoral programs, they provide important insights on how programs can be ranked according to different criteria and on the characteristics faculty value most."
Each doctoral program’s score is presented as a range of rankings reflecting the 5th and 95th percentiles. The NRC report urges prospective students, administrators, faculty members, and others to consider which characteristics are most important to them and to compare programs accordingly. Tutorials that demonstrate how to use the spreadsheets to compare programs can be found at http://www.nap.edu/rdp.
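The "range of rankings" idea above amounts to reporting the 5th and 95th percentiles of a distribution of simulated rankings rather than a single number. The sketch below assumes a list of simulated ranks for one program is already in hand; it does not reproduce the NRC's actual resampling procedure, and the function name and indexing convention are illustrative.

```python
def ranking_range(simulated_ranks, lo=0.05, hi=0.95):
    """Return the (5th, 95th) percentile ranks from simulated rankings.

    simulated_ranks: list of the rank a program received in each
    simulation run (1 = best). Uses simple index-based percentiles,
    a rough convention chosen for illustration.
    """
    ranks = sorted(simulated_ranks)

    def pct(p):
        idx = min(int(p * len(ranks)), len(ranks) - 1)
        return ranks[idx]

    return pct(lo), pct(hi)

# Invented example: a program ranked anywhere from 1st to 100th across
# 100 simulation runs would be reported as a wide, uninformative range.
print(ranking_range(list(range(1, 101))))
```

A program whose simulated ranks cluster tightly gets a narrow, informative range; one whose ranks scatter widely gets a wide range, which is one reason the ranges for some programs are hard to interpret.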
A not-so-fun fact from the report: The number of students enrolled in doctoral programs has increased in engineering by 4 percent and in the physical sciences by 9 percent but has declined in the social sciences by 5 percent and the humanities by 12 percent. And, over 50 percent of students in agricultural science and engineering complete their degrees in six years or less, while only 37 percent of those in the social sciences do so. On the bright side, minority Ph.D.s increased from 5.2 percent to 10.1 percent in engineering, and from 5 percent to 14.4 percent in the social sciences.