

DOCUMENT RESUME

ED 482 469                                HE 036 328

AUTHOR          Ehrenberg, Ronald G.
TITLE           Method or Madness? Inside the "USNWR" College Rankings.
SPONS AGENCY    Andrew W. Mellon Foundation, New York, NY.
PUB DATE        2003-09-07
NOTE            23p.; Prepared by Cornell Higher Education Research
                Institute, Cornell University. Support also provided by
                Atlantic Philanthropies. Paper presented at the Wisconsin
                Center for the Advancement of Postsecondary Education Forum
                on The Use and Abuse of College Rankings (Madison, WI,
                November 20-21, 2003).
PUB TYPE        Reports - Descriptive (141) -- Speeches/Meeting Papers (150)
EDRS PRICE      EDRS Price MF01/PC01 Plus Postage.
DESCRIPTORS     Academic Achievement; Competition; *Educational Quality;
                Evaluation Methods; Higher Education; *Institutional
                Characteristics
IDENTIFIERS     *Ranking; Scholastic Aptitude Test; US News and World Report



ABSTRACT 

This paper examines why Americans are so preoccupied with the 
"U.S. News and World Report" ("USNWR") annual rankings of colleges and 
universities and why higher education institutions have become equally 
preoccupied with them. It discusses the rankings categories (academic 
reputation, student selectivity, faculty resources, graduation and retention 
rate, alumni giving, financial resources, and graduation rate performance), 
and it notes how the rankings methodology allows colleges and universities to 
take actions to manipulate their rankings and the effects that such actions 
have on higher education. The paper questions why colleges and universities 
continue to participate in the rankings if they are flawed, discussing some 
of the major problems with the rankings. The paper concludes that the problem 
with "USNWR" rankings is not its presentation of information on individual 
data elements but rather its effort to aggregate these elements into a single 
index, noting that if it stopped doing this, many of the objections that 
people have about the ratings would go away. Finally, the paper offers thoughts 
about how the "USNWR" could alter its rating formula in ways that would be 
more socially desirable. (Contains 20 references.) (SM) 






Revised Draft 
September 7, 2003 
Comments Solicited 






“Method or Madness? Inside the USNWR College Rankings” 



by 



Ronald G. Ehrenberg* 






(Prepared for presentation at the Wisconsin Center for the Advancement of 
Postsecondary Education Forum on The Use and Abuse of College Rankings, Madison, 
Wisconsin, November 20-21, 2003) 






* Irving M. Ives Professor of Industrial and Labor Relations and Economics at Cornell 
University, Director of the Cornell Higher Education Research Institute (CHERI), and 
Research Associate at the National Bureau of Economic Research. I am grateful to the 
Atlantic Philanthropies (USA) Inc. and the Andrew W. Mellon Foundation for their 
financial support of CHERI. 



I. Introduction 

College guides have been providing information about the characteristics of different 
undergraduate institutions to help high school students decide to which institutions to 
apply for longer than most people can remember. Barron's Profiles of American Colleges 
2003 (which is updated every other year), The Fiske Guide to Colleges 2004, Peterson's 
Four Year Colleges 2004 and the Insider's Guide to Colleges 2003 represent the 25th, 
20th, 34th and 29th editions, respectively, of these venerable publications. In addition to 
providing detailed data and narratives about each college, many of the long-standing 
guides group institutions into broad categories. Barron's, for example, ranks each 
institution by the selectivity of its entering freshman class (measured by entrance test 
scores), grouping institutions into broad categories such as highly selective, selective, 
nonselective and open enrollment. No attempt is made, however, to differentiate between 
institutions within each group. Similarly, The Fiske Guide awards up to 5 stars to each 
institution on three dimensions thought to be important to potential students: academics, 
social life and quality of life. 

U.S. News & World Report (USNWR) shook up the college guide industry when it 
began publishing its annual rankings of colleges in 1983. The issue containing the summary 
of its annual rankings of colleges as undergraduate institutions, which appears each fall, is 
by far the best-selling issue of USNWR each year and, together with its more 
comprehensive annual America's Best Colleges publication, it has become the "gold 
standard" of the college ranking business. 

USNWR's rapid rise to the top derives from its rankings' appearance of scientific 
objectivity (institutions are rated along various dimensions, with explicit weights being 
assigned to each dimension), along with the fact that USNWR then ranks the top 50 
institutions in each category (for example, national universities and liberal arts colleges). 1 
Each year, immediately before and after the USNWR college rankings issue hits the 
newsstand, stories about the USNWR rankings appear in virtually every major newspaper 
in the United States. 

I begin my remarks by discussing why Americans have become so preoccupied with 
the USNWR rankings and why higher education institutions have become equally 
obsessed with them. Next I discuss how the rankings methodology allows colleges and 
universities to take actions to manipulate their rankings and the effects that such actions 
have on higher education. I then ask why, if the rankings are flawed, colleges and 
universities continue to participate in them, and I discuss some of the major problems with 
the ratings. Finally, I offer some brief concluding thoughts about how USNWR could alter 
its rating formula in ways that I believe would be socially desirable. 

II. Why Americans Have Become Obsessed with College Rankings 

As Caroline Hoxby (1999) has pointed out, American higher education has 
experienced a dramatic change in its market structure during the last 60 years. In 1949 
about 93% of all undergraduate college students attended college in the state in which 
they went to high school; this figure fell to about 85% in the early 1960s, 77% in the 
early 1980s, and 75% by the mid 1990s. 2 Accompanying this increased mobility of 
students across state lines has come an increased stratification of students and colleges 
by students' academic backgrounds. For example, average SAT scores of entering 

1 This number increased to 126 for the top national universities and 110 for the top national liberal arts 
colleges in the 2004 USNWR rankings. 

2 Caroline Hoxby (1998a), table 1a. The changes have been even more dramatic for private higher 
education, falling from about 85% to 56% during the period. 







students now vary much more across colleges than they did in the past and within each 
college the range of SAT scores of entering students has declined. 3 These changes have 
been attributed to a number of factors including reductions in transportation and 
communication costs, the establishment of federal financial aid programs and a shift to 
need-blind admissions at many institutions in the 1970s, the growing use of standardized 
admissions tests in admission decisions and the growth of tuition reciprocity agreements 
by public institutions, which allow students from one state to attend another state’s public 
colleges and universities (if they qualify for admission) at less than the second state’s 
normal out-of-state tuition. 4 As a result of these changes, colleges and universities have 
increasingly found themselves competing for students in a national market. 

During the 1980s and 1990s, the distribution of earnings in the United States 
became more unequal on a number of dimensions. 5 The earnings of college graduates 
grew relative to the earnings of high school graduates. For example, the ratio of the mean 
earnings of male college graduates ages 35-44 to the mean earnings of male high school 
graduates in the same age range rose from 1.41 to 1.76 between 1980 and 1999 and the 
comparable ratio for females rose from 1.36 to 1.79. 6 Perhaps more important, the 
dispersion of earnings among college graduates also grew. For example, in 1980 male 
college graduates ages 25-34 at the 80th percentile of the earnings distribution of their 
group earned about 2.27 times the earnings of similar male college graduates at the 20th 
percentile of the earnings distribution. By 1997, this ratio had increased to 2.54. 7 Not 
only is obtaining a college degree increasingly important for an individual’s economic 

3 Hoxby (1998a), tables 3 and 5 

4 Hoxby (1998a) and Michael Rizzo and Ronald Ehrenberg (forthcoming) 

5 Ronald G. Ehrenberg and Robert S. Smith (2003), chapter 14 

6 Ehrenberg and Smith (2003), table 14.3 

7 Ehrenberg and Smith (2003), table 14.5 



well-being but taking actions to increase the chances that he or she will wind up in the 
upper, rather than the lower, tail of college graduates’ earnings distributions is also 
increasingly important. 

With one exception, virtually all studies by economists suggest that attending higher 
quality colleges, as measured by the average SAT scores of entering students at the 
institution, is associated with higher post-college earnings and higher probabilities of 
enrolling in top graduate programs. 8 As such, parents, especially those with top test score 
students, have become increasingly preoccupied with, in my colleague Robert Frank's 
terminology, "buying the best" and the competition for slots at top schools has heated 
up. 9 Put simply, American high school graduates are increasingly seeking to go to the 
"best" college that they can. 

The average SAT score of the entering class is but one of the many characteristics of 
a college or university, and the finding that average SAT scores influence post-college 
success does not imply that this is the only characteristic of an 
academic institution that matters. By providing an ordinal ranking based upon a more 
comprehensive set of characteristics, USNWR helps to fuel the competition for slots at the 
top institutions. However, it is important to stress that it is only exacerbating the 
pressures that already exist; it is not the major cause of these pressures. 

While academic institutions regularly claim that they pay no attention to their 
USNWR rankings, they of course do. And well they should; an econometric study by 

8 See for example, Dominic Brewer, Eric Eide and Ronald Ehrenberg (1999), Eric Eide, Dominic Brewer 
and Ronald Ehrenberg (1998), Caroline Hoxby (1998b) and Caroline Hoxby and Bridget Terry (1999). The 
one exception is Stacy Dale and Alan Krueger (2002). However, Dale and Krueger did find that attendance 
at colleges that had higher expenditures per student was associated with higher earnings - a point that I will 
return to below. 

9 Robert Frank (2001) 







James Monks and myself of the experiences of 31 selective private colleges and 
universities found that when an institution improved in the rankings, other factors held 
constant, the next year it received more applications, could accept a smaller fraction of 
these applications (which made it look more selective), would have a greater fraction of 
its applicants accept its offers of admission (which further made it look more selective), 
would find that its entering students had higher SAT scores (which again would make it 
look more selective) and would be able to accomplish all these things by offering 
somewhat less generous financial aid packages. 10 Conversely, if it fell in the rankings, 
then the reverse of all of these things would occur. Lest one think that the USNWR 
rankings are of concern only to selective private colleges and universities, in my 
Reaching for the Brass Ring article, I document that lesser privates and public institutions 
also are concerned about the rankings. 11 

III. How Higher Education Institutions Try to Manipulate the USNWR 
Rankings 

Table 1 displays the seven categories (academic reputation, student selectivity, 
faculty resources, graduation and retention rate, financial resources, alumni giving and 
graduation rate performance) that USNWR uses to rank national universities and liberal 
arts colleges in its 2003 and 2004 rankings, the weight it assigns to each category, the 
sub-factors (if any) within each category and the sub-factor weights within each category. 
The only changes in USNWR's methodology between the two years were the elimination 
of an institution's yield on admitted applicants from its student selectivity ranking and 
changes in the sub-factor weights for the remaining sub-factors included in this category. 
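
To make the aggregation mechanics concrete, the short Python sketch below computes a 
USNWR-style composite score as a weighted sum of category scores, using the 2003 category 
weights from Table 1. The institution names and category scores are hypothetical, and USNWR 
additionally normalizes and rescales its data before ranking, so this illustrates only the 
weighted-sum step, not the magazine's full procedure.

    # Weighted-sum aggregation using the 2003 category weights from Table 1.
    # The institution scores below are hypothetical and assumed already
    # normalized to a 0-100 scale.
    CATEGORY_WEIGHTS = {
        "academic_reputation": 0.25,
        "student_selectivity": 0.15,
        "faculty_resources": 0.20,
        "graduation_and_retention": 0.20,
        "financial_resources": 0.10,
        "alumni_giving": 0.05,
        "graduation_rate_performance": 0.05,
    }

    def composite_score(category_scores):
        """Return the weighted sum of normalized (0-100) category scores."""
        return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

    # Two hypothetical institutions, for illustration only.
    institutions = {
        "College A": {"academic_reputation": 90, "student_selectivity": 85,
                      "faculty_resources": 80, "graduation_and_retention": 95,
                      "financial_resources": 70, "alumni_giving": 60,
                      "graduation_rate_performance": 75},
        "College B": {"academic_reputation": 80, "student_selectivity": 95,
                      "faculty_resources": 85, "graduation_and_retention": 90,
                      "financial_resources": 90, "alumni_giving": 80,
                      "graduation_rate_performance": 70},
    }

    ranked = sorted(institutions,
                    key=lambda name: composite_score(institutions[name]),
                    reverse=True)
    for rank, name in enumerate(ranked, start=1):
        print(rank, name, round(composite_score(institutions[name]), 1))

Because the final ordering depends entirely on the chosen weights, small changes to the 
weight vector can reorder otherwise similar institutions, which is the point developed in 
Section IV.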

10 James Monks and Ronald G. Ehrenberg (1999) 

"Ronald G. Ehrenberg (2003) 







The most important category, worth 25%, is an institution’s academic reputation, as 
measured by a survey of presidents, provosts and deans of admission at peer institutions. 
While institutions always like to publicize all of the wonderful things that are happening 
on their campuses to prospective students, recently some institutions have resorted to 
sending expensive publicity materials to key administrators at their competitor 
institutions as a way of influencing the rankings. 12 Hard data on the cost of such PR 
actions does not exist, but one must wonder whether the resources involved in such 
activities could have been more profitably devoted to further improving what is going on 
at the institutions. Informing competitors of all of the wonderful things that an institution 
is doing also puts pressure on competitors to emulate some of these things (or find more 
good things of their own to do) and thus this fuels the expenditure race that already exists 
in higher education and puts upward pressure on tuition. 

Student selectivity has a weight of 15% in the USNWR rankings. The institution’s 
acceptance rate, the proportion of its freshman applicants to whom it offers admission, 
counts for 10% of this category’s weight in 2004, down from 15% in 2003. Inclusion of 
the acceptance rate encourages institutions to reject otherwise outstanding applicants 
whom they believe are unlikely to enroll, encourages institutions to generate large pools of 
applicants who have little chance of being admitted to the institution, and encourages 
institutions to admit students early decision because, other things equal, the higher the 
proportion of students admitted early, the fewer the number of students that need 
to be admitted to generate any given class. The first practice increases potential students' 
uncertainty, since they can't be sure that their "safety schools" will admit them, the 
second puts extra workloads on institutions' admissions officers and leads to many 
12 Amy Argetsinger (2002) 







more students' hopes being dashed, and the third increases the pressure that many students 
face to apply early. Indeed, in response to concerns by the academic 
community that USNWR was further contributing to this pressure by including an 
institution's yield (the fraction of admitted students who accept an offer of admission), 
USNWR did eliminate yield from its rankings methodology in 2004. 
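
The arithmetic behind the early-decision incentive can be sketched as follows: because 
early-decision admits are committed to enroll, each seat filled early requires fewer total 
admits, which mechanically lowers the published acceptance rate. All figures in this 
hypothetical Python sketch are invented for illustration.

    # Hypothetical illustration: filling more of the class through early decision
    # lowers the acceptance rate because early admits enroll with (near) certainty.
    applications = 20_000
    target_class = 3_000
    regular_yield = 0.50   # assumed fraction of regular admits who enroll
    early_yield = 1.00     # early-decision admits are committed to enroll

    def acceptance_rate(early_admits):
        remaining_seats = target_class - early_admits * early_yield
        regular_admits = remaining_seats / regular_yield
        return (early_admits + regular_admits) / applications

    for early in (0, 500, 1000, 1500):
        print(f"early admits {early:4d}: acceptance rate {acceptance_rate(early):.1%}")

Under these assumed numbers, filling half the class early cuts the acceptance rate from 
30% to 22.5% without any change in the quality of the applicant pool.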

The final two sub-factors in the student selectivity category are the proportion of the 
institution's entering first-year class that is ranked in the top 10% of their high school 
classes and the average SAT (or ACT) score of all enrolled freshmen who took the test. 
Increasingly, high schools are not reporting the class rank of their students; for example, 
45% of Cornell's enrolled freshmen in the class of 2006 did not have their class ranks 
reported to the university, so the usefulness of this measure is unclear. 13 Just as there has 
been concern expressed that top 10% admission rules, such as those used by public 
higher education institutions in Texas prior to the recent Supreme Court ruling, may 
discourage students from attending challenging high schools with lots of top students, 
USNWR's use of the top 10% criterion may influence whom institutions admit at the margin 
and, via this route, where high school students go to school. 14 

Use of the average SAT score for all enrolled freshmen (who report such scores) 
affects institutional behavior in two ways. First, it provides an incentive for them to make 
the reporting of test scores optional. Doing so should lead more applicants to apply to a 
school (making the institution look more selective) because low test score students with 
otherwise acceptable records will now be more likely to apply. It should also increase the 
average test scores of students who report their scores, because it will be students with 

13 Cornell University Profile of the Class of 2006, available at 
http://dpb.cornell.edu/irp/factbook/admissions/undergraduate/profile.htm 

14 Edward Blum and Roger Clegg (2003) 







lower test scores who will be the non-reporters. Whether on balance students admitted 
without submitting their test scores will do as well at the institution as students who 
did submit test scores is an open question. 15 

Second, the use of average test scores provides an incentive for institutions to use 
merit aid to improve the average test scores of their entering classes. To the extent that this 
leads to an institution's having fewer resources available for need-based aid, it may limit 
access to higher education for individuals from lower-income families. Academic 
institutions, especially public ones that have a special obligation to provide access to all 
qualified applicants, need to think seriously about whether the focus on improving their 
students' average test scores is really in the public interest. 

The third category, with a weight of 20% in the USNWR rankings, is faculty 
resources. The largest sub-factor in this category, with a weight of 35%, is faculty 
compensation, which is defined as the average pay and benefits of full-time assistant, 
associate and full professors, adjusted for regional cost of living. An institution that hired 
full-time lecturers, at lower salaries, to do more of its undergraduate teaching and 
devoted the resources that it saved from doing so to increasing the average salaries of its 
tenure-track faculty would, other factors held constant, go up in the rankings and would 
suffer no penalty for this substitution. 16 Its full-time faculty would be better paid and 
happier, but would its students be worse off from having a smaller share of their classes 
taught by tenured and tenure-track faculty? 

15 Michael Robinson and James Monks (2002) study the early experiences at Mount Holyoke College after 
the college made submission of SAT scores optional for freshman applicants. They found that students who 
"under-performed" on the SAT relative to their high school GPAs were more likely not to submit their 
scores, that admissions officers rated these students higher than they otherwise would have ranked them, 
and that students who withheld their SAT scores had lower GPAs at Mount Holyoke than students who 
submitted their scores. 

16 It would suffer a penalty if it increased its usage of part-time faculty, but this sub-factor only has a 
weight of 5% in this category. 



An academic’s inclination is to say yes, but there are surprisingly few studies that 
have addressed this question. This is a fundamental question facing public higher 
education, which has seen this type of substitution, as well as increased substitution of 
part-time for full-time faculty, in recent years. For example, between the fall of 
1992 and the fall of 2001, the percentage of undergraduate credit hours generated by 
tenured and tenure track faculty fell from 81.0 to 58.4 percent at the four SUNY 
university centers (Albany, Binghamton, Buffalo and Stony Brook). 17 Unless the higher 
education community can demonstrate the negative impacts that such changes have on 
students, state policymakers are unlikely to consider taking actions to reduce these trends. 

USNWR’s next category, with a weight of 20 percent in the rankings, is the 
institution's graduation and retention rate, averaged over a number of years. The two 
sub-factors in this category are the institution's 6-year graduation rate for 
entering freshmen (with a weight of 80%) and its freshman retention rate (with a weight 
of 20%). Given the characteristics of admitted students, an institution can improve both 
rates by improving its instructional program and providing more support services to 
students or by relaxing its standards. Hopefully, institutions will not choose the latter 
course, but the rankings cannot distinguish between these two methods of improvement. 

As I discuss in Tuition Rising, transfer students compose a large share of all new 
students at many academic institutions. For example, of the 3622 new undergraduate 
students enrolling at Cornell University in the fall of 2002, 558 (or 15.4%) were transfer 
students. 18 At the SUNY 4-year campuses, the percentages are typically much higher, 



17 Ronald G. Ehrenberg and Daniel B. Klaff (2003), table 2 

18 Cornell University Fact Book, available at http://dpb.cornell.edu/irp/factbook.html 



ranging from 20.1% to 53.3% across the campuses in the fall of 1999. 19 While academic 
institutions have an educational interest, as well as a financial interest, in seeing their 
transfer students succeed through to graduation, USNWR's preoccupation with the 
success of full-time freshmen provides an incentive for academic institutions to worry 
more about these students than about their transfer-student classmates. 

A related problem associated with the retention and graduation rate variables is 
that USNWR cannot distinguish between people leaving the institution because of 
academic, personal, or financial problems and people leaving because of the opportunity 
to attend a better institution. My alma mater Harpur College (now Binghamton 
University) has a 6-year graduation rate that hovers around 80%, which always places it at 
or near the top of the campuses in the SUNY system on this measure, but well below the 
6-year graduation rates of over 90% at Ivy League colleges. Part of the reason for 
Binghamton’s not doing better on this measure is that a number of its top students 
transfer to Ivy League institutions, such as Cornell, at the end of their first semester or 
first year. Indeed, at Cornell we make it easy for many of these students to do this by 
guaranteeing them the ability to do so when they initially apply to us. Should 
Binghamton be penalized in the rankings because some of its students leave to go to 
higher-rated institutions? If it enrolled fewer top students, it might actually have a higher 
6-year graduation rate. 

Financial Resources is the fifth USNWR category and it has a weight of 10% in 
the overall ranking. Financial resources are measured by the amount that the institution 
spends per student on instruction, research, public service, academic support, student 
services, institutional support and operations and maintenance. Inclusion of expenditures 
19 Ronald G. Ehrenberg and Christopher L. Smith (forthcoming), table 2 







per student in the ranking penalizes institutions that attempt to hold down their 
expenditures and thus puts upward pressure on tuitions. Inclusion of research 
expenditures in this measure provides institutions with extra incentives to push their 
faculty to generate more external research funding, even if this diverts their faculty 
members’ attention away from undergraduate teaching. 

Alumni giving, as measured by the percentage of undergraduate alumni who 
donated money to an institution, with a weight of 5% in the index, is included as a proxy 
for how satisfied students are with the institution. The proportion of annual giving that 
institutions receive from alumni, as opposed to from other individuals, corporations and 
foundations varies widely across institutions for reasons that have little to do with alumni 
satisfaction, and thus the incentive that institutions have to devote resources to soliciting 
alumni funding varies widely across institutions. 20 For example, institutions with large 
medical colleges and large biomedical research programs often find it easier to raise 
funds from corporations and other individuals (former hospital patients) than from 
alumni. The USNWR ratings methodology provides an incentive for these institutions to 
devote more resources to alumni fund raising than otherwise might be optimal for them. 
Similarly, many institutions have learned that the marginal cost of raising funds from a 
few major donors is much lower than the marginal cost of raising an equivalent amount 
of money from many small donors. The USNWR rating methodology penalizes them for 
concentrating on large donors and provides an incentive for them to devote more 
resources to fundraising (to attract more small donors) than is otherwise optimal. 

The final category USNWR includes is graduation rate performance and its 
weight is also 5% in the ratings methodology. Graduation rate performance is computed 
20 Ronald G. Ehrenberg and Christopher L. Smith (2002) 






by comparing an institution's actual 6-year graduation rate to its predicted 6-year 
graduation rate; the latter is obtained from a model that specifies that graduation rates are 
a function of student characteristics (such as entering test scores) and institutional 
characteristics (such as expenditures per student). As I have already noted above, an 
institution’s predicted graduation rate may be higher than its actual graduation rate 
because it is doing a poor job educating its students or because it has the misfortune of 
having its better students attracted to more selective institutions as transfer students. 
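
A rough sketch of how a graduation rate performance measure of this kind can be 
constructed: regress institutions' actual 6-year graduation rates on entering-class and 
institutional characteristics, predict each institution's rate, and take the difference 
between the actual and predicted rates. The Python sketch below does this with ordinary 
least squares; the data and the two-variable specification are hypothetical and do not 
reproduce USNWR's actual model.

    import numpy as np

    # Hypothetical data: mean SAT of the entering class and expenditures
    # per student (in thousands of dollars) for five invented institutions.
    X = np.array([
        [1450, 80.0],
        [1300, 45.0],
        [1200, 30.0],
        [1100, 25.0],
        [1350, 55.0],
    ])
    actual = np.array([0.94, 0.82, 0.70, 0.62, 0.85])  # actual 6-year graduation rates

    # Ordinary least squares with an intercept term.
    design = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(design, actual, rcond=None)

    predicted = design @ coef
    performance = actual - predicted  # positive: graduating more students than predicted

    for a, p, perf in zip(actual, predicted, performance):
        print(f"actual={a:.2f}  predicted={p:.2f}  performance={perf:+.3f}")

As the surrounding text notes, a negative "performance" value in such a model cannot 
distinguish an institution that educates its students poorly from one whose strongest 
students transfer to more selective institutions.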

IV. What’s Wrong with the Ratings 

One may reasonably ask, if the USNWR rankings are flawed, why do academic 
institutions participate in them? The answer, quite simply, is that it is in their best interest to 
do so. Institutions that do well in the rankings trumpet their success on their web pages 
and in published materials. Institutions that do not do as well as they had hoped in the 
rankings ignore the rankings and publicize other things that make the institutions look 
good. Indeed, what is included on the institutional web page and what the institutions 
brag about vary from year to year. If an institution’s graduates win several prestigious 
awards, such as Rhodes and Marshall Scholarships in a year, you can bet that this will be 
widely publicized. However, if the institution’s graduates fail to win any of these awards 
the next year, this fact will never be mentioned. Academic institutions always put a good 
spin on things and never mention their shortcomings. 

The real problem with the USNWR rankings does not lie with the categories and the 
subcategory factors that it uses. Each of these provides information that some students 
and their parents feel is very useful in deciding to which colleges to apply. Indeed, many 
institutions actually provide all of the information that they submit to USNWR and other 







college guides directly on their own web sites in the form of their submissions to the 
Common Data Set (CDS). 21 The CDS was developed via a collaborative process that 
involved many publishers of college guides, the academic community, high school 
counselors and the National Center for Education Statistics. The goal was to ease 
institutions’ reporting burdens by asking questions across a wide number of surveys in a 
standard way so that one response would satisfy the needs of all users of the data. 

Rather, the real problem is USNWR ’s arbitrary assignment of weights to each 
category and to each subcategory factor within a category. For a given student, how one 
institution compares to another will depend upon a whole set of factors that are not 
included in the ranking scheme including, but not limited to, the match of a student’s 
interests with the curriculum offered by the institution, the costs of attendance and the 
availability of financial aid, the region of the country from which the student is coming 
and in which the institution is located, the rural/urban nature of the campus, whether the 
student’s parents are alumni of the institution, the religious orientation of the student and 
the institution, the interests of the student in participating in intercollegiate athletics, 
intramural athletics and the whole range of other student activities, the athletic programs 
and other activities that the institution offers and the availability of support services for 
students with special needs. No set of weights, regardless of whether they are determined 
by USNWR or any group of “experts”, will accurately rank which of two schools a given 
student should attend. 



21 For example, Cornell currently has all of its data for the 1999-2000 to 2002-2003 academic years on line 
at http://dpb.cornell.edu/irp/cds.html 







USNWR understands this and repeatedly counsels readers of its publications not to 
choose which schools to apply to based solely upon its rankings. 22 Indeed, its 2004 
ratings issue also talked about eight types of programs that are thought to be associated 
with student success; these include the nature of first-year experiences, the presence of 
learning communities, study-abroad options, opportunities for undergraduate research 
and service learning. USNWR asked presidents, provosts and deans to list 10 institutions 
with outstanding programs in each area and then it listed alphabetically the institutions 
that appeared frequently on these lists. 23 However, as the Monks/Ehrenberg study 
indicated, prospective students don't always take USNWR's advice seriously. The ratings 
do matter to students and their families and therefore they do matter to the institutions. 

To say that the data elements that USNWR collects information on are not the real 
problem with the ratings is not to say that they are necessarily the only data elements, or 
even the best data elements, upon which higher education institutions should be judged. 
Most of them relate to the resources that the institution has available to educate students, 
measures of the academic quality of the entering first-year class, and the academic 
reputation of the institution, which is presumably highly correlated with the quality of the 
entering students and the wealth of the institution. 24 Only one of the data elements, the 
comparison of actual and predicted graduation rates, is at all related to the value added 
that an institution provides its students and this variable only has a weight of 5% in the 
rating formula. Unfortunately, one can always quibble with the methodology used to 
obtain such comparisons and argue that a different methodology might have yielded 

22 See for example, Robert J. Morse and Samuel M. Flanagan (2003) 

23 Morse and Flanagan (2003) 

24 No study that I know of has looked at determinants of academic reputation of undergraduate programs, 
although Ronald G. Ehrenberg and Peter J. Hurst (1998), among others, have done this for graduate 
programs. 







different results. So the use of value-added measures in these types of ratings formulae 
will always be open to question. 

It is not an accident that none of the top 20 national universities in the 2004 USNWR 
ranking was a public institution. Over the last several decades, the restricted financing of 
public higher education has led the publics to increasingly lag behind the privates in 
expenditures per student and in average faculty salaries. The implication of the USNWR 
rankings methodology is that the high-quality publics, such as Berkeley, Michigan, North 
Carolina and Wisconsin, appear to be increasingly less attractive places to study; the 
focus on resource levels, rather than on the nature of the undergraduate curriculum and 
how it is delivered to students, surely overstates the changes that have occurred. 

Similarly, the heavy weight that student selectivity has in the ratings and the quest by 
all institutions to become “more selective” may lead public higher education away from 
one of its most fundamental historic goals, namely to provide access to all qualified 
students. Nowhere in the rankings methodology (save in the comparison of actual and 
predicted graduation rates) is there any mention of the income distribution of an 
institution’s students’ families, the education levels of the institution’s students’ parents, 
nor the fraction of its students for whom English is a second language. Institutions that 
recruit students from underrepresented and disadvantaged populations - students who 
tend to have lower scores on entrance exams - and that do a wonderful job educating 
these students through to graduation should be more highly valued than the USNWR 
methodology currently permits. 







V. Concluding Remarks 

USNWR is not the evil empire. It has repeatedly modified the way it computes its 
rankings of institutions over time in response to requests from an academic advisory 
panel and the more general academic community. 25 While some (including myself) have 
pointed out that the repeated change in its formula invariably leads to changes in the 
rankings of institutions, which provides a larger market for each fall’s new rankings 
issue, I take at face value USNWR’ s efforts to improve the information that it is providing 
its readers. 

The problem with the USNWR rankings lies not in its presentation of the information 
on individual data elements, but in its effort to aggregate these elements into a single 
index. If it stopped doing this, many of the objections that people have about its ratings 
would go away. Of course, so too would the rankings; the annual USNWR college issue 
would begin to look more and more like other college guides. 

The rankings exacerbate, but are not the major cause of, the increased competition in 
American higher education that has taken place over the last few decades. The real shame 
is that this competition has focused institutions on improving the selectivity of their 
entering first-year classes. Institutions appear to be increasingly valued for the test scores 
of the students they attract, not for their value added to their students and to society. 

This problem appears to be particularly acute for our public higher education 
institutions, at which the vast majority of American college students are educated. 
Cutbacks in state appropriations have led tuitions to rise at many of these institutions. At 

25 As far back as 1986, I expressed the concern that the use of average faculty salaries in the faculty 
resource category penalized institutions located in low cost-of-living areas that did not have to offer high 
salaries to attract high quality faculty. USNWR quickly responded to my concern by deflating an 
institution’s average faculty salaries by an area cost-of-living index and using this measure in its ratings 
formula. 







the same time, the institutions are increasingly pouring money into merit scholarships to 
attract high test-score students, leaving fewer funds available for institutional need-based 
financial aid. More and more students from low-income families find that attendance at 
two-year public institutions is the only way that they can begin their higher education 
careers. 

The public 4-year institutions need to remember their responsibilities to provide 
access to a broad range of citizens of their states. They and their private counterparts also 
need to do a better job of facilitating the transfer of students from 2-year institutions and 
of improving the academic success rates of students who do transfer to them. 

USNWR could contribute to helping these things occur by incorporating additional 
data elements into its rankings methodology. Public institutions (at the least) should be 
given “credit” for enrolling (and graduating) students from lower-income and 
disadvantaged backgrounds. Given the large and growing importance of transfer student 
enrollments at most institutions, institutions should also be required to provide information on 
transfer student success that is analogous to the 6-year graduation rate data for freshmen; the 
two success rates, weighted by the proportions of new students that enroll in each 
category, could then be used to judge how well an institution is performing on this dimension. 
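
The blended measure suggested here reduces to a simple enrollment-weighted average, as in 
the minimal Python sketch below. The freshman and transfer success rates are hypothetical; 
the enrollment counts echo the Cornell fall 2002 figures cited earlier (3,622 new 
undergraduates, of whom 558 were transfers).

    # Hypothetical sketch: weight freshman and transfer success rates by each
    # group's share of the institution's new students.
    freshman_grad_rate = 0.90      # assumed 6-year graduation rate for entering freshmen
    transfer_success_rate = 0.78   # assumed analogous success rate for transfer entrants

    new_transfers = 558            # Cornell fall 2002 figures cited in the text
    new_freshmen = 3_622 - new_transfers

    total = new_freshmen + new_transfers
    blended = (freshman_grad_rate * new_freshmen
               + transfer_success_rate * new_transfers) / total
    print(f"blended success rate: {blended:.3f}")

An institution at which transfer students make up a large share of new enrollments would 
see its blended rate move appreciably with transfer-student outcomes, which is precisely 
the incentive the current freshman-only measure fails to provide.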







References 



Amy Argetsinger, "Colleges Lobbying to Move up in the Polls: Schools Politicking 
Each Other to Advance in the Annual Rankings", Washington Post, September 14, 2002, 
p. A1. 

Edward Blum and Roger Clegg, "Percent Plans: Admission of Failure", Chronicle of 
Higher Education 49 (March 21, 2003): B10 

Dominic J. Brewer, Eric R. Eide and Ronald G. Ehrenberg, "Does it Pay to Attend an 
Elite Private College? Cross-Cohort Effects of College Type on Earnings", Journal of 
Human Resources 34 (Winter 1999): 104-123 

Stacy B. Dale and Alan B. Krueger, "Estimating the Payoff to Attending a More 
Selective College: An Application of Selection on Observables and Unobservables", 
Quarterly Journal of Economics 117 (November 2002): 1491-1527 

Ronald G. Ehrenberg, Tuition Rising: Why College Costs So Much (Cambridge MA, 
Harvard University Press, 2000) 

Ronald G. Ehrenberg, "Reaching for the Brass Ring: The U.S. News & World Report 
Rankings and Competition", The Review of Higher Education 26 (Winter 2002): 145-162 

Ronald G. Ehrenberg and Peter J. Hurst, "The 1995 Ratings of Doctoral Programs: A 
Hedonic Model", Economics of Education Review 17 (April 1998): 137-148 

Ronald G. Ehrenberg and Daniel B. Klaff, “Changes in Faculty Composition within 
the State University of New York System: 1985 - 2001”, Cornell Higher Education 
Research Institute Working Paper 38 (August 2003), available at 
www.ilr.cornell.edu/cheri 







Ronald G. Ehrenberg and Christopher L. Smith, “The Sources and Uses of Annual 
Giving at Selective Private Research Universities and Liberal Arts Colleges”, Economics 
of Education Review 22 (June 2003): 223-235 

Ronald G. Ehrenberg and Christopher L. Smith, "Analyzing the Success of Student 
Transitions from 2-Year to 4-Year Public Institutions within a State", Economics of 
Education Review (forthcoming) 

Ronald G. Ehrenberg and Robert S. Smith, Modern Labor Economics: Theory and 
Public Policy, 8th edition (Boston MA: Addison Wesley, 2003) 

Eric R. Eide, Dominic J. Brewer and Ronald G. Ehrenberg, "Does it Pay to Attend an 
Elite Private College: Evidence on the Effects of Undergraduate College Quality on 
Graduate School Attendance", Economics of Education Review 17 (Winter 1998): 371-376 

Robert H. Frank, "Higher Education: The Ultimate Winner-Take-All Market" in 
Maureen Devlin and Joel Meyerson eds. Forum Futures: Exploring the Future of Higher 
Education (San Francisco CA: Jossey-Bass, 2001) 

Caroline M. Hoxby, “The Effects of Geographic Integration and Increasing 
Competition in the Market for College Education” (Harvard Economics Department 
Working Paper, 1998a) 

Caroline Hoxby, “The Return to Attending a More Selective College: 1960 to the 
Present” (Harvard Economics Department Working Paper, 1998b) 

Caroline M. Hoxby and Bridget Terry (Long), "Explaining Rising Income and Wage 
Inequality Among the College Educated", National Bureau of Economic Research 
Working Paper 6873 (Cambridge MA: January 1999) 







James Monks and Ronald G. Ehrenberg, “U.S. News & World Report Rankings: Why 
They Do Matter”, Change 31 (November/December 1999): 43-51 

Robert J. Morse and Samuel M. Flanagan, "Using the Rankings", available on the 
web at http://www.usnews.com/usnews/edu/college/rankings/about/04rank_brief.php 

Michael J. Rizzo and Ronald G. Ehrenberg, “Resident and Nonresident Tuition and 
Enrollment at Flagship State Universities” in Caroline Hoxby ed., College Choices: The 
Economics of Which Colleges, When College and How to Pay for It (Chicago IL: 
University of Chicago Press, forthcoming) 

Michael Robinson and James Monks, “Making SAT Scores Optional in Selective 
College Admissions: A Case Study”, paper presented at the November 2002 meeting of 
the National Bureau of Economic Research higher education working group meeting. 
Available at www.mtholyoke.edu/~mrobins/monks.pdf 







Table 1 

Criteria and Weights Used in USNWR 2003 and 2004* 

Ranking of National Universities and Liberal Arts Colleges as Undergraduate Institutions 



Ranking Category               Category    Subfactor                          Subfactor
                               Weight                                         Weight

Academic Reputation            25%         Academic reputation survey         100%

Student Selectivity            15%         Acceptance rate                    15% (10%)
                                           Yield                              10% (eliminated in 2004)
                                           High school class standing,
                                             top 10%                          35% (40%)
                                           SAT/ACT scores                     40% (50%)

Faculty Resources              20%         Faculty compensation               35%
                                           Percent faculty with top
                                             terminal degree                  15%
                                           Percent full-time faculty          5%
                                           Student/faculty ratio              5%
                                           Class size, 1-19 students          30%
                                           Class size, 50+ students           10%

Graduation and Retention Rate  20%         Average 6-year graduation rate     80%
                                           Average freshman retention rate    20%

Financial Resources            10%         Average educational
                                             expenditures per student         100%

Alumni Giving                  5%          Average alumni giving rate         100%

Graduation Rate Performance    5%          Graduation rate performance        100%


Source: America's Best Colleges, 2003 Edition (Washington, DC: U.S. News & World Report, 2002), pp. 79-81 
and America's Best Colleges, 2004 edition (available at 
www.usnews.com/usnews/edu/college/rankings/about/weight_brief.php) 

* Numbers in parentheses indicate 2004 weights that differ from the 2003 weights. 




