Geotimes

Feature 
Assessing University Research, the British Way
David Rickard

Sidebars:
Who’s number one?
Australian research excellence
Centering industry-funded research


“The public doesn’t know anything about wasting government money. We are the experts.”
-Sir Humphrey Appleby, the wily senior civil servant in the hit British TV comedy,
Yes Minister

Throughout the world, governments are struggling to find ways to fund public university research in a way that is accountable to their citizens and good for science. The majority of countries, including Germany, Sweden and New Zealand, use student numbers or something similar to determine funding levels. In Austria and France, funding is more or less “open to negotiation,” while in the United States, Canada and the Netherlands, funding and research performance appear to be relatively separated. Still other countries, such as Poland, Australia and the United Kingdom, use performance-based approaches to distribute funding.

In the United Kingdom, a stringent government review process determines which universities get funding. Cardiff University in Wales (shown here) has fared well in the assessment, putting it into an elite group of schools. Courtesy of Cardiff University.


The history of the system in the United Kingdom arguably begins with Margaret Thatcher in 1985, when she became the first Oxford-educated U.K. Prime Minister to be refused an honorary degree by her alma mater. Perhaps coincidentally, a year later, the British government under Prime Minister Thatcher’s leadership announced the first complex and searching review of research in the country’s universities. To British academics, the review, called the Research Assessment Exercise, has become perhaps the most invasive of various government attempts to impose accountability on university life. Like the plague, it has erupted at irregular intervals within the community ever since. It hangs over the average faculty member like the Sword of Damocles, largely determining which individuals and departments are on the chopping block.

Program basics

The Research Assessment Exercise (RAE) aims to assess the quality of research in U.K. universities, rather than just quantity. The RAE covers all subjects, including medicine, science, engineering, the arts and humanities. The system is used by the British government to distribute some $8 billion of public funds for research in a manner that rewards excellence. The outcomes provide public information on the quality of research in universities and colleges throughout the United Kingdom, and are often quoted and combined into ranking tables by the media.

The aim is to ensure the protection and development of the infrastructure of universities and colleges carrying out the best research in the country. First conducted in 1986, the RAE has taken place every four or five years since then, like a research Olympic Games, with the most recent exercise occurring in 2001.

The RAE is run through one of the government university funding agencies, the Higher Education Funding Council for England. It operates by peer review. In RAE 2001, research in the United Kingdom was divided into 68 subject areas, one of which was the earth sciences. An assessment panel is appointed to examine research in each of these areas. The academic community establishes the panels through nomination by universities, industry, learned societies and government institutions. Each university submits a report to the funding council on each of the subject areas in which it has research activities.

Each RAE submission lists, for each department, the “research-active” faculty, a concept introduced by the RAE. It refers to faculty who, in the university’s view, contribute enough research to warrant funding from the RAE. Whether or not the average faculty member is designated research-active affects careers, promotions and contracts. The designation also varies with the aspirations of the department and university.

Each researcher submits the titles of up to four research papers published since the last RAE. The panel reads all these papers and grades the researchers. RAE 2001 assessed more than 2,400 submissions and examined more than 150,000 publications. The grade is determined by the publications as well as by information on research grants, postgraduate research student numbers and postdoctoral researchers, and by evidence of esteem such as awards and prizes.

The earth science panel for RAE 2001, for example, reported that its grading procedure would be primarily based on publications or other forms of research output. Publication quality was determined by originality, contribution to the advancement of knowledge, impact on the discipline, scope of work, methodological strength and scholarly rigor. In addition, the panel took into account the extent of research activity (the number of research students, research assistants and completed research degrees), evidence of esteem by external funders, and evidence of vitality and a strong research culture, as well as the prospects for continuing development and the uniqueness of the department. Unlike some subject panels, the earth science panel did not introduce a formal algorithm for weighing these various parameters.

The Matthew Principle

The Matthew Principle (Matthew 13:12) states: “For whosoever hath, to him shall be given, and he shall have more abundance: but whosoever hath not, from him shall be taken away even that he hath.”

The outcome of all the RAE analyses is a series of grades for each individual subject in each university. In 2001, panels produced grades on a seven-point scale: 1, 2, 3a, 3b, 4, 5 and 5*, with 5* being the highest. On the basis of the results of RAE 2001, the $8 billion in research funding was distributed by first ensuring that the average unit of resource for 5*-rated departments was maintained in real terms, then restoring the level of funding to 5-rated departments and finally allocating what was left over to 4-rated departments. Departments rated 3 or lower received little or no allocation.

The funding received by an individual department is based on its subject grade multiplied by the number of research-active staff who contributed to the grade. This level of funding is fixed for the following four (or in some cases, five, six or seven) years until the next RAE, independent of the subsequent changes in the size, faculty or performance of the department during this period.
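As a rough illustration of how this grade-times-staff multiplier works, the minimal Python sketch below computes a hypothetical allocation. The per-staff funding rates, the grade labels used as dictionary keys and the example staff count are all invented for illustration; they are not the actual figures or method used by the funding council.

# A minimal, purely illustrative sketch of the funding multiplier described above.
# The per-staff rates are invented placeholders, not real funding-council figures:
# only departments graded 4, 5 or 5* attract quality-related funding, and the
# allocation scales with the number of research-active staff submitted to the RAE.

ILLUSTRATIVE_RATE_PER_STAFF = {
    "5*": 100_000,  # hypothetical dollars per research-active staff member per year
    "5": 80_000,
    "4": 50_000,
    "3a": 0,
    "3b": 0,
    "2": 0,
    "1": 0,  # grade 3 and below: little or no allocation
}

def annual_allocation(grade: str, research_active_staff: int) -> int:
    """Annual quality-related funding for one department (illustrative only)."""
    return ILLUSTRATIVE_RATE_PER_STAFF.get(grade, 0) * research_active_staff

# Example: a grade-5 department that submitted 30 research-active staff.
# Under the RAE, this figure then stays fixed until the next exercise, regardless
# of later changes in the department's size, faculty or performance.
print(annual_allocation("5", 30))  # 2,400,000 under these made-up rates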

The grade is used by government, research funding councils and the university itself to determine which university departments might receive further funding. For example, only grade 5 and 5* departments might be eligible to apply for a round of large-scale infrastructure funding initiated by the government or its research councils. So the benefit from the RAE is not only the direct RAE funding but also further opportunities for funding throughout the RAE period. The result is that higher RAE grades become a multiplier and the top-graded departments get even more money, whereas the lower-graded departments get even less.

Dramatic grade changes are rare between RAEs, and the earth science panel has rarely, if ever, changed any grade by more than one point. In RAE 2001, the ratings of nine of the 24 geoscience departments were changed: Seven went up one grade, and two went down one grade. The reasons for this intrinsic inertia are clear. The RAE defines a department’s research quality for the next four- to seven-year period on the basis of its performance during the last assessment period.

A department, for example, graded 4 at one RAE will be both funded at this level and also defined as grade 4 for the period up to the next review. To be rated grade 5 at the next review, the grade-4 department will need to have punched above its weight in the preceding period. This advancement is possible but requires extra funding from the university itself, which in turn depends on a strategic decision by the school to support its earth science department (probably at the expense of other departments).

In this system, universities themselves can be graded by the number and quality of the research units they include. Universities with a particularly high grade level might be invited to apply for extra funding opportunities. Newspapers publish rankings of universities derived from the published RAE grades. These rankings often vary because individual newspapers use the grades in different ways (and have their own prejudices). However, in this way the RAE outcomes feed back to the taxpaying voters.

Focus on the earth sciences

In recent years, British university earth science departments have suffered a double whammy. In addition to the RAE, in 1988 the Thatcher government decided to look into creating larger departments on the basis that critical mass would improve both research and teaching. Earth science departments were used as guinea pigs.

The result of this earth science review showed that the costs of the exercise would be prohibitive, especially for the larger physical sciences, and the scheme was dropped. However, by that time several well-known U.K. geology departments had been closed or merged with nearby departments.

In 1992, the government decided to increase the number of universities in the United Kingdom by the simple expedient of declaring that all the polytechnic institutes could be universities. Some of these new universities saw the earth sciences as a relatively inexpensive way of getting their begging bowls under the golden goose of extra physical science research dollars through the RAE. They have since fared poorly, and few have made the grade (literally, grade 4) at which substantial extra funding would start to flow. Those departments have just as promptly been closed or merged internally. The intrinsic inertia in the system has proven to be the major barrier.

The effect of the RAE program has been to establish three university earth science departments as world-class: Cambridge, Oxford and Bristol. Beneath these three, a group of nine well-known departments jockeys for position as international centers of excellence for earth science research. And then there is a tail of departments that vary in perceived research quality.

Playing the game

Given a game to play with a defined set of rules, the brightest community will rapidly become good at it. The RAE game is no exception, although the rules change for each competition in order to make the challenge more worthy of the players.

Ivory towers: The effect of the performance-based Research Assessment Exercise in the United Kingdom is to preserve the perceived university pecking order, maintaining Cambridge and Oxford (shown here) universities at the top. Copyright Oxford University.


In RAE 2001, some 80 percent of university faculty were rated at grade 4 or above. In other words, the rating scale has become congested at the top end. After RAE 2001, the government commissioned a new report (the Roberts Report) to recommend methods for better discriminating among the top-heavy end that the RAE process itself has created. In other words, the rules have to be changed again because the players are getting too good at it.

There is no question that, during the history of the RAE, the quality of British research in general, and earth science research in particular, has improved. This holds for whatever metric you choose to apply: publication numbers, journal impact factors, international prizes. In RAE 2001, 70 percent of British earth scientists were judged to have produced research at an international or national standard, compared with 40 percent in 1992.

In the same period of time, however, the average rating of U.K. earth science dropped from one of the top five subjects in the country to somewhere in the middle of the table. The most likely reason is that other subjects improved their RAE game plans. It is difficult to ensure equality of standards between subject panels, and it may be that other subject panels decided to be less restrictive in awarding higher grades. Alternatively, the earth sciences may have actually decreased in research standards relative to other subjects.

Selectivity in the disbursement of limited research funds is necessary in our publicly funded research world, but at what cost has this been achieved? The RAE has distorted life for the average British faculty member. There is no longer any hiding place for nonproductive academics. Publishing books is no longer desirable because such activities are generally not rated by the earth science RAE panels. Thus, old-fashioned scholarship has been downgraded.

Faculty members are often polarized into those who are research-active and those who are not and do most of the teaching and administration. The British government has had to take major measures to ensure that university teaching quality does not suffer relative to research by introducing various reviews of teaching quality. But the RAE pervades teaching too.

The Matthew Principle ensures that the top research universities get more funds, and they also tend to be the top teaching universities. If you are a student applying to study earth sciences in college, you will choose to be taught by leading world authorities in the subject, who are publicly identified by the RAE. So the top RAE-rated departments attract not only the best students but also the most students. The result is that departments rated below RAE grade 4 struggle to maintain teaching capabilities. Ultimately, many of these departments close.

A market has also developed in the trade of top-rated academics between departments, rather like professional footballers. The stars are sought with enhanced salaries, working conditions, extensive laboratories and support staff. Of course, this requires cash, so the more successful departments are able to attract the high-profile researchers.

From TV character Sir Humphrey’s viewpoint, the RAE process is good value for the money. RAE 2001 cost $11 million directly (the costs of the central bureaucracy and assessment panels) and $67 million in total (including the cost of faculty time spent filling in the forms). Because the RAE distributes $8 billion, the cost of running it is less than 1 percent of the money disbursed. On the other hand, a normal person might ask whether the $67 million might be better spent directly on research.

Some cynics might conclude that it is a zero-sum game. That is, the amount of money available for disbursement is not changing dramatically; it’s merely being reallocated to fewer, larger centers. In fact, the problem for the earth sciences is to ensure that they can compete for the research dollar. If you have a choice to spend your research dollar on the human genome or the metamorphic petrology of the Adirondacks, which would you choose? The creation of a number of large research centers has enabled earth sciences to compete on a level playing field with big science. For example, the 5* Cambridge earth science department attracted a $40 million investment from BP-Amoco for fundamental research in multiphase fluid flow.

A problem in changing the assessment to reallocate the funds is the public perception of the RAE. If any table of university rankings is produced in the United Kingdom that does not include Oxford and Cambridge at or near the top, the public will not believe it. So the credibility of the process will be damaged and the politicians, ever sensitive to voter opinions, will question the process itself. So in a way, the RAE must be designed to ensure the status quo.

As the United Kingdom moves forward with its unique process, many governments in different countries are examining the RAE system in detail; many political voices are expressing interest in introducing this scheme in their countries. However, to my knowledge, no other country has taken on the U.K. model yet. Why not? Please don’t send me the answer: It’s a rhetorical question ...

Who’s number one?

Cambridge, Oxford and the University of Bristol are currently the highest-ranking earth science programs in the United Kingdom and, among comparable departments, receive the biggest slice of the public funding pie. In the United States, Stanford, MIT and Caltech are perennially among the highest-rated geoscience programs, but rarely come away with the most federal research dollars. The perceived incongruities are in part a result of different ranking systems.

In the United Kingdom, government funding agencies, rather than independent organizations, rank universities (see feature). “There is zero independent data collection from schools in England; all the data comes from the government,” says Bob Morse, the director of data research at U.S. News and World Report. The opposite is true in the United States, where a number of independent systems seek to rate the top schools. And while the British funding groups use their government ratings to distribute funding, American federal funding agencies eschew rankings altogether in favor of a peer-review process. The idea in the United States is to make funding decisions science-driven rather than bureaucratically determined, but not everyone is confident in the separation between rankings and funding.

“If you press the funding agencies they will say, ‘no, we don’t use the data,’ but I’ve talked to people at the National Science Foundation [NSF] and other funding agencies, and they do look at ratings to get a sense of a program,” says Jim Voytuk, a senior program officer for the National Research Council (NRC). “The ratings do have some influence in their funding policies.”

Alex McCormick, a senior scholar at the Carnegie Foundation for the Advancement of Teaching, agrees. “When a foundation like NSF is making a decision, they look at a program and wonder if they have the expertise to do [research], and to do it well. Do they have the infrastructure and what is the probability of success?” This is the type of data found within a ranking system, he says.

Securing public funds for research, however, is not the only reason colleges and universities worry about rank. Many institutions use the ratings produced by the Carnegie Foundation or NRC to attract students and establish benchmarks — a policy not everyone is pleased with.

McCormick, who is involved with the production of the Carnegie Classification of Institutions of Higher Education, regrets the implicit ranking effect within the classification. The Carnegie classification was originally intended for educational research, and McCormick worries that political pressure from universities seeking to change their standing could subvert the purpose of the publication. “You get this perverse situation where something that is supposed to be an objective look at higher education becomes a policy lever during institutions’ drive to change themselves,” he says.

NRC is less concerned about subversion and “welcomes use by institutions who want to know where they stand in relation to other institutions,” Voytuk says.

NRC’s report, Assessing the Quality of Research-Doctorate Programs, is revised every 10 years or so and examines each department within a university or college separately. Data are collected on topics as disparate as gender, percentage of faculty funded, teaching quality and reputation. Because the results are not combined, it is difficult to get an overall feel for an institution. The pages and pages of data from NRC are also not easily approachable.

For a more public-friendly snapshot of higher education, most people read the rankings put forth annually by U.S. News and World Report. The magazine independently collects data on institutions and plugs its findings into a complex and secret formula to calculate rank. U.S. News is theoretically subject to the same pressures as the Carnegie classification and NRC assessment, but must also satisfy a crueler mistress — the newsstand.

“Some people say that U.S. News adjusts their formula so they don’t come up with the same numbers year after year,” Voytuk says. Although there is no hard evidence for (or law against) U.S. News changing their methodology, “there is really very little difference between these clusters of schools,” and it is hard to believe there are significant changes from year to year, Voytuk says.

U.S. News plans to update its geology department ratings in 2006 (last reviewed in 1998). Other rankings of U.S. schools include Top American Research Universities, published by “TheCenter” of the University of Florida and a variety of publications written by Irwin Feller at Pennsylvania State University. With keen interest in rankings among the public and universities alike, the ratings likely will continue to push American universities toward excellence.

Jay Chapman
Geotimes intern



Australian research excellence
The Australian government has taken its own approach to solving the problem of performance-based research funding. Research in Australian universities is funded through two performance-based funding schemes, which use routinely available data to assess university departments. The rating method has many shortcomings, but at least it is in the public realm and known in advance. The system is inexpensive, because most of the data are readily available, and it is unselective, because all academics are considered, not just a selected subset of premier researchers or institutions. It is transparent and simple enough to be run annually, so the results of a single evaluation do not control research funding for years on end.

In addition to these performance-based schemes are the high-profile centers of excellence. In this system, the government sets up a competition for research funds for a specific time period. The objective of the strategy is to develop centers of critical mass that will have a significant international impact in selected areas of perceived advantage or importance to Australia. In fact, the United Kingdom and most other governments run similar schemes, but the Australian spin doctors have been very successful in marketing their concept internationally.

There are two types of centers: the Cooperative Research Centers (CRCs) and the Australian Research Council (ARC) Centers of Excellence. The CRC program was established in 1990 and currently supports 71 CRCs, two of which are earth science centers — the Landscape Environments and Mineral Exploration and the Predictive Mineral Deposit Discovery programs. By their nature, CRCs involve collaborations between universities, government organizations and industry.

ARC centers also involve collaborations between universities and sometimes with industry. In the earth sciences, the council currently funds the Ore Deposit Research program in Tasmania and the Tectonics program in Western Australia.

The government funding for any center lasts five to nine years at most and is reviewed every three years. Less successful centers are closed down, and those resources are then freed up to start new centers. Some successful centers may regenerate by changing focus or adding partners.

The effects of these funding systems on the earth sciences are surprising. First-year enrollments in geology programs at Australian universities have dropped steadily since 1995, with some 1,400 students per year now enrolled in the earth sciences. Along the same trend, the Minerals Council of Australia predicts a 60 percent drop in completed earth science Ph.D.s by 2006 from a high in 2001. Because Ph.D. numbers and completion rates are key components of the algorithm for basic research infrastructure funding by the commonwealth government, the future looks bleak.

Still, the Australian research centers program has been very effective in building critical mass. In the earth sciences, it has led to Australia becoming the world leader in minerals deposit research.

Other areas of earth science research, however, appear to have been less successful in attracting funding through the centers program. This is partly offset by a government block grant to the Research School of Earth Sciences at the Australian National University (ANU) to conduct fundamental research. The grant is not determined by the national research selectivity process. At $10 million per year, funding for research at ANU is equivalent to about five ARC Centers of Excellence.

The problem is exacerbated for the earth sciences by the central government’s strong funding influence through its National Research Priorities (NRPs), which are used to determine which centers will receive funding. Globally, NRPs tend not to prioritize the earth sciences, because the priorities are based on areas perceived to be important to current national commercial interests.

The Australian CRC system claims 97 percent alignment with the national priorities. The Australian 2003 NRP list initially defined just four subjects, which did not seem to hold out much hope for the future of the earth sciences. Fortunately, these subjects were subsequently expanded to include four more NRP areas — one of which, Environmentally Sustainable Australia, is particularly relevant to the earth sciences; it includes water, soil, deep Earth resources and climate change as priority research areas.

The Australian system has been criticized for placing too great an emphasis on quantity rather than quality. The recent government review Higher Education at the Crossroads does not appear to have delivered the expected widespread reforms of the research system. The March 2004 responses of Education Minister Brendan Nelson to a series of specific research-oriented reviews seem to suggest that the status quo will be maintained, at least for the time being.

DR


Centering industry-funded research

Due to recent economic pressures, interest has shifted from research consortia toward industry-funded centers of excellence. This new trend is changing the landscape of university-industry partnerships.

Jim Borer (middle), a researcher with the new ChevronTexaco Center of Research Excellence at the Colorado School of Mines, discusses geologic interpretations with graduate students Brad Sinex (left) and Erik Kling (right) in the Delaware Mountains in Texas. Image courtesy of Jim Borer/Marieke Dechense.


In the United States, centers of excellence are based on programs of the same name in Australia, which concentrate funding on a specific topic to produce directed results (see sidebar, above). The ChevronTexaco Center of Research Excellence (CoRE) at the Colorado School of Mines is one of the most recent centers to open its doors.

Michael Gardner, a professor and principal investigator at CoRE, says the move toward centers of excellence “is a new trend the petroleum geosciences are really championing; other oil companies are out there now looking for universities to partner with.” One of those universities is the University of Texas, according to Charles Kerans, a senior research scientist with the Texas Bureau of Economic Geology. Kerans says he expects more centers to start popping up around the country due to the petroleum industry’s recent cutbacks.

Indeed, economic pressures in the 1990s forced the petroleum industry to prune its research spending, Gardner says. “The research labs were the first to go. Now companies have to look elsewhere for training and research.” By funding centers of excellence, oil companies are able to continue research in conjunction with university efforts. The new centers also create an ideal setting for the petroleum industry to educate its employees. “ChevronTexaco’s Centers of Research Excellence provide educational opportunities for our employees, particularly foreign nationals seeking advanced degrees,” says Frank Harris, a research scientist with ChevronTexaco.

Universities are happy to take on foreign employees of the oil companies, Gardner says. “They get multiple international students paying full, out-of-state tuition in addition to the research collaboration and funding.” Individual scientists are not completely sold on the idea yet, however.

Because university researchers typically conduct research on more than one project, Gardner worries that centers of excellence might be perceived as a conflict of interest. “Other existing funding sources look at this research nervously,” he says. “Companies already funding research through consortia or other means might worry about the exclusivity of these research relationships,” Gardner says. “It is an issue concerning academic freedom and it is not simple, but I am fully engaged and comfortable with it.”

Kerans, a principal investigator with the Reservoir Characterization Research Laboratory, agrees. “It is clearly a concern,” he says, “although I don’t see it stopping anybody — there are ways to work around it.” Kerans also points out some of the benefits of a long-term, exclusive relationship with an industry partner. “With consortia, you are catering to a lot of different interests. With a center of excellence you are dealing with one group with one contact.”

In the meantime, other types of industry-university partnerships are still going strong. For ChevronTexaco at least, “Centers of Research Excellence are intended to complement, not replace, academic research consortia,” Harris says.

Jay Chapman
Geotimes intern




Rickard is a professor of geochemistry in the School of Earth, Ocean and Planetary Sciences at Cardiff University. He was chair of the school through RAE 1992, RAE 1996 and RAE 2001.

