InformED Blog

Taking a Broader Look at Dept. of Ed's Proposed College Ratings
05 Jan 2015
by Tom Weko


The Obama administration took a tangible—if tentative—step toward the President’s planned college ratings system on December 19, releasing a 24-page “Framework” for college ratings.   

The framework is still too preliminary to be usefully evaluated. Instead, I suggest we take a broader view of college ratings, and ask: Are the Department of Education's college ratings likely to become an enduring feature of the nation’s higher education landscape?  If not, then will this have been a fruitless venture? Or might the work it undertakes still have important and lasting implications?

The administration's college ratings are embedded in neither law nor regulation, and they are not rooted in widespread collaboration among the nation's colleges, so they seem unlikely to become an enduring part of the education landscape. But even if the ratings themselves do not last, in building them the Department will have created a data infrastructure and made analytical choices that may well have lasting implications for higher education.

1. College ratings will result in more data about student outcomes. To develop college ratings, the Department is creating new (and improved) measures of student outcomes. Key additions may include student loan repayment rates, rates of graduate school attendance, and earnings of college graduates (among federally aided students). Publishers, advocacy organizations, bond-rating agencies, and Washington policymakers have a keen appetite for data on student outcomes to use for their own purposes, and they will be eager to see these measures continue, even if detached from the Department's ratings.

2. College ratings may set a precedent that changes how the federal government holds colleges accountable. Holding institutions accountable for student outcomes is complex. Outcomes such as graduate earnings can reflect, in part, the quality of education being provided. But earnings also reflect student and institutional characteristics over which colleges may have little direct and immediate control, including their location, their mission and mix of programs, and the characteristics of their incoming students. Historically Black Colleges and Universities in the rural South that focus on teacher preparation have reason to be concerned when their graduates' earnings are compared with those of highly selective, engineering-focused institutions in metropolitan areas, such as MIT. To ensure that institutions are rewarded for student outcomes that result from actual differences in education quality, and not advantages of location, mission, or student characteristics, the Department proposes to compare the student outcomes a statistical model predicts an institution should achieve, given its student and institutional characteristics, with the student outcomes it actually achieves.
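The logic of this adjustment can be illustrated with a toy model. The sketch below is not the Department's actual methodology; all institutions, figures, and the single predictor variable are invented for illustration. It fits a simple least-squares line predicting earnings from one institutional characteristic, then scores each institution by the gap between its actual and predicted earnings:

```python
# Toy illustration of outcome adjustment: compare each institution's
# actual graduate earnings to the earnings a simple statistical model
# predicts from its characteristics. All data here are invented; a
# real model would use many more variables.
from statistics import mean

# (share of Pell Grant recipients, actual median earnings) -- hypothetical
institutions = {
    "College A": (0.70, 38000),
    "College B": (0.15, 72000),
    "College C": (0.50, 52000),
}

# Fit a one-variable least-squares line predicting earnings from the
# share of Pell recipients (a stand-in for student characteristics).
xs = [pell for pell, _ in institutions.values()]
ys = [earn for _, earn in institutions.values()]
x_bar, y_bar = mean(xs), mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

# Adjusted score = actual minus predicted: a positive score means the
# college outperforms what its student mix alone would predict.
adjusted = {}
for name, (pell, actual) in institutions.items():
    predicted = intercept + slope * pell
    adjusted[name] = actual - predicted
    print(f"{name}: predicted {predicted:,.0f}, actual {actual:,.0f}, "
          f"adjusted {adjusted[name]:+,.0f}")
```

In this toy example, a college serving many low-income students can earn a positive adjusted score even when its raw earnings figure is lower than a selective institution's, which is exactly the reranking effect the adjustment is meant to produce.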

Federal higher education accountability policies have long eschewed adjusted outcomes, establishing uniform performance requirements for higher education institutions. For example, much to the dismay of some, requirements for institutions’ student loan default rates take no account of borrower characteristics, such as gender or family income. Likewise, proposed regulations establishing permissible debt-to-earnings ratios for career education programs set a uniform threshold for all “gainful employment” programs. If the Department’s college ratings give credence to demands that all accountability policies take into account student and institutional characteristics, they will have an impact well beyond college ratings.
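For contrast, the uniform-threshold approach described above can be sketched in a few lines. The 8% cutoff and all program figures below are illustrative assumptions, not the regulation's actual values:

```python
# A uniform accountability test of the kind used for "gainful
# employment": one debt-to-earnings cutoff applies to every program,
# with no adjustment for borrower or institutional characteristics.
# The threshold and all program figures here are illustrative.
THRESHOLD = 0.08  # hypothetical annual debt-to-earnings cutoff

programs = [
    # (program, median annual loan payment, median annual earnings)
    ("Medical Assisting", 2100, 24000),
    ("Dental Hygiene", 2800, 56000),
]

results = {}
for name, payment, earnings in programs:
    ratio = payment / earnings
    results[name] = "passes" if ratio <= THRESHOLD else "fails"
    print(f"{name}: {ratio:.1%} -> {results[name]}")
```

Note what the uniform rule ignores: the program whose graduates start from lower earnings fails even if it serves a far more disadvantaged population, which is precisely the tension an adjusted-outcomes precedent would raise.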

3. Online tools will allow users to choose and weigh the data they consider important for judging colleges and institutions. Recognizing that “many consumers may want … to make their own value judgments about what is important to them,” the Department proposes letting users determine for themselves which criteria to apply to institutions, and how to weigh these criteria to come up with their best choice. In the end, these consumer-driven evaluations are likely to prove a more powerful lever for institutional improvements—and a more politically sustainable way of improving performance—than Department of Education-determined ratings.
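The mechanism behind such a tool is a simple weighted score. The sketch below uses invented colleges, metrics, and weights; the point is only that the user, not the Department, supplies the weights:

```python
# Sketch of a consumer-driven comparison tool: the user chooses which
# metrics matter and how much. Metric values (normalized to 0-1) and
# weights are invented for illustration.
colleges = {
    "College A": {"graduation_rate": 0.60, "net_price": 0.80, "earnings": 0.55},
    "College B": {"graduation_rate": 0.90, "net_price": 0.40, "earnings": 0.85},
}

# One user's personal priorities; another user could weight differently
# and get a different ranking from the same underlying data.
weights = {"graduation_rate": 0.5, "net_price": 0.3, "earnings": 0.2}

def score(metrics):
    """Weighted sum of the metrics this user cares about."""
    return sum(weights[k] * metrics[k] for k in weights)

ranked = sorted(colleges, key=lambda c: score(colleges[c]), reverse=True)
print(ranked)
```

Because every user can supply different weights, the same data infrastructure supports many personal "ratings" rather than one official ranking, which is why such tools may prove more politically durable.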

Federal college ratings as currently proposed may not endure. But, to the extent that the work done in producing the ratings does persist—leading to new ways of measuring institutional performance, of adjusting the outcomes of institutions, and customizing consumer information—the administration will leave behind better and more usable information about higher education, which may be a more valuable legacy than any evaluation of higher education institutions it can provide.

Tom Weko is a Managing Researcher at AIR. Prior to joining AIR he served as Associate Commissioner for Postsecondary Education at the National Center for Education Statistics and Director of Policy and Program Studies in the Department of Education, where he assisted in the Department’s work on college ratings.
