InformED Blog


Three Takeaways on College Ratings from the PIRS Symposium
18 Feb 2014 | by Andrew Gillen

The Department of Education held a technical symposium last week to discuss what kind of data and analysis the federal government should use for President Obama’s accessibility, affordability, and outcomes rating for U.S. colleges (official title: Postsecondary Institution Ratings System).

Three key takeaways from the meeting:

First, the current higher education data infrastructure urgently needs improvement. This message was delivered by just about every presenter, and it is probably the most important message of the day. There was general consensus that student-level data (the student unit record) rather than institution-level data would provide a much stronger foundation for the ratings.

One example: When a student transfers from a community college to a four-year college, it should count as a success for the community college. But because institution-level data cannot distinguish between a student who drops out and one who transfers, that student counts as a failure for the community college. Student-level data from a student unit record could easily distinguish between the two.

Second, there is growing skepticism over the feasibility of using the rating system to accomplish the two original goals of the ratings:

1) providing consumer information, and

2) using it as an accountability tool to reallocate federal financial aid.

The two goals aren’t necessarily contradictory, but there isn’t much overlap between them either. As a result, the better the ratings serve one goal, the less relevant they are for the other. For example, to provide consumer information, you’d want to group institutions by location, since most students only consider schools within a limited geographic area. Yet a geography-based grouping makes no sense for an accountability system. Why hold colleges in New York to a different standard than colleges in Texas?

Third, much debate remains over how to create peer/comparison groups – that is, how to rank schools against similar schools. Several presenters had experience using peer groupings in their own projects. The difficulties they described, and their discomfort with using those groupings for accountability, further convinced me that a regression method (comparing broad groups of colleges after statistically accounting for differences in inputs, such as the percentage of students receiving Pell grants) is better than the peer-group method.

A prototype of the rating system is due to be released this spring, with the full ratings to come sometime during the 2014-15 academic year.
