What the Proposed Federal Data Collection Changes Mean for IPEDS
On August 15, 2025, the US Department of Education published a notice in the Federal Register announcing proposed revisions to the Integrated Postsecondary Education Data System (IPEDS). The proposed changes would expand data collection to include new items on college admissions and applicants, such as standardized test scores, family income, and information on students who apply but are not admitted, and would disaggregate these data in highly complex ways. Stakeholders are invited to comment on these revisions as part of the regular federal review process for information collections.
Higher education experts have raised concerns that some of these changes may be burdensome for institutions, which may find it difficult to collect and report the data accurately, especially when asked to report retrospectively on admissions classes for which they may not have these data. To understand the significance of this debate, it is helpful to revisit the history and continuing purposes of IPEDS.
A brief history of IPEDS
The federal government has been collecting higher education data for more than 150 years, beginning in 1869 when it first reported on enrollment, degrees, and faculty. These efforts became more systematic in 1966 with the Higher Education General Information Survey (HEGIS), and in 1986, IPEDS replaced HEGIS as the central federal collection. Today, roughly 6,000 institutions that participate in federal student aid programs report to IPEDS annually.
Over the years, IPEDS has taken on multiple roles[1]:
- Federal reporting and analysis: The National Center for Education Statistics (NCES) uses IPEDS to continue a 150-year tradition of tracking historical trends in US higher education.
- Federal compliance: Since 1992, institutions receiving federal student aid have been required to report to IPEDS or risk penalties.
- State-level planning and comparison: States rely on IPEDS both to benchmark institutions within their borders and to compare performance across state lines.
- Research and benchmarking: Institutional researchers, state analysts, and national policy experts use IPEDS to study enrollment, student success, costs, and finances.
- Consumer information: IPEDS data power College Navigator and the College Scorecard, public-facing tools that help students and families make informed choices.
Who uses IPEDS data?
The broad utility of IPEDS is one of its defining features. Federal policymakers depend on it to oversee student aid programs and to ensure accountability across thousands of colleges and universities. NCES relies on IPEDS to produce annual reports, providing continuity in how the nation tracks higher education over time.
At the state level, education agencies use IPEDS to compare their institutions with those in other states and to inform funding models, accountability frameworks, and workforce alignment strategies. Institutions themselves draw on IPEDS data to benchmark against peer colleges. Researchers make extensive use of IPEDS to analyze national and state trends in enrollment, completion, and finance. Finally, students and families interact with IPEDS through tools like College Navigator and the College Scorecard, which translate federal data into practical information for choosing a college.
This wide range of uses makes accuracy and consistency especially important. The same dataset underpins federal oversight, state policy debates, institutional planning, academic research, and individual decision making by students.
Why accuracy matters
Because IPEDS is used by federal agencies, state policymakers and analysts, institutional leaders, researchers, students, and families, the quality and comparability of its data are paramount. Small changes in definitions or reporting requirements can ripple outward, affecting consumer tools, federal reports, interstate comparisons, and institutional benchmarks.
Based on my experience as program director of IPEDS from 2006 to 2012, I can attest that implementing significant changes is a complex process that requires great care. During that time, IPEDS added net price reporting in response to the 2008 Higher Education Opportunity Act (HEOA) and introduced new standards for collecting and reporting race and ethnicity data in 2010–11. Both changes demanded significant effort from institutions as well as from NCES and its contractor staff, and required close collaboration between the US Department of Education, reporting colleges and universities, and the many stakeholders who rely on the data. That experience underscores why any proposed revisions to IPEDS must be approached thoughtfully, with an emphasis on accuracy, feasibility, and strong engagement with both data providers and users.
The newly proposed additions will require the same level of care. Without careful planning and collaboration, even well-intentioned changes risk producing data that are inconsistent or difficult to use across institutions and states.
Implementation challenges
Based on my own experience directing IPEDS, I have seen how changes to reporting requirements can create challenges for institutions, especially when they are asked to provide data they do not already collect in a consistent format. The newly proposed admissions items represent a major expansion of IPEDS reporting, comparable in scope to the addition of net price reporting under the HEOA or the introduction of the Outcomes Measures survey. In both of those cases, institutions and NCES had to make substantial adjustments to systems, definitions, and reporting processes. Requiring institutions to capture and report data on applicants who are denied admission would mark a similarly significant shift from current practice.
Importantly, NCES has historically relied on structures such as the National Postsecondary Education Cooperative (NPEC) and Technical Review Panels (TRPs) to gather input from institutions, researchers, and policymakers before submitting revisions for approval. Those processes helped ensure feasibility, accuracy, and buy-in. Today, however, NPEC is no longer funded or staffed, and the future of TRPs remains uncertain, making it harder to secure the same kind of stakeholder engagement. Without robust mechanisms for collaboration, the risk is that the new requirements could result in uneven reporting across institutions and states.
Some experts have gone further, warning that the proposed requirements may be “inaccurate or impossible” for institutions to fulfill, since most do not keep detailed records on rejected applicants. Concerns have also been raised about the potential for data misuse, especially if income and test score information on rejected applicants were misinterpreted or applied outside the intended policy context. Critics argue that these challenges could undermine the credibility of the data while adding a significant administrative burden.
The current debate
The admissions-related proposals highlight the recurring tension between collecting more detailed information and ensuring that the data are feasible to report accurately.
Organizations such as the Institute for Higher Education Policy (IHEP) underscore the potential value of admissions data for promoting opportunity for students who have historically been underrepresented in higher education. They argue that, if collected responsibly, disaggregated information on applicants, both admitted and not admitted, could shed light on where barriers to access persist and inform efforts to expand opportunity. But IHEP cautions that the Department’s current proposal could have the opposite effect. If the data are incomplete or inconsistent, or if the results are used without context, they may misrepresent institutional practices and risk entrenching those barriers rather than alleviating them.
Questions about how much detail IPEDS should capture, and at what level, are part of a broader, long-standing conversation. Policymakers have debated moving IPEDS from institution-level to student-level reporting, an approach prohibited under Section 113 of the HEOA, through proposals such as the College Transparency Act. While such a shift promises more detailed insights into student outcomes, it also introduces concerns about privacy, cost, and reporting complexity.
Moving forward
As with all IPEDS revisions, these proposed changes are open for public comment, in this case through October 14, 2025. This gives institutions, state agencies, researchers, and the public an opportunity to weigh in on feasibility, data quality, and use. Stakeholders should weigh in early, and the Department should recognize that a public comment period alone will not be sufficient to gather the input and foster the collaboration required to implement the new data collection in IPEDS successfully.
As the history of IPEDS shows, the system evolves continually to meet the needs of many different stakeholders. From my time directing the program, I learned that the successful implementation of new requirements depends on engaging closely with the institutions that provide the data and the stakeholders who rely on it. Any changes should therefore be guided not only by principles of accuracy, transparency, and feasibility, but also by active collaboration across the higher education community.
By approaching changes in this way, IPEDS can continue to serve as a trusted foundation for higher education decision making at the federal, state, institutional, and individual levels, even as higher education evolves and the questions we ask of the data grow more complex.
[1] For more information on the history and evolution of the IPEDS data collection, see Elise S. Miller and Jessica M. Shedd, “The History and Evolution of IPEDS,” New Directions for Institutional Research 2019, no. 181 (2019): 47–58, https://doi.org/10.1002/ir.20297.