The American higher education sector is diverse and creative. In 2014-15, the sector produced over 1 million associate’s degrees, nearly 1.9 million bachelor’s degrees, over 758,000 master’s degrees, and over 178,000 doctoral degrees.[1] The world leader in innovation for decades, the sector continues to produce cutting-edge research and contributes mightily to the American economy. Recent estimates conclude that the United States spends a larger percentage of GDP on higher education than any other country.[2]

But while the sector continues to be vital to our country, over the last decade it has come under increasing scrutiny and criticism. Among the many statistics that capture the challenges facing higher education in the U.S., a few stand out: the one trillion dollars in student debt that students have accumulated, and the fact that, among first-time, full-time students, only 60 percent complete bachelor’s degrees and less than 40 percent complete associate’s degrees at the institution where they started.[3] A third revelatory figure is the nearly $160 billion in federal higher education investment in 2015-16.[4]

The massive public and private investment the country is making in higher education, combined with increasing concerns about the sector’s success in promoting positive outcomes for students, has raised the issue of quality assurance to one of prominence. This has led to an intensifying debate among government officials and policymakers about the best ways to regulate the sector to increase its productivity. Fifteen years ago the question of higher education quality assurance was one only a small number of insiders concerned themselves with, but today it is a major topic of national media and political campaigns.

The purpose of this landscape paper is to organize some of the current debates about higher education quality assurance and to present a possible path forward to enable higher education leaders, policy makers, and the twenty-plus million current students to achieve their common goal of improving the success of the sector.[5]

We elaborate and advocate a “management-based” approach to higher education quality assurance. In a management-based approach, institutions document their own outcome goals and plans for achieving them, subject to ongoing third-party monitoring of progress toward goals and the quality and implementation of plans and processes, as well as achievement of standard, minimum performance thresholds. All evaluation is contextualized and benchmarked against the experience of peer organizations. When implemented effectively, such a management-based approach weeds out the poorest performers, while motivating and facilitating other institutions to reexamine and improve their processes and results continuously.

We draw examples from management-based quality assurance systems in other sectors and countries to illustrate features like the combined assessment of standardized outcomes and program-defined outcomes; monitoring of targeted quality improvement plans; frequent interaction between regulators and providers; and differentiated reviews, consequences, and ratings. Applying a management-based approach to U.S. higher education quality assurance, we identify several high-level design principles to strengthen the current system:

  • Initial approval and a probationary period should focus on provider track record, program coherence and value proposition, student outcome goals and a plan for achieving them, and exit strategy in the event of failure. This is similar to the current system, though a management-based approach would encourage more opportunities, even if on an experimental basis, for different models to be given a chance.
  • A more significant departure from the current system is the principle that there should be standard and program-defined measures for both organizational efficacy and student outcomes. Both should be peer-benchmarked with greater coordination around measuring student learning.
  • Also unlike the current system, we recommend annual review of a small set of student outcome and financial stability measures that are standard for a peer set of programs and appropriately account for conditions of operation.
  • In addition, programs should be assessed every three years on evidence-based, provider-defined goals for planning, implementation, and effectiveness of core educational processes, with a focus on processes identified as areas for improvement in prior years.
  • Results of reviews should be differentiated, not binary, and conclusions and the evidence supporting them should be reported publicly, in an accessible format, by the reviewer.
  • Finally, we recommend an escalating series of supports and consequences based on institutional performance. High-performing institutions should receive designations of excellence or extended periods between reviews. Institutions that fail to meet benchmarks, implement improvement plans, or repeatedly fail to achieve improvement should receive tailored supports for organizational learning, and may be subject to more-frequent or -detailed review, externally imposed goals, loss of funds, or loss of accreditation for some or all programs.

Many of these are subtle variations on the existing system, in some cases consistent with reforms already being piloted by accreditors; some are more significant departures. In general, our view is that the basic infrastructure of our current system of accreditation is consistent with a management-based approach. Mainly what is needed are some changes to the focus, standards, and timing of review, and to consequences and reporting, as well as a more streamlined initial approval process. Notwithstanding their apparent modesty, the changes we suggest have the potential to open the door wider to innovative providers, while doing a better job than the current system of ensuring minimum standards and promoting ongoing improvement in quality.

Importantly, these design principles—and our broader focus on management-based quality assurance—are grounded in a theory of change that views institutional learning as the primary mechanism for sustained improvement. The goal of the process is not merely to ensure that minimum standards are met, or to enforce program designs and practices that fit a particular image of what postsecondary education should look like. Rather, it is to reinforce an institution’s own examination of its practices and their effects on outcomes of social value, in a cycle of continuous improvement.

The paper proceeds as follows. We begin with an overview of the accreditation system and other quality assurance mechanisms for higher education in the United States, including their history, processes, and shortcomings. We review how some recent efforts to improve quality assurance that emphasize performance-based assessment have sought to overcome some of these shortcomings, but, ultimately, poorly accommodate institutional diversity and do little to support improvement. We then introduce management-based regulation and offer a number of examples from the U.S. and elsewhere, in higher education and other fields. Drawing on these examples, we conclude by elaborating on our broad design principles for reforming the higher education quality assurance system in the U.S. to make it more rigorous, consistent, and supportive of innovation and improvement.[6]

Higher Education Accreditation and Its Critics

More than 7,000 institutions of higher education exist in the United States today.[7] The sector is richly diverse, with everything from large public research institutions to small religious colleges to for-profit institutions located fully on the web, and much in between. Recognizing this diversity, government has historically taken a flexible approach to the regulation of higher education. Although states have sometimes been more prescriptive of the methods and processes by which institutions (particularly public higher education institutions) must operate, in general, higher education institutions have been given the flexibility to set their own goals and determine the methods by which they will achieve them.

The Higher Education Act of 1965 (HEA), the law that created the current federal system of higher education finance, does not prescribe how institutions should teach, research, or provide service to the community. However, Title IV of the HEA states that an institution must be “accredited” for its students to be eligible for Pell Grants and student loans under federal programs.[8] Accreditation is a non-governmental system under which regional or national private entities work with individual higher education institutions to review and critique their operations. The higher education institution provides a “self-assessment” that the accreditor then uses as a framework for examining the institution’s successes and challenges.

Accreditation has been around for more than a century. Before World War II, accreditation was a fully non-governmental initiative that provided a process for institutions to assess themselves, and it also served as a basis upon which institutions would allow students to transfer from one to another. Since the 1950s, accreditation has taken on a second, somewhat conflicting, responsibility of assuring the government of institutional quality control. Non-governmental accreditors help the federal government by certifying that institutions are worthy of participation in higher education financial programs.[9] Under the current accreditation system, seven regional and seven national accreditors work with institutions to assess their programs. Numerous “program accreditors” also certify academic programs in specific disciplines and professions. For example, the American Bar Association accredits schools of law.[10]

The initial accreditation process for a new institution is especially detailed—some would argue overly burdensome—and can take between five and ten years to complete. After initial accreditation, for most institutions, re-accreditation happens every ten years, usually with a mid-term check on their operations, providing an opportunity for them to update their goals and methods and to work with a third party to evaluate themselves. With some meaningful exceptions in the areas of finance and safety, where accrediting entities must follow federally prescribed inspection procedures, the regulation of higher education is flexible. Institutions set their own goals and measures that indicate progress toward them. Academic priorities are established internally, and institutions themselves assess whether they are successfully pursuing them. Supporters argue that the flexibility of the accreditation system allows it to be responsive to the diversity of higher education: the numerous different academic programs at different kinds of institutions that serve different kinds of students. A “one-size-fits-all” approach to regulation, they argue, would be destined to fail.[11]

Despite this flexibility, if an accreditor finds that an institution has not met the basic requirements of financial stability and academic rigor, the institution is no longer eligible to participate in federal financial aid programs, though this happens rarely. For almost every institution, loss of accreditation would be a death knell, so institutions work hard to ensure this does not happen. Many have argued that the coupling of accreditation’s gatekeeping function with its quality assurance function obscures the efficacy of the latter, leaving few intermediary or graduated options that could support and incentivize poor-performing actors’ improvement without removing them from the system altogether.[12]

Because of the Title IV requirement that an institution be accredited in order for its students to receive federal financial aid, the federal government plays a large role in the higher education regulatory system. But states also play a meaningful role. Every institution must receive approval from its home state to operate, and an institution cannot receive funding without this approval. Many states have significant requirements (such as proof of financial ability and a sound curricular program) that institutions must meet to be approved, and most provide meaningful financial oversight of approved institutions. However, few states require that approved institutions meet academic benchmarks, and most have not taken action against low-quality institutions.[13]

This system has brought increasing complaints over the years. First, critics argue that the accreditation system has few teeth. Institutions only need to share their plans with the third-party accreditor, and are not required to submit them to the government regulator. Nor are they required to implement their plans, and there are no penalties if, for example, at reaccreditation an institution has not implemented the plans it established a decade before.[14]

Additionally, accreditation decisions are enforced on a binary scale (reaccredited or not reaccredited), and almost every institution is reaccredited, whether it is doing well or poorly. This lack of differentiation makes it difficult for students (and regulators) to determine which institutions provide a quality education and which should be avoided. Critics also argue that, since an institution meeting minimum requirements gets the same access to funds as a high-performing one, this approach gives institutions no incentive to improve.

To enhance the strength of the regulatory system, Congress amended the HEA in 1992 to require the U.S. Department of Education (ED) to approve accreditors.[15] The ED must certify that each accrediting agency has the capacity to assess higher education institutions. The amended HEA also gave more direction to accreditors, requiring them to certify that institutions meet “minimum standards” in ten areas, including student achievement and compliance with Title IV.

Since the adoption of these requirements, Education Department oversight of accreditors has increased—but many still argue that the system has had little substantive impact and that the regulations have pushed accreditors to become “box checkers,” certifying that institutions meet minimum standards in their operations. While infrequent, this process is burdensome and costly, requiring the institution to dedicate thousands of person-hours and hundreds of pages to respond to self-study prompts and document requests. Yet critics contend that the process still does little to help institutions improve processes or student academic outcomes.[16]

Over the past decade, policy makers have increasingly questioned the value of accreditation in the federal financial aid system. A 2006 report by a commission appointed by then-Secretary of Education Margaret Spellings argued that accreditors needed to push institutions to focus more on student academic outcomes and that the system should be more transparent and accountable to public concerns.[17] While she was the Education Secretary, Spellings tried to change department regulations to require accreditors to measure student learning and other outcomes, but higher education institutions convinced Congress to prohibit the department from adding these requirements. More recently, former Education Secretary Arne Duncan called accrediting agencies “the watchdogs that don’t bark.”[18]

Critics also argue that accreditation serves as a barrier to entry for innovative new educational approaches. For example, federal regulations require that student aid be allocated based on the number of “academic credits” being taken by the student. Many argue that this approach, which calculates the total number of credits received using the number of hours the student is expected to spend in class, does not measure actual student learning. New approaches, such as “competency-based” education, do not fit well into existing Title IV requirements focused on Carnegie units because they measure student progress based on achievement of learning outcomes rather than seat time. With some exceptions in experimental initiatives, competency-based approaches have struggled to achieve accreditation and have not been able to compete for students seeking financial support.[19] New providers like coding boot camps, MOOCs and MOOC specializations, and other short-term skills-based credentialing programs also do not meet traditional accreditation requirements and, in most cases, are ineligible for federal financial aid.

Federal regulations impose barriers to innovation, but critics argue that accreditors themselves, which rely on volunteers from established institutions, serve as obstacles to new approaches, and might be more open to innovation if experts from industry or other educational organizations were included in reviews. Senator Marco Rubio (R-FL) has called the system a “cartel.”[20]

Performance-Based Approaches to Reforming Higher Education Quality Assurance

Given the significant increase in federal funding and the even greater increase in family contributions to higher education over the past decade, as well as overall student debt of $1.2 trillion, it is not surprising that there are increasing demands for regulation to focus more on student and financial outcomes. Concerns about outcomes and efficiency often lead regulators to consider performance-based reforms. Performance-based approaches to regulation rely on measurable proxies of the desired outcomes to evaluate the regulated entity. The evaluations may be made public to encourage consumers to “vote with their feet,” or they may be used by the regulator to distribute rewards and consequences.

This section describes some of the performance-based approaches to regulation and quality assurance that institutions and their regulators have adopted over the past decade. It focuses, in particular, on the obstacles the Obama Administration faced in implementing an outcomes-based college ratings and federal financing system as indicative of the larger challenges inherent in using a performance-based quality assurance system for such a heterogeneous sector. At the same time, these efforts have pushed the debate around norms of transparency and coordination in quality assurance, laying the groundwork for potential future reforms that rely on common data and definitions.

Prior to the Obama Administration’s efforts, some institutions began to create voluntary accountability and quality assurance frameworks based largely on outcomes. For example, in 2007, feeling pressure from federal regulators to provide more transparency about costs and outcomes, a group of higher education institutions belonging to the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU) created their own “Voluntary System of Accountability” (VSA) to enable the public to better understand the structure and impact of these institutions. Through the system, which now has approximately 400 members, each institution contributes a “college portrait” that describes college costs, an income-based estimation of available financial aid for applicants, and data regarding student outcomes, campus life and experiences, and other information relevant to applicant decision-making.[21] The college portrait has been moderately useful in providing information to prospective students, but critics argue that the information has not been presented in a user-friendly way, and that the use of data from standardized assessments offered too simplistic a view of learning across institutions.[22]

Similarly, in 2011, the American Association of Community Colleges launched the Voluntary Framework for Accountability (VFA), which defines metrics for community-college-specific outcomes such as term-to-term retention rates, share of students who start in developmental courses, progress towards college level work, and data on transfers. The primary purpose of the data is to help institutions better understand how well they are serving their students, but the system was also designed to provide information to policymakers and the public.[23]

These voluntary efforts have been accompanied by an increasing focus on outcomes in state funding. Historically, states have allocated appropriations to higher education based on student enrollments. But over the past decade, many states have adopted an approach known as “outcomes-based funding” or “performance-based funding” (PBF), which uses formulas that identify particular outcome targets and “rewards” institutions for meeting these targets. PBF models tie a percentage of state appropriations to institutions’ achievement of specific policy objectives, creating an incentive to pursue them. Currently, twenty-six states have PBF policies for their publicly funded institutions, and many other states are considering this approach.[24]

Most PBF policies tend to emphasize outcomes such as graduation rates and retention rates, but states include many different objectives in their regulations. In Tennessee, the law rewards institutions that increase graduation rates overall and among selected groups that tend to graduate at lower rates.[25] In Ohio, institutions that produce STEM graduates receive incentive funding.[26] And in Pennsylvania, the law provides incentives for improvements in many areas, including fundraising and enrollment of first-generation students.[27] In general, the amount of the incentive is small in relation to universities’ budgets, usually only about one to five percent of the total. As a result, to date, the impact of such approaches has been mixed, and studies have found only small changes in outcomes in states with this approach. In addition, some studies have found that PBF policies produce unintended consequences such as greater restrictions on student admissions, which lead to a decrease in the diversity of student populations.[28]

While regional accreditors have not used statistical outcome measures or minimum standards for accreditation, most national and programmatic accreditors set threshold requirements for metrics like completion rates, exam pass rates, and employment rates.[29] Regional accreditors recently announced plans to use minimum graduation rates to direct further review of poor-performing institutions. And some have incorporated assessments of more broadly defined competencies into accreditation—though not without pushback. For example, in a 2013 redesign, the Western Association of Schools and Colleges (WASC) incorporated into its standards a set of core competencies in five skill areas. To be accredited, institutions must “describe how the curriculum addresses each of the five core competencies, explain their learning outcomes in relation to those core competencies, and demonstrate, through evidence of student performance, the extent to which those outcomes are achieved.”[30] Originally, WASC planned to have institutions compare their expectations for degree recipients to the Lumina Foundation’s Degree Qualifications Profile. The redesign effort was pared back after leaders at many institutions argued that the use of a common framework would lead to homogenization and increase the burden of compliance.[31]

At the federal level, the Obama Administration undertook several performance-based reforms in the higher education sector. One, which has become known as the “gainful employment” rule, was introduced in 2012 and imposes tighter regulations on for-profit institutions and requires them to prove that students who pursue credentials in their programs will secure jobs upon graduation that compensate them for their investments. This rule responded to criticisms that too many students at for-profits schools take on unsustainable debt in exchange for degrees and certificates that carry limited value in the job market. Under the gainful employment regulations, programs whose graduates have annual loan payments greater than 12 percent of total earnings and greater than 30 percent of discretionary earnings in two out of any three consecutive years are no longer eligible for federal student aid for a minimum of three years. Gainful employment regulations also require that institutions provide disclosures to current and prospective students about their programs’ performance on key metrics, like earnings of former students, graduation rates, and debt accumulation of student borrowers.[32]
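To make the two-part debt-to-earnings test concrete, it can be sketched as a short calculation. This is an illustrative simplification, not the regulatory definition: the function names and dollar figures are hypothetical, and only the 12 percent, 30 percent, and two-out-of-three-consecutive-years conditions described above are modeled.

```python
def fails_debt_to_earnings(annual_loan_payment, total_earnings, discretionary_earnings):
    """A program fails for a given year if graduates' annual loan payments
    exceed BOTH 12 percent of total earnings and 30 percent of
    discretionary earnings (simplified from the rule described above)."""
    return (annual_loan_payment > 0.12 * total_earnings
            and annual_loan_payment > 0.30 * discretionary_earnings)

def loses_title_iv_eligibility(yearly_failures):
    """Eligibility is lost if the program fails in two out of any three
    consecutive years; yearly_failures is a chronological list of booleans."""
    return any(sum(yearly_failures[i:i + 3]) >= 2
               for i in range(len(yearly_failures) - 2))

# Hypothetical program: $3,000 in annual payments against $22,000 total
# and $8,000 discretionary earnings exceeds both thresholds
# (12% of $22,000 = $2,640; 30% of $8,000 = $2,400).
year_results = [fails_debt_to_earnings(3000, 22000, 8000),   # fails
                fails_debt_to_earnings(2000, 22000, 8000),   # passes
                fails_debt_to_earnings(3100, 22000, 8000)]   # fails
print(loses_title_iv_eligibility(year_results))
```

The actual regulation includes additional nuances, such as a warning “zone” between passing and failing rates, that this sketch omits.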

In January 2017, the Department of Education released data showing that more than 800 programs had failed to meet accountability standards for the gainful employment rule. Ninety-eight percent of these programs were offered by for-profit institutions.[33] Because of the restrictions these regulations could place on for-profit institutions, the gainful employment regulations have been the subject of withering dispute between the Education Department and the for-profit sector. The original rules were declared invalid by a federal district court, but so far, the revised rules have withstood legal attack.[34] Their fate under the Trump Administration is far from clear.

The Obama Administration also increased its enforcement activities against institutions in financial distress. Perhaps the most prominent example is the Education Department’s removal of financial aid eligibility from the Corinthian Colleges system. The college, plagued by financial instability resulting from declining enrollments and legal battles over false advertising and unlawful debt collection practices, was forced to close after the Department placed it on “hold” status (preventing it from accessing the federal financial aid system).[35] Strengthened standards for assessing institutional viability led to the closure of several small institutions and greater oversight of other institutions in financial peril, as well. And, in 2016, the Department terminated its recognition of the Accrediting Council for Independent Colleges and Schools, which accredited 245 institutions, most of which were for-profit (including several Corinthian Colleges locations). The decision, prompted by pervasive “compliance problems,” marks the first time that the federal government has officially denied recognition to an established accreditor.[36]

Finally, the Obama Administration tried to promote transparency in the sector by providing more information to students and their families. In August 2013, President Obama announced that he would create a “college rating system” to assess the nation’s higher education institutions on their cost of attendance, student graduation rates, and graduates’ post-college earnings. The plan was to use these ratings to inform the allocation of federal funding, including financial aid, to institutions.[37]

The tortured development of this effort—and its ultimate outcome—is indicative of the challenges involved in a performance-based regulatory approach for the higher education sector. When President Obama first announced his rating proposal, he called for reforms to federal higher education financing that would reward colleges that offer low tuition, provide “value” (defined as programs that had high graduation rates), enable graduates to obtain good-paying jobs, and give access to low-income students. “What we want to do is rate them on who’s offering the best value so students and taxpayers get a bigger bang for their buck,” Obama explained. “Colleges that keep their tuition down and are providing high-quality education are the ones that are going to see their taxpayer funding go up. It’s time to stop subsidizing schools that are not producing good results.”[38]

Although their responses to the former president’s proposal differed in tone and substance, higher education institutions around the country were vocal critics of the proposal, and their assessments were representative of more general critiques of performance-based regulation. Higher education leaders argued that such a rating system would be impossible to create because higher education is too diverse and has too many goals; the “value” of education was difficult to meaningfully quantify.[39] In one characteristic argument, David Warren, director of the National Association of Independent Colleges and Universities, asserted that “private, independent college leaders do not believe it is possible to create a single metric that can successfully compare the broad array of American higher education institutions without creating serious unintended consequences.” Any rating system, Warren argued, would reflect policymakers’ choices more than those of individual students.[40]

Although the Obama Administration claimed that the proposal would distinguish among different types of schools, higher education leaders and their lobbyists expressed concerns that such a proposal would further exacerbate the divide between the elite schools—where students from mostly wealthy backgrounds graduate at high rates and secure well-paying employment—and the many institutions that provide open access and have lower graduation and employment outcomes. “It’s not fair or reasonable, really, to rate institutions on their performance without consideration of the nature of their student body,” argued Peter McPherson, president of the Association of Public and Land-grant Universities.[41] Furthermore, according to critics, an exclusive focus on limited metrics, such as earnings data, could result in colleges neglecting programs in low-paying occupations such as teaching and nursing.[42]

Higher education leaders also questioned the ability of the government to gather and manage accurate data on these complicated factors. “Several of the data points that the Department is likely to include in a rating system, such as retention and graduation rates, default rates and earning data—are flawed,” argued Molly Corbett Broad, president of the American Council on Education. “The Department of Education’s retention and graduate rates, for example, count as a dropout any student who transfers from one institution to another, regardless of whether they complete their education at another institution,” she continued.[43] Furthermore, at the time, federal graduation rate calculations only included first-time, full-time students, leaving out most students who attend community colleges and for-profit schools.

In the summer of 2015, after more than two years of discussions with higher education institutions, educational advocates, and congressional leaders, the administration pivoted away from the idea of creating a rating system. Instead, the Department of Education released the “College Scorecard,” an online system providing a considerable amount of institution-level data on students’ academic, employment, and financial outcomes at the nation’s colleges and universities. “The College Scorecard aligns incentives for institutions with the goals of their students and community,” a White House statement reads. “Although college rankings have traditionally rewarded schools for rejecting students and amassing wealth instead of giving every student a fair chance to succeed in college, more are incorporating information on whether students graduate, find good-paying jobs, and repay their loans.”[44] A policy briefing that was published with the Scorecard notes both the challenges of comparing institutional performance across the sector, as well as the importance of baseline expectations and shared values.[45]

Performance-based regulation reinforces—intentionally—competition among higher education institutions, exacerbating incentives for institutions to keep effective practices to themselves.

The college rating saga revealed both the growing push for outcome-related transparency and accountability, as well as the challenges inherent in performance-based regulation. Even if there is a general consensus on aspirations like accessibility, affordability, and quality, defining those goals concretely, measuring them meaningfully, and then applying them uniformly to the highly heterogeneous world of higher education creates its own kinds of problems—both technical and political. Furthermore, absent oversight of process, institutions may seek to game a narrow set of high-stakes outcome metrics, while failing to advance the underlying goals for which they are a measurable proxy, or undermining other goals not captured by the metrics (like broad and equitable access to higher education). Finally, performance-based regulation reinforces—intentionally—competition among higher education institutions, exacerbating incentives for institutions to keep effective practices to themselves.

Management-based approaches to quality assurance

Across a number of domains, management-based regulation has emerged as a viable alternative to both input-focused, bureaucratic approaches and outcome-focused, performance-based approaches. In a management-based system, the regulator establishes some threshold common metrics as well as a requirement that the regulated entities develop their own outcome targets and plans for achieving them. Oversight is focused on how the regulated entities have carried out their plans, progressed toward the self-determined targets, and achieved the common metrics. Failure to meet the minimum threshold of the common metrics or to make progress on self-determined targets triggers a reexamination and refinement of the regulated entity’s plans. Repeated failure to meet the targets, follow-through, or revise the plan can have more-significant consequences.[46]

Management-based regulation is grounded in the idea that regulation should reinforce organizational learning by the regulated entity. Regulatory intervention at the planning stage—coupled with monitoring, benchmarking against peer organizations, and enforcement of minimum standards—helps organizations build capacity for self-regulation and improvement towards rigorous, context-appropriate outcomes. Both regulators and institutions gain a better sense of how contextualized actions are related to outcomes, as well as the feasibility of achieving outcomes in different contexts. This learning process equips the institutions to sustain continuous improvement, and also enables the regulator to discover and set new minimum thresholds based on best-in-class outcomes.

This approach resembles the current system of higher education accreditation, but there are some key differences, and several ways in which features of a management-based system could make the current quality assurance system for higher education more rigorous, transparent, and open to innovation and improvement. First, common metrics for similar institutions, focused on characteristics such as financial viability and student outcomes, would improve comparability among peers and open pathways for innovative providers whose processes or infrastructure do not meet input-based standards. Falling short of these metrics would trigger additional review and support by regulators (federal or regional), while incentivizing institutions to build their own evaluative capacity.

In the context of higher education quality assurance, more-frequent monitoring has the potential to improve accountability, offer more opportunities for institutional improvement, and reduce the resource burden and tendency for “box-checking” that is criticized in the current process.

The management-based approach we propose also requires closer, more frequent interaction between the regulator and the regulated entity to monitor planning, revision, and progress. In the context of higher education quality assurance, more-frequent monitoring has the potential to improve accountability, offer more opportunities for institutional improvement, and reduce the resource burden and tendency for “box-checking” that is criticized in the current process. Finally, the management-based approach contemplates differentiated supports and consequences for institutions that struggle to implement plans or achieve goals.[47] This provides institutions with information and incentives for improvement that are more useful than those provided by bimodal accreditation decisions. If reports and results are made public, the approach would also provide consumers with more meaningful information about institutions than the current system does.

This section reviews a variety of quality assurance schemes in United States higher education, international higher education, and across other sectors, each of which uses elements of a management-based approach. For each system, we provide a high-level description, and then explain the key mechanisms of the system that incentivize institutional learning and continuous improvement, leading to improved quality at the organizational and system-wide level. Though few empirical evaluations of the efficacy of these schemes exist, we draw from these examples and the literature about them to distill several key design principles for an improved system for U.S. postsecondary education.

Management-based efforts to reform higher education quality assurance in the United States

A number of professional and disciplinary accrediting bodies in the United States have adopted elements of this management-based approach, and some have evaluated the effectiveness of these approaches in supporting improvement. For example, to ensure baseline quality and comparability while allowing for programmatic diversity, the Accreditation Board for Engineering and Technology (ABET) bases its assessment and accreditation of engineering programs on student performance on broad, industry-aligned learning outcomes, as well as program-defined objectives and plans for continuous improvement. Baseline and program-defined standards and plans are assessed through a self-study and site visit by volunteer peers, industry experts, and government experts, and reviews can result in a number of actions, including approval for accreditation or reaccreditation, requests for further reporting or additional visits to show progress on identified weaknesses, or non-accreditation. Programs that are approved for accreditation are reviewed on a six-year cycle; those that demonstrate weaknesses or deficiencies are reviewed via additional reporting, site visits, or both on a more frequent basis.[48]

There is evidence that ABET’s focus on baseline outcomes standards and program-defined standards has been effective at helping programs improve. A 2006 impact evaluation of these standards found that the hybrid approach, which replaced a more input-based, prescriptive approach in 1997, incentivized programs to focus curricula more on professional skill development while encouraging planning for continuous improvement among varied stakeholders. In addition, students report higher levels of engagement, perform better on learning outcomes assessments, and retain more technical skills than they did under the input-based approach.[49]

The Department of Education is also experimenting with tailored, management-based forms of quality assurance, with the goals of accommodating innovative providers while ensuring accountability among high-risk entrants. Launched in 2015, the Department’s Educational Quality through Innovative Partnerships (EQUIP) experimental sites initiative extends federal financial aid eligibility to programs run in partnership between traditional educational providers (Title IV institutions) and non-traditional educational providers (such as coding boot camps and MOOC providers). The initiative aims to provide innovative options to low-income students, while testing ways to assure rigor and quality for new postsecondary models.[50]

To achieve these ends, each pair of partners works with an independent Quality Assurance Entity (QAE) to develop program-specific outcomes, plans for meeting those outcomes, and methods of assessment. For example, Entangled Ventures, the QAE for UT Austin and the coding boot camp MakerSquare, audits the program’s performance based on learning outcome assessments, employment outcomes, and student satisfaction. The Council for Higher Education Accreditation, which is the QAE for the Dallas Community College District and StraighterLine partnership, will evaluate outcomes related to learning, transfers, and comparability across programs in order to assess the program’s effectiveness at providing students with low-cost, high-quality educational experiences that can be recognized for credit at Title IV institutions. Assessment processes differ by partnership, but typically include a self-review, an assessment of processes, documents, and student work by an external team of experts, and recommendations for both accreditation and improvement.[51]

Finally, regional accreditors have made some recent changes that indicate movement toward management-based regulation. In September 2016, the Council of Regional Accrediting Commissions (C-RAC) announced that it would expand its review of four-year institutions with graduation rates at or below 25 percent and two-year institutions with rates at or below 15 percent. Accreditors would supplement this review with data on transfer rates, and then follow up with institutions identified as high-risk to get more information on the conditions that led to low graduation rates and institutional plans for improvement.[52] Combined with regional accreditors’ shift over the past two decades toward a focus on defining and assessing learning outcomes, these changes lay the foundation for management-based regulation.

Though varied, each of these systems represents a hybrid approach that combines an evaluation against common metrics with a review of plans, processes, and, in some cases, provider-determined outcomes and self-assessments. Moreover, the EQUIP quality assurance system and the new C-RAC policies provide increased opportunities for review and interaction for new entrants and bad actors. These features have the potential to ensure a higher level of quality among higher-risk providers, while providing supported and regular opportunities for organizational learning and improvement.

International approaches to quality assurance in higher education

Proposals for accreditation and quality assurance reform in the U.S. often look to international systems for reform cues related to differentiated review, the timing of review cycles, reviewer group composition, review focus, and consequences and reporting.[53] This section briefly reviews international academic audits (including recent reforms to systems that use them) and internal quality assurance systems to provide context for our proposed changes to the U.S. system. These case studies illuminate the benefits of as well as the challenges in creating quality assurance approaches that balance improvement with accountability and efficiency. New developments provide some cues for how a U.S. quality assurance system might better negotiate these tensions.

Several jurisdictions, including Hong Kong, Sweden, the Netherlands, and, until recently, Australia and the UK, use academic quality audits as part of their quality assurance systems. A form of internal quality assurance that gained popularity through the 1990s and early 2000s, academic audits typically include a self-study and a site visit (by peers or trained reviewers) to evaluate “education quality processes,” and to determine how faculty members organize their work and use data to make decisions and achieve academic goals. This is not unlike the process in the U.S., though the focus remains more strictly on academics.[54]

Academic quality audits often exist as one part of a larger quality assurance and accreditation system in the countries in which they operate. For example, distinct from the program approval process, the quality assurance process in New Zealand is overseen by the Academic Quality Agency for New Zealand Universities (AQA), which focuses audit cycles on particular components of academic quality and the internal processes for monitoring and improving upon them. After a self-study and peer review, the AQA publishes a report with commendations and recommendations for improvement, and institutions are held accountable for implementing improvement plans through progress reports and follow-up reviews.[55] Tertiary programs in the non-university sector, such as technical institutes and private training establishments, are approved for inclusion in the New Zealand Qualifications Framework by a separate body, the New Zealand Qualifications Authority. In a less process-based and improvement-oriented approach, new entrants are assessed by external reviewers on organizational features and criteria related to nationally-aligned learning outcomes, assessment methods, and resources. Once approved, providers are reviewed periodically based on educational performance and capacity for self-assessment; results of reviews are differentiated and range from closer monitoring to sanctions or other legal actions.[56]

Evaluations of academic audits like those used in New Zealand’s university sector have shown that components of audits, including self-assessments, discussions with regulators, and external recommendations, have increased capacity for self-regulation, strengthened internal quality assurance processes, and changed stakeholder behavior.[57] However, process-based academic audits have also drawn criticism for paying too little attention to standards and outcomes, for being inefficient, or for being too easily “gamed” by institutions. For example, a 2008 government review of the Australian Universities Quality Agency, which oversaw quality audits at Australian tertiary education providers in the 1990s and early 2000s, found that academic audits placed too much emphasis on processes, made comparisons across institutions difficult, and were insufficiently rigorous.[58]

Responding to these concerns about accountability, efficiency, and comparability, Australia and the UK have supplanted academic audits with more standards-based and risk-based systems, though both approaches are still under development. In Australia’s risk-based system, implemented in 2011 and overseen by the Tertiary Education Quality and Standards Agency (TEQSA), peer reviewers and external experts assess new entrants on a robust set of threshold quality standards, but tailor the scope and depth of assessment for better-established providers based on their history of providing higher education, their track record of quality, their financial standing, and their performance on a number of risk indicators. The process places a heavy emphasis on outcomes and documentation, and rarely involves a site visit or self-assessment. Institutions that are deemed low-risk during this review are reaccredited and can go as long as seven years before another review; institutions that are deemed higher-risk undergo more frequent, extended reviews based on TEQSA quality standards and are subject to a graduated scale of consequences.[59]

Though reforms in the UK are not as well developed, current plans—as written—balance a differentiated approach with additional “meta”-monitoring of internal quality assurance processes. Under the planned new system, new entrants will be visited and reviewed by a contracted quality assurance agency against baseline regulatory requirements related to how they meet standards set by a national qualifications framework, their financial stability, management, governance, student protection measures, mission, and strategy. This initial review will also identify areas for development to be targeted during a four-year probationary period of extended review and support. Better established providers will undergo a one-time formative review of their internal processes for monitoring and improving student outcomes, as well as annual, differentiated reviews of operations and student outcomes, and five-year reviews of financial viability and internal regulation processes.[60]

While it is too early to assess the efficacy of these new systems in assuring quality and incentivizing institutional learning, the U.K.’s planned approach—which differentiates review and emphasizes the formation of internal quality assurance processes—resembles, in some ways, systems in other European and Asian countries that permit self-accreditation for lower-risk institutions. Under these systems, institutions that demonstrate robust internal quality assurance systems are held to different, typically less burdensome, accreditation requirements, and can self-accredit their own processes or programs. For example, in 2009 the German Accreditation Council made it possible for German institutions to accredit their own academic programs (rather than rely on external accreditation). In order to qualify for “system accreditation,” institutions undergo a self-assessment and peer review, and must meet a number of criteria related to documentation, data, reporting, and assessment.[61]

A number of factors, including the burdensome nature of program accreditation, the growing prevalence of institutional evidence-based decision making, and political pressure for quality enhancement led the German Accreditation Council to push for internal quality assurance options. A study of the internal quality assurance system at Duisburg-Essen, a German university, found that faculty were well supported in their quality work, that the system’s development was strongly incentivized by external quality review mechanisms, and that the internal quality assurance system was embedded in other management processes as part of a larger quality enhancement effort.[62]

Though international developments, many of which are too new to assess, hardly point a clear way forward for a U.S. system, they do illustrate efforts to create systems that incentivize institutional improvement, organizational learning, and self-regulation while improving sector-wide accountability and rigor.

Though international developments, many of which are too new to assess, hardly point a clear way forward for a U.S. system, they do illustrate efforts to create systems that incentivize institutional improvement, organizational learning, and self-regulation while improving sector-wide accountability and rigor. In many cases, international approaches have aimed to achieve these through differentiated systems that use threshold standards and intensive reviews for higher risk-providers (new entrants, poorer performers), and focus more on systems of self-regulation for better established or better performing institutions. Examples and assessments show that approaches that give institutions more autonomy in creating processes and monitoring systems can incentivize institutional learning, but also point to the need for common outcomes standards to assure comparability and accountability.

Management-based quality assurance in other sectors

Quality Improvement in Health Care

Management-based approaches to quality assurance are also observable in systems outside of higher education and, in general, emphasize outcome and process standards, building capacity for self-regulation, and differentiated reviews, consequences, and ratings. For example, continuous quality improvement methodologies, which are used in many sectors but have become particularly popular in health care, emphasize an organization’s responsibility in understanding how its own resources, delivery systems, and processes can be addressed together to improve outcomes (in this case, quality of care) and reduce adverse effects.[63] These methods emphasize data-driven decision making, collective responsibility, and a user-centered focus. For example, using the Plan-Do-Study-Act methodology of continuous quality improvement, which originated at Bell Labs but has been adapted to the health care sector by the Institute for Healthcare Improvement, organizations engage in an iterative cycle of setting goals for outcome improvement, developing hypotheses for actions that may meet those goals, implementing those actions, studying the results, and making a policy decision or refining the hypothesis and testing again. Plan-Do-Study-Act methods usually focus on small-scale changes and aim to promote organizational learning, but are often embedded in larger organizational improvement initiatives.[64]

The Six Sigma methodology originated in the 1980s at Motorola, and uses similar, evidence-driven methods in which organizations define a problem and goals for improvement, create a data collection and measurement plan, analyze results and their deviations from expected outcomes, improve upon the plan, and develop quality controls for ongoing processes. The Six Sigma methodology focuses primarily on areas in which there are common errors and high levels of inconsistency, and uses evidence about these processes and their outcomes to create more predictability. The methodology has been used to achieve outcomes such as increased capacity in X-ray rooms, reduced bottlenecks in emergency departments, reduced length of stay, and reduced post-operative infections. While evaluations have found that context has a significant bearing on effectiveness, many implementations of PDSA, Six Sigma, and similar methodologies show that these methods can reduce errors, improve safety and quality of care, and reduce costs in health care.[65]

Continuous quality improvement methodologies have been supported and developed by organizations and agencies like the Institute for Healthcare Improvement, the Health Resources and Services Administration, and the Centers for Medicare and Medicaid Services.[66] These approaches are also built into accreditation standards for healthcare providers, and are combined with assessments on rigorous, standardized outcome measures of quality of care in accreditation decisions. For example, the National Committee for Quality Assurance’s Health Plan accreditation process assesses how program structure and program operations support a quality improvement plan, and focuses heavily on management processes, evaluative capacity, and evidence-based decision making in its review. Quality improvement plans and other processes are assessed through on-site and off-site evaluations. Based on these plans, performance on other NCQA requirements, and performance on national measures of quality care and consumer satisfaction, organizations are assigned one of six accreditation levels that range from excellent to denied. Status is made publicly available on the NCQA Health Plan Report Card, and those awarded a lower status must undergo review within eighteen months to demonstrate improvement.[67]

Similarly, since the 1990s, Joint Commission accreditation has required that hospitals have a well-defined quality improvement plan that combines required and hospital-selected standards, is supported by a locally developed data collection process, and incorporates data into organizational decision making.[68] Hospitals are assessed through unannounced on-site visits that include observations, interviews, document review, and care-tracing methods, and can receive a variety of levels of accreditation, which are reported publicly.[69] The Public Health Accreditation Board, an independent public health department accreditor supported by the Centers for Disease Control and Prevention (CDC), uses a similar process of review.[70] The NCQA, the Joint Commission, and the CDC all provide organizations with resources to support quality improvement and to meet accreditation standards, and, arguably, the graded and combined assessment of national standards and management processes promotes comparability, accountability, and self-regulated improvement.

The evidence of effectiveness of quality improvement plans and methods is mixed and difficult to measure, but trends positive. A number of studies have found that continuous quality improvement efforts to reduce length of stay and patient charges for various procedures have been effective, and reviews of quality improvement evaluations find evidence of effectiveness for improving care for type 2 diabetes, asthma, and hypertension; preventing health-care associated infections; and reducing inappropriate treatment with antibiotics.[71] Other reviews have found that interventions that use multiple strategies for promoting change and quality improvement, and that focus on behavioral and organizational change (e.g., educational efforts, changes in team roles, changes in work flow) are most effective.[72]

However, many reviews of the literature have concluded that evaluations of quality improvement strategies are hard to construct, and that results are difficult to generalize because they apply to varied populations, interventions, outcomes, and contexts.[73] Few studies use randomized controlled trials or quasi-experimental designs, and some reviews have found that evaluations systematically use poorly validated measurement instruments. Other reviews have emphasized contextual factors such as leadership, organizational culture, data infrastructure, and duration of involvement as important in determining the effectiveness of quality improvement strategies and initiatives. Authors of these studies argue that improved evaluations would focus on how and under what circumstances effective quality improvement methods were applied, rather than focusing solely on outcomes, empirical generalizability, or the relative efficacy of a single methodology.[74] These concerns regarding implementation and context, as well as organizational capacity for implementing improvement plans, are often built into accreditation assessments (in healthcare and other fields), and are crucial components of effective management-based regulation.

International School Inspections

Australia, New Zealand, and several European countries use elements of a management-based approach in K-12 school inspection to promote organizational learning, continuous improvement, and accountability. These systems pay less attention to test scores and more attention to processes and practices than the K-12 accountability system in the U.S., and most use low-stakes models that encourage self-regulation.[75] A comparative review of these systems has found that inspectorate systems that evaluate both processes and outcomes, use risk-based approaches, and publicly report findings have the greatest impact on school improvement because they promote self-regulation, set expectations, allow for peer-benchmarking, and provide higher-risk institutions with information and support for improvement.[76]

In New Zealand’s inspectorate system, the Education Review Office (ERO), which is independent of the Ministry of Education, uses broad outcome and process indicators to evaluate schools’ strategic planning processes, encourage evidence-based decision making, and build organizational evaluative capacity. Outcome indicators measure student performance on academic learning outcomes and evidence of culturally responsive schooling. Though no standardized test is used to measure student learning outcomes, the ERO equips schools with assessment tools that provide national benchmarks for performance on these indicators. This incentivizes schools to build their own capacity to evaluate quantitative outcomes against a baseline and to identify contextual factors that might explain irregularities.

Process indicators focus on “organizational influences on student outcomes,” and evaluate schools on six domains related to leadership, curriculum, teaching, governance, assessment, and evaluation.[77] Outcome and process indicators are used in both an internal self-study and an external review by professional evaluators, who produce a report on school performance. In a relatively low-stakes model, reviews and reports are used primarily to help schools identify areas for improvement, and there are rarely punitive consequences for schools that perform poorly (though reports are made publicly available).

The ERO adopted this approach in 2003, replacing a process that focused more on defined procedures and led to mechanistic reviews that did not necessarily foster better learning outcomes. Surveys of principals and teachers have found that the new system has effected culture change at schools and prompted a greater focus on professional development and action plans for improved student performance.[78] The combined focus on broad process and outcome indicators, as well as the provision of tools for national benchmarking, might be joined with more rigorous consequences and monitoring for an improved system for higher education in the U.S.

The Netherlands’ school inspection system also aims to promote self-regulation and quality improvement, while ensuring accountability and comparability. The Dutch Inspectorate inspects schools using risk-based differentiation, so that those schools that exhibit lower indications of quality in a preliminary document review receive more intensive inspections, including site visits. Reviews focus on processes and conditions within the school, including the use of student evaluations and improvement plans, responsiveness to student needs, the adequacy of counseling and mentoring, and teaching practices. Inspectors distribute evaluation reports to the schools, the school boards, and the education minister, and also make reports publicly available online. Though the Inspectorate cannot shut down poor-performing schools, an institution found to provide an education of insufficient quality is subject to a quality improvement inspection within two years, during which it must implement quality improvement recommendations devised by the school board and inspectorate.[79] There are no minimum standards, and risk is defined through contextually poor performance as indicated through student results, accountability documents, and other reports by students, parents, or in the media.

Finally, some Austrian provinces have used approaches to inspection that give substantial discretion to school leaders in developing plans for improvement. For example, in the Austrian province of Styria, inspectors gather evidence from a site visit and document review to produce a report that is shared with school staff. School management works with inspectors to draw conclusions and to formulate objectives and measures for further development in a School Development Plan. After one to three years, inspectors return to the school to assess the implementation of the School Development Plan, though no negative sanctions are tied to school performance on or adherence to the plan.[80] Evaluations have found that this low-stakes approach prompts school leaders to be more attentive to inspection feedback, but also elicits fewer development and self-evaluation activities than higher-stakes approaches.[81]

Nursing Home Regulation in Australia and Ireland

Both Australia and Ireland have taken approaches to the regulation of aged care facilities that incorporate management-based quality assurance methods and prompt self-regulation and organizational learning. For example, Australia’s Aged Care Quality Agency, which accredits residential aged care facilities, uses self-study and peer assessment to assess facilities on a broad set of standards related to management systems; staffing and organizational development; health and personal care; the care recipient lifestyle; and the physical environment. Outcomes for each standard focus on processes and planning, and accommodate provider-specified objectives. If facilities fail to meet these standards, they are put on a tailored “timetable for improvement,” and the Quality Agency monitors their progress toward addressing deficiencies. If deficiencies are not addressed, accreditation may be revoked. In a comparison of Australia’s system to those of the United States and the United Kingdom, researchers found that Australia uses a much shorter list of broader, outcomes-focused aged care standards than either the U.S. or the U.K. The Australian system, the authors argue, places more emphasis on local accountability and responsibility for determining the most appropriate means for achieving standards within a given service delivery context, and, despite challenges, leads to less ritualism than a compliance-based approach to nursing home regulation.[82]

Using a similar approach, Ireland adopted the National Quality Standards for Residential Care Settings for Older People in Ireland in 2009. The Health Information and Quality Authority assesses aged care centers on these standards at least every three years, and does so more frequently for centers that represent greater risk. The National Quality Standards are broad and patient-oriented, and are broken into seven categories: rights, protection, health and social care needs, quality of life, staffing, the care environment, and governance and management. Center managers are given discretion regarding the best mechanisms to meet most standards.[83]

When a center is assessed, inspectors from the Health Information and Quality Authority visit the center, conduct observations and document review, and interview managers, staff, residents, and relatives to produce a draft inspection report on all 32 standards. Center managers use the draft report to create an action plan that details how they will address the requirements for change and the inspectors’ recommendations, along with a time frame for doing so. Both the action plan and a finalized inspection report are made publicly available on the HIQA webpage, and if a center does not meet or improve upon requirements, HIQA may limit its operations or close it entirely. Reviews of initial inspection reports and follow-up reports reveal evidence of improvement, and some managers have reported that the new standards have been effective in prompting changes in practices and culture toward person-centered care. However, reviews of inspection reports and interviews with managers also reveal dissatisfaction with the extra time and paperwork that the new standards require, as well as concerns that some of this paperwork might distract from the provision of care.[84]


Though varied in their approaches and effectiveness, these diverse quality assurance schemes offer a number of takeaways for higher education. First, many of the systems reviewed combine assessment on standardized outcome measures with assessment on program-defined outcomes or processes. This allows for the establishment of minimum standards to guide regulator action, as well as peer benchmarking, which can lead to more contextualized reviews and increase a provider’s self-regulatory capacity. Second, a number of the systems reviewed here focus heavily on organizational capacity for self-evaluation and continuous improvement, requiring quality improvement plans for accreditation and having organizations take the lead in assessment or in implementing recommendations. While this approach is not entirely different from the current self-study process in higher education accreditation, in many sectors the implementation of self-assessment and a quality improvement plan is accompanied by greater guidance or oversight from a regulator, especially for higher-risk organizations. Third, reviews are differentiated to allow for more than bimodal outcomes. Differentiation enables risk-based regulation that devotes more resources to poorer-performing actors, lowers the stakes for ratings that fall short of full accreditation, provides organizations with more formative guidance and graduated consequences for improvement, and gives consumers more meaningful information.

A Path Forward?

Drawing on the lessons of these management-based approaches to quality assurance, a few key design principles emerge for reforming quality assurance in U.S. higher education:

  • As in the current system, initial approval should focus on the quality and coherence of a provider’s well-articulated plans and outcome goals and an external assessment of the program’s value proposition for students and areas of risk and development. The provider’s track record, its relationship with existing high-quality providers, and its exit strategy in the event of failure should also be considered. Approval should be followed by a probationary period of more-frequent review focused on rigorous follow-through on those plans and areas of risk and development, and on whether the program is meeting minimum thresholds in outcomes for students.
  • The scope of review should include both organizational efficacy and student outcomes, with greater coordination around how to measure student learning.
  • Efficacy and outcomes should be evaluated by both common measures for programs of a particular type and by program-defined measures; both should be based on comparative benchmarks for peer institutions. Failure to meet minimum standards should trigger closer review and tailored support or consequences.
  • An annual review should focus on a small set of student outcome and financial stability measures that are standard for a peer set of programs and appropriately account for conditions of operation.
  • In addition, programs should be assessed every three years on evidence-based, provider-defined goals for planning, implementation, and effectiveness of core educational processes, with a focus on processes identified as areas for improvement in prior years.
  • Quality assurance results should be differentiated, with specific areas flagged for improvement, and should be conveyed in detail (mainly for institutions, peers, and regulators) and in accessible, summary form.
  • Results of reviews, including performance on minimum threshold standards and user-friendly reports, should be made publicly available to students, families, and taxpayers.
  • High-performing institutions should receive designations of excellence and/or extended periods between reviews. Institutions that fail to meet benchmarks, implement improvement plans, or repeatedly fail to achieve improvement should receive tailored supports for organizational learning, and may be subject to more-frequent and/or more in-depth review, externally imposed goals, loss of funds, or loss of accreditation for some or all programs.

These management-based design principles address a number of the shortcomings of the current system as well as those of performance-based reform efforts. By incorporating benchmarked outcomes in addition to organizational capacity, and using both shared and institution-defined measures, the approach is more rigorously attuned to efficacy than the current approach, and more holistic and sensitive to different institutional goals and contexts than performance-based quality assurance. This combination of flexibility and rigor creates pathways for innovative providers to enter the market, while enhancing monitoring of those new entrants. A requirement to reference peer programs in setting benchmarks creates an opportunity for learning across programs.

Differentiated results provide better consumer information and formative feedback, and allow differentiated consequences. More frequent and focused review means that programs will have greater motivation to take feedback seriously, while facing less burdensome review processes. Greater frequency also makes it easier for reviewers to address problems as they arise, before they accumulate.

Based on the evidence presented here, the logic of the approach, and the relationship with the existing system, we believe these design principles represent both a feasible and meaningful improvement in higher education quality assurance. But how likely is it that this shift will take place? And by what mechanism would it occur?

There is a federal legislative pathway to these reforms. With the White House and both houses of Congress now under the same party, the prospects for a reauthorization of the Higher Education Act may have improved. Moreover, the design approach we have outlined here has the benefit of representing something of a compromise position between defenders of the current system and advocates of performance-based systems. It increases accountability for outcomes and efficiency while preserving flexibility in the process for differences in institutional context and avoiding direct regulation by the federal government. To effect these changes, Congress could amend Title IV of the Higher Education Act to condition eligibility for federal financial aid on program accreditation consistent with these design principles, rather than accreditation that addresses the 10 minimum standards identified in the 1992 amendments. Amendments could also relax some of the other requirements for financial aid eligibility, such as the minimum program hours requirement, that inhibit innovation, while dedicating more regulatory resources to emerging actors to ensure accountability. Finally, legislation could shift the focus of the Education Department’s review and recognition of accreditors from the current emphasis on how they monitor compliance with various specific administrative requirements to accreditors’ demonstrated capacity to assess and reinforce educational quality and financial stability.

While a federal legislative approach, tied to financial aid eligibility, would yield the most comprehensive shift, it may also be possible—and more feasible in the current political environment—to pursue many of these changes without the involvement of the federal government. Existing accreditors could adjust their processes to align to the design principles, and should build on previous joint efforts to coordinate changes and share practices. Several accreditors have already begun to experiment with more-frequent review of a subset of standards and common measures of student outcomes, and all seven regional accreditors have expressed intent to accommodate competency-based approaches. New entities could emerge to offer a process like the one described, seek recognition as accreditors, and recruit non-traditional providers to participate. States may also amend their licensing and approval processes to align to the design principles, or seek recognition as accreditors themselves.

Simply requiring accreditation aligned with the design principles is not guaranteed to improve institutional processes or student outcomes. One risk is that a poor implementation of the design principles would differ little from the current system. To mitigate that risk, there must be meaningful oversight (by the federal government or another authority) of the accreditors or other third-party quality assurance entities. Such oversight could, perhaps, be augmented with a peer validation process among accreditors, coordinated by the oversight authority, to promote the identification and sharing of best practices. Another risk, even if the design principles are implemented well, is that greater transparency around shortcomings and improvement, and the dynamic nature of the process itself, could undermine confidence in the sector or in the process. That risk warrants careful consideration of which information is reported publicly versus shared with providers on a formative basis. Finally, accreditors, institutions, and their stakeholders must approach the accreditation process, and other regulatory actions, as opportunities for learning and capacity building rather than exercises in compliance. This would represent a substantial shift in perspective for many participants in the process, who are accustomed to the way things have always been done. Creating opportunities for early, small “wins,” in which a revised process demonstrates some value to all parties involved, will be critical.


Quality assurance for higher education in the United States is a topic of much debate. The urgency of improving student outcomes, decreasing costs, and accommodating changing instructional models requires that quality assurance and accreditation schemes adopt new practices while keeping bad actors out of the sector. In order to accomplish this, we believe that much of the current system of quality assurance in the United States, including the roles played by the federal government and regional accreditors; the process of self-evaluation, site-visits, and peer reviews; and the focus on institution-specific processes should be maintained. But, we have also identified a number of ways in which this system could be improved to balance greater rigor and transparency with greater flexibility and incentives for institutional improvement and learning. Grounded in a management-based theory of regulation, the design principles aim to reinforce institutional capacity for continuous improvement in processes and outcomes, backstopped by minimum standards. Ultimately, we would expect this system to provide students with access to a diverse set of postsecondary experiences that are more consistent in improving the quality of learning and the likelihood of earning a valuable credential.

Appendix: Summary of Recommendations

Convening on the Future of Higher Education Quality Assurance

Summary of Recommendations – May 2017

February 16-17, 2017 | Penn Law School, Philadelphia

Organizers: Martin Kurzweil, Wendell Pritchett, Jessie Brown

In February 2017, Ithaka S+R and the Penn Program on Regulation of Penn Law School convened a group of 30 experts, leaders, accreditors, policy-makers, and other stakeholders in Philadelphia to review and elaborate on a set of design principles for reforming the system of assuring quality in U.S. higher education. This summary reflects the takeaways of the organizers, who benefitted greatly from the discussion with and input by the participants; it does not necessarily reflect the views of any other participant.

The Challenge

The current system of quality assurance, centered on but not limited to institutional and program accreditation, creates significant, input-focused barriers to entry, and yet fails to differentiate among institutions and is too slow to intervene with underperforming providers. Infrequent review, lack of benchmarking, and non-public reports create inadequate or poorly aligned motivation and offer ineffective guidance for improvement.

Objectives for Reform

Reforming the quality assurance system should aim to accomplish several objectives:

  • Increase opportunities for innovative new programs.
  • Provide students, parents, and taxpayers with better information about providers’ learning and other outcomes.
  • Support providers’ organizational learning.
  • Signal to regulators which providers require enhanced attention.
  • Respond to poor performance with tailored, escalating support and consequences, including an orderly exit from the market for providers that fail to meet minimum standards.


Accreditor Recognition

  • Reduce barriers to entry for new accreditors.
  • Focus accreditor review and recognition on the accreditor’s demonstrated capacity to assess and reinforce educational quality and financial stability.
  • Eliminate requirements for accreditors to monitor programs’ compliance with specific administrative regulations unassociated with educational quality and financial stability.

Approval of New Programs

  • Promote partnerships between new entrants and existing providers.
  • Assess new entrants on:
    • Provider track record;
    • Demonstrated understanding of the student market, and alignment of credentials and curriculum with market demand;
    • Partnerships with or sponsorships by existing providers;
    • Feasibility, sustainability, adaptability;
    • Clarity and transparency of program-specific target outcomes and evaluative capacity;
    • Plans for growth;
    • Policies for credit transferability; and
    • Exit strategy (e.g., financial guarantee, teach-out).
  • Establish differentiated “tracks” for new entrants, depending on initial performance on entrant standards.
  • Expand Title IV eligibility, shifting the focus of approval from gatekeeping to formative evaluation.

Ongoing Review of Existing Programs

  • Assess programs annually on a small set of student outcome and financial stability measures that are standard for a peer set of programs and appropriately account for conditions of operation.
  • Assess programs every three years on evidence-based, provider-defined goals for planning, implementation, and effectiveness of core educational processes, with a focus on processes identified as areas for improvement in prior years. Core educational processes include:
    • Curricular coherence;
    • Teaching and learning effectiveness;
    • Transparent and effective use of learning outcomes assessment;
    • Use of research-based practices for improvement of learning;
    • Use of research-based student support and advising practices; and
    • Alignment of faculty, staff, and administrator incentives to planning.
  • Benchmark providers’ performance on these standards against peers’ performance.

Consequences and Reporting

  • Accreditors’ annual reports on standard student outcome and financial stability measures and periodic reports on planning and effectiveness of core educational processes should be public, easy to access, and understandable to students and families.
  • A program’s failure to meet minimum standards for student outcome and financial stability measures should trigger an investigation by the provider’s accreditor and regulators, with tailored support and consequences based on findings of the investigation.
  • Accreditors’ periodic reports on core educational processes should clearly identify areas in need of improvement and providers’ plans for addressing them; the report should address the efficacy of improvement plans identified in prior reports.
  • Providers that meet rigorous, evidence-based, institution-defined goals for core educational processes or that excel in standard student outcome and financial stability measures should be recognized with designations of excellence and/or an extended review period on a particular area of strength.
  • Consequences for failure to meet standard or institution-defined goals should be tailored to the particular circumstances of the provider, and should include:
    • Heightened scrutiny, including more intensive and frequent reviews;
    • Publicly disclosed probationary status;
    • Graduated levels of accreditation that entail partial loss of access to funding;
    • Removal of accreditation or license.
  • Regional and program accreditors should coordinate on the standard measures, graduated consequences, and the definitions and format used in reporting.

Convening Participants

  • Wally Boston, American Public University System
  • Barbara Brittingham, New England Association of Schools and Colleges
  • Jessie Brown, Ithaka S+R (organizer)
  • Richard Chait, Harvard Graduate School of Education
  • Cary Coglianese, Penn Law School
  • Art Coleman, Education Counsel
  • Michelle Cooper, Institute for Higher Education Policy
  • David Dill, University of North Carolina, Chapel Hill
  • Peter Ewell, QA Commons
  • Paul Gaston, Kent State University
  • Catharine Bond Hill, Ithaka S+R
  • Debra Humphreys, Lumina Foundation
  • Jonathan Kaplan, Walden University
  • Robert Kelchen, Seton Hall University
  • Marvin Krislov, Oberlin College
  • Martin Kurzweil, Ithaka S+R (organizer)
  • Paul LeBlanc, Southern New Hampshire University
  • Michale McComis, Accrediting Commission of Career Schools and Colleges
  • Michael McPherson, Spencer Foundation
  • Ted Mitchell, former Under Secretary of Education
  • Adam Newman, Tyton Partners
  • Susan Phillips, University at Albany, SUNY
  • Wendell Pritchett, Penn Law School (organizer)
  • Scott Ralls, Northern Virginia Community College
  • Terrel Rhodes, Association of American Colleges and Universities
  • Louis Soares, American Council on Education
  • Jamienne Studley, Beyond 12 and Mills College
  • Betty Vandenbosch, Kaplan University
  • Ralph Wolff, QA Commons


  1. U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS), “Completions Survey” (IPEDS-C: 94); and Fall 2005 and Fall 2015, Completions component. See Digest of Education Statistics 2016, table 318.40.
  2. Jordan Weissmann, “America’s Wasteful Higher Education Spending, In a Chart,” The Atlantic (September 30, 2013),
  3. Data refers only to full-time, first time in college students, who have higher completion rates than part time students. See “Graduation Rates,” National Center for Education Statistics,
  4. Sandy Baum et al. “Trends in Student Aid 2016,” College Board (2016), tables 1a, 1b.
  5. This paper greatly benefited from comments by and discussion with participants in a convening on the Future of Quality Assurance in U.S. Higher Education, hosted by Ithaka S+R and the Penn Program on Regulation at Penn Law School on February 16 and 17, 2017, and supported by the Spencer Foundation. Participants reviewed drafts of the paper both before and after the convening. A summary of the recommendations that emerged from the convening and the list of convening participants are included in the appendix.
  6. There are many proposals for new systems or changes to quality assurance for higher education, many of which share elements with our own. For example, for a proposal for a risk-based approach, see Arthur L. Coleman, Teresa E. Taylor, Bethany M. Little, Katherine E. Lipper, “Getting Our House in Order: Transforming the Federal Regulation of Higher Education as America Prepares for the Challenges of Tomorrow,” Education Counsel (March 2015),; for a proposal for an alternative system with common minimum standards for student and financial outcomes, see Ben Miller, David A. Bergeron, Carmel Martin, “A Quality Alternative: A New Vision for Higher Education Accreditation,”
  7. “Fast Facts,” National Center for Education Statistics. This includes only institutions that participate in the federal student aid program. There are thousands of additional providers of postsecondary education that do not participate in federal student aid.
  8. For a full text of the HEA, see
  9. See Peter Ewell, “Transforming Institutional Accreditation in U.S. Higher Education,” National Center for Higher Education Management Systems (March 2015).
  10. See “2016-2017 Directory of CHEA-Recognized Organizations,” Council for Higher Education Accreditation (May 2017).
  11. “Comments on Sen. Alexander’s Accreditation Policy Paper,” American Council on Education (April 30, 2015),
  12. Doug Lederman, “Accreditors as Federal ‘Gatekeepers,’” Inside Higher Ed (January 30, 2008); A. Lee Fritschler, “Accreditation’s Dilemma: Serving Two Masters-Universities and Governments,” Council for Higher Education Accreditation (September 22, 2008); Ben Miller, David Bergeron, and Carmel Martin, “A Quality Alternative: A New Vision for Higher Education Accreditation,” Center for American Progress (October 2016),
  13. See Andrew P. Kelly et al., “Inputs, Outcomes, Quality Assurance: A Closer Look at State Oversight of Higher Education,” American Enterprise Institute (August 2015),
  14. For notable examples of critiques and proposals for reforms, see Ewell, “Transforming Institutional Accreditation in U.S. Higher Education;” The National Task Force on Institutional Accreditation, “Assuring Academic Quality in the 21st Century,” American Council on Education,
  15. Higher Education Amendments of 1992, Pub. L. 102-325, 106 Stat. 458 (1992).
  16. Doug Lederman, “No Love, But No Alternative,” Inside Higher Ed, September 1, 2015,
  17. “A Test of Leadership: Charting the Future of U.S. Higher Education. A Report of the Commission Appointed by Secretary of Education Margaret Spellings,” U.S. Department of Education (September 2006),
  18. Arne Duncan, Secretary of U.S. Dept. of Education, Toward a New Focus on Outcomes in Higher Education, Remarks at Univ. of Maryland-Baltimore County (July 27, 2015),
  19. For a critique of these limitations as they relate to competency-based education, see Amy Laitinen, “Cracking the Credit Hour,” New America Foundation and Education Sector (September 2012). For the experimental sites initiatives, see Ted Mitchell, “The Competency-Based Education Experiment Expanded to Include More Flexibility for Colleges and Students,” Homeroom: The Official Blog of the U.S. Department of Education. Regional accreditors have also begun to create systems that accommodate innovation, particularly as it relates to competency-based programs that do not fit within the typical credit-hour model. In June 2015, in conjunction with ED’s experimental sites initiative to offer federal aid to competency-based programs, the seven regional accreditation commissions issued a joint statement to define competency-based education and establish common processes for evaluating competency-based and direct assessment programs. These evaluation considerations emphasize institutional capacity to offer and assess programs, external references of defined competencies, opportunities for regular interaction between faculty and students, and competencies’ alignment with institutional degree requirements.
  20. Tom LoBianco, “Rubio: College ‘cartels’ need busting in new economy,” CNN (July 7, 2015),
  21. “Voluntary System of Accountability,” Association of Public & Land-Grant Universities.
  22. Natasha Jankowski et al., “Transparency & Accountability: An Evaluation of the VSA College Portrait Pilot,” A Special Report from the National Institute for Learning Outcomes Assessment for the Voluntary System of Accountability (March 2012); Doug Lederman, “Public University Accountability 2.0,” Inside Higher Ed (May 6, 2013),
  23. “Voluntary Framework of Accountability,” American Association of Community Colleges.
  24. “Performance-Based Funding for Higher Education,” National Conference of State Legislatures,
  25. See the Tennessee Higher Education Commission’s “2015-2020 Outcomes Based Funding Formula” web page at and the North Carolina Community Colleges report 2014 Performance Measures for Student Success at
  26. Performance Based Funding Evaluation Report, University System of Ohio Board of Regents (December 31, 2014),
  27. Kysie Miao, “Performance-Based Funding of Higher Education: A Detailed Look at Best Practices in 6 States,” Center for American Progress (August 2012),
  28. Nicholas Hillman, “Why Performance-Based College Funding Doesn’t Work,” The Century Foundation (May 25, 2016),; Hana Lahr et al., “Unintended Impacts of Performance Funding on Community Colleges and Universities in Three States,” Community College Research Center Working Paper No. 78 (November 2014),
  29. “National and Programmatic Accreditors: Summary of Student Achievement Standards (January 2017),” available at Some have critiqued these standards as “insufficiently ambitious.” See Miller, et al. “A Quality Alternative.”
  30. See “Core Competency FAQs,” WASC Senior College and University Commission,; “Situating WASC Accreditation in the 21st Century: Redesign for 2012 and Beyond,”
  31. Doug Lederman, “Raising the Bar on Quality Assurance,” Inside Higher Ed (November 18, 2011),
  32. “Gainful Employment,” U.S. Department of Education,
  33. “Education Department Releases Final Debt-to-Earnings Rates for Gainful Employment Programs,” U.S. Department of Education (January 9, 2017),
  34. “Court Upholds New Gainful Employment Regulations,” Edvisors Blog (June 2015),
  35. See Danielle Douglas-Gabriel, “Embattled for-profit Corinthian Colleges closes its doors,” The Washington Post (April 26, 2015),
  36. “Important Information on the Derecognition of ACICS,” U.S. Department of Education,
  37. Megan Slack, President Obama Explains His Plan to Combat Rising College Costs (Aug. 22, 2013), 
  38. Pres. Barack Obama, Remarks by the President on College Affordability at the State Univ. of New York Buffalo (Aug. 22, 2013),
  39. Michael Stratford, “Staking Out Positions,” Inside Higher Ed, (Nov. 21, 2013).
  40. Doug Lederman et al., “Rating (and Berating) the Ratings,” Inside Higher Ed (February 7, 2014),
  41. Stratford, “Staking Out Positions.”
  42. Lederman et al., “Rating (and Berating) the Ratings.”
  43. Letter from Molly Corbett Broad, President, Am. Council on Ed., to Richard Reeves, Nat’l Ctr. for Ed. Statistics (Jan. 31, 2014),
  44. “Fact Sheet: Providing Students and Families with Comprehensive Support and Information for College Success,” The White House Office of Press Secretary (September 28, 2016),
  45. Press Release, White House Office of the Press Secretary, Fact Sheet: Empowering Students to Choose the College that is Right for Them (Sept. 12, 2015),; “Better Information for Better College Choice and Institutional Performance,” U.S. Department of Education (January 2015),
  46. For a detailed explication of management-based regulation, see Cary Coglianese and David Lazer, “Management-Based Regulation: Prescribing Private Management to Achieve Public Goals,” Law and Society Review 37, no. 4 (November 2004) 691-730, available online at
  47. Accreditors do have flexibility in differentiating their review process depending on institutional risk and performance, though consequences of reviews remain bimodal. See “Flexibility in Application of Accrediting Agency Review Processes; and Emphases in Review of Agency Effectiveness,” letter from Ted Mitchell (April 22, 2016), available at
  48. See American Board for Engineering and Technology (ABET): Accreditation,;
  49. Lisa R. Lattuca et al., “Engineering Change: A Study of the Impact of EC2000,” ABET, Inc. (2006),
  50. “Fact Sheet: ED Launches Initiative for Low-Income Students to Access New Generation of Higher Education Providers,” U.S. Department of Education (April 16, 2016),
  51. Applications with quality assurance plans are available at: “Educational Quality through Innovative Partnerships (EQUIP),” Office of Educational Technology, U.S. Department of Education,
  52. “Regional accreditors announce expanded review of institutions with low graduation rates,” Council of Regional Accrediting Commissions Press Release (September 21, 2016),; Andrew Kreighbaum, “Tougher Scrutiny for Colleges with Low Graduation Rates,” Inside Higher Ed (September 21, 2016),
  53. See for example, Ewell, “Transforming Institutional Accreditation in U.S. Higher Education.”
  54. See William Massy, “Auditing Higher Education to Improve Quality,” The Chronicle of Higher Education (June 20, 2003),; David Dill, “Quality Assurance in Higher Education: Practices and Issues,” University of North Carolina (June 2007),
  55. See the AQA website:; see also, “External Review Report: Academic Quality Agency for New Zealand Universities,” Academic Quality Agency (September 2015),
  56. “Quality assurance arrangements,” The New Zealand Qualifications Framework, The New Zealand Qualifications Authority (May 2016).
  57. See David Dill, “Capacity Building as an Instrument of Institutional Reform: Improving the Quality of Higher Education through Academic Audits in the UK, New Zealand, Sweden, and Hong Kong,” Journal of Comparative Policy Analysis 2, no. 2, pp. 211-234; Mahsood Shah, “Ten years of external quality audit in Australia: evaluating its effectiveness and success,” Assessment & Evaluation in Higher Education 37, no. 6 (September 2012), pp. 761-772; Mahsood Shah and Leonid Grebennikov, “External Quality Audit as an Opportunity for Institutional Change and Improvement,” Proceedings of the Australian Universities Quality Forum 2008 (2008).
  58. See Denise Bradley et al., “Review of Australian Higher Education: Final Report,” Australian Government (December 2008). For the reasons behind changes in the UK, see “Revised operating model for quality assessment,” Higher Education Funding Council for England (March 2016),,2014/Content/Pubs/2016/201603/HEFCE2016_03.pdf.
  59. Australian Government Tertiary Education Quality and Standards Agency; Mahsood Shah and Lucy Jarzabkowski, “The Australian higher education quality assurance framework: from improvement-led to compliance-driven,” Perspectives: Policy and Practice in Higher Education 17, no. 3 (November 9, 2013), pp. 96-106.
  60. “Revised operating model for quality assessment,” Higher Education Funding Council for England (March 2016),,2014/Content/Pubs/2016/201603/HEFCE2016_03.pdf; Peter T. Ewell, “Troubling Times for Quality Assessment in the United Kingdom: The Demise of QAA from the States,” provided to the authors by Peter Ewell.
  61. Christian Ganseuer and Petra Pistor, “From tools to a system: The effects of internal quality assurance at the University of Duisburg-Essen,” International Institute for Educational Planning (2016).
  62. Ibid. In surveys, faculty knowledge of and involvement in quality work depended on their proximity to its implementation, as well as on how long the quality assurance component in question had been in place.
  63. “Continuous Quality Improvement (CQI) Strategies to Optimize your Practice,” The National Learning Consortium and Health Information Technology Research Center (April 30, 2013).
  64. “How to Improve,” Institute for Healthcare Improvement; Michael J. Taylor et al., “Systematic review of the application of the plan-do-study-act method to improve quality in healthcare,” BMJ Quality and Safety 23 (2014).
  65. Shirley Y. Coleman, “Six Sigma in Healthcare,” in Statistical Methods in Healthcare, ed. Frederick W. Faltin et al. (Chichester, UK: John Wiley and Sons, 2012); Jami L. DelliFraine et al., “Assessing the Evidence of Six Sigma and Lean in the Health Care Industry,” Quality Management in Health Care 19, no. 3 (2010), pp. 211-225.
  66. “Quality Improvement,” Health Resources and Services Administration; “Quality Improvement Organizations,” Centers for Medicare & Medicaid Services.
  67. “Health Plan Accreditation,” National Committee for Quality Assurance.
  68. For more information on this transition, see D.S. O’Leary and M.R. O’Leary, “From quality assurance to quality improvement: The Joint Commission on Accreditation of Healthcare Organizations and Emergency Care,” Emergency Medicine Clinics of North America 10, no. 3 (1992).
  69. “Accreditation,” The Joint Commission.
  70. “Accreditation Process,” Public Health Accreditation Board.
  71. S.M. Shortell et al., “Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress,” Milbank Quarterly 76, no. 4 (December 1998), pp. 593-624; see also the Agency for Healthcare Research and Quality’s “Closing the Quality Gap” series.
  72. Kathryn M. McDonald et al., “Through the Quality Kaleidoscope: Reflections on the Science and Practice of Improving Health Care Quality,” Agency for Healthcare Research and Quality (February 2013); Kaveh G. Shojania and Jeremy M. Grimshaw, “Evidence-based quality improvement: the state of the science,” Health Affairs 24, no. 4 (January
  73. See, for example, McDonald et al., “Through the Quality Kaleidoscope;” Dionne S. Kringos et al., “The influence of context on the effectiveness of hospital quality improvement strategies: a review of systematic reviews,” BMC Health Services Research 15 (2015).
  74. Heather C. Kaplan et al., “The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement,” BMJ Quality and Safety 21 (2012), pp. 13-20; Taylor et al., “Systematic review of the application of the plan-do-study-act method to improve quality in healthcare;” Kieran Walshe and T. Freeman, “Effectiveness of quality improvement: learning from evaluations,” Quality & Safety in Health Care 11 (2002), pp. 85-87; Kieran Walshe, “Understanding what works—and why—in quality improvement: the need for theory-driven evaluation,” International Journal for Quality in Health Care 19, no. 2 (2007), pp. 57-59.
  75. See Helen F. Ladd, “Education Inspectorate Systems in New Zealand and the Netherlands,” Education Finance and Policy 5, no. 3, pp. 378-392.
  76. M. Ehren et al., “Impact of school inspections on improvement of schools—describing assumptions on causal mechanisms in six European countries,” Educational Assessment, Evaluation and Accountability 25, no. 1, pp. 3-43.
  77. “School Evaluation Indicators: Effective Practice for Improvement and Learner Success,” Education Review Office (July 2016).
  78. Catherine Wylie, “School governance in New Zealand—how is it working?” New Zealand Council for Educational Research (2007).
  79. See Ladd, “Education Inspectorate Systems in New Zealand and the Netherlands;” “Inspection,” Inspectorate of Education, Ministry of Education, Culture, and Science.
  80. “School Inspections in Austria,” EU School Inspections.
  81. Herbert Altrichter and David Kemethofer, “Does Accountability Pressure through School Inspections Promote School Improvement?” School Effectiveness and School Improvement 26, no. 1, pp. 32-56.
  82. See John Braithwaite et al., Regulating Aged Care: Ritualism and the New Pyramid (Northampton: Edward Elgar Press, 2007).
  83. See “National Quality Standards for Residential Care Setting for Older People in Ireland,” Health Information and Quality Authority (2009); “Quality and Standards in Human Services in Ireland: Residential Care for Older People,” National Economic & Social Development Office (August 2012).
  84. See “Quality and Standards in Human Services in Ireland: Home Care for Older People.”