The past two decades have seen increasing pressure, from both within and outside higher education, for greater transparency about student learning. Internally, there is a desire to understand and improve the efficacy of curriculum, pedagogy, and student support. Externally, there is a desire to hold institutions—particularly public institutions—accountable. As a result, in the early 2000s the major higher education accreditors began to review colleges’ processes for setting student learning outcomes, assessing those outcomes, and responding to the results.[1]

While there is general agreement that understanding student learning is important, many institutions have struggled to make assessment work. Too often, the assessment process is undertaken solely for the purpose of demonstrating compliance with the accreditation standard during a decennial review. Even when a central administration makes a serious effort to develop an ongoing process, faculty participation is often pro forma.

The University of Pittsburgh is a notable exception to this pattern. Pitt is a state-related university with 35,000 students on five campuses in Western Pennsylvania.[2] Over the past 20 years, Pitt has demonstrated significant improvement in student outcomes and academic reputation.[3] One of the factors that Pitt faculty members and administrators credit for its success is a sustained effort over the past decade to pay systematic attention to student learning.[4]

Since 2006, Pitt has been engaged in an ambitious, university-wide initiative to assess student learning and use the results in an iterative process of revising curriculum and instruction. Each of the university’s 350 undergraduate and graduate degree and certificate programs, as well as undergraduate general education, has a set of three to five learning outcomes. Every three to five years, each of those learning outcomes is assessed in a systematic way. Program committees are responsible for developing the assessments, setting targets for them, interpreting the results, and describing a plan for acting on those results.

To learn more about Pitt’s unique and apparently robust assessment system, I spent two days in October 2014 meeting with 20 Pitt faculty members and administrators.[5] I also reviewed documents related to Pitt’s assessment system, including the university’s 2012 Middle States accreditation review self-study, which focused on assessment.[6]

I found evidence of a widespread and sincere commitment to the assessment of student learning, and to using the assessment information to improve program structure, student support, curriculum, and instruction. This culture change, as much as the assessment system itself, is the subject of this case study. How has Pitt been able to engage its faculty in an ongoing process of student learning assessment and planning, when so many other, similar efforts have not taken hold?

As I discuss below, the most important factor in the development of Pitt’s culture of assessment was its decentralized, yet accountable, approach. University leaders established a timeline and general framework for assessment, offered feedback, designated degree and certificate programs as the units of assessment, and, most significantly, left the details to faculty responsible for those programs. This combination of broad oversight and localized management has fostered a sense of ownership among faculty, who have made assessment an important driver of program improvement.

Origins and Operation of Pitt’s Assessment System

James Maher, Pitt’s provost from 1994 to 2010, and his vice provost for undergraduate and graduate studies and eventual successor, Patricia Beeson, began discussions with deans and faculty about systematic assessment of student learning in 2004. According to administrators I interviewed, Maher and Beeson were motivated in part by new requirements from the Middle States Commission on Higher Education and the desire to prepare for Pitt’s 2012 review. In 2002, Middle States began requiring institutions under review to describe “the knowledge, skills, and competencies that students are expected to exhibit upon successful completion of a course, academic program, co-curricular program, general education requirement, or other specific set of experiences,” and to demonstrate that they are “[a]ssessing student achievement of those learning outcomes” and “[u]sing the results of those assessments to improve teaching and learning and inform planning and resource allocation decisions.”[7]

More significant, however, was the university’s by-then decade-old practice of program planning. In this annual process, each school and department (including student services and the library) develops a five- to ten-year strategic plan, assesses progress against the goals in the plan each year, and adjusts the plan as necessary for the following year. Assessment of student learning was seen as a “natural extension” of the program planning process that would provide better evidence of program strengths and shortcomings and support continuous improvement. The positive prior experience of particular units with student learning assessment, most notably in the Swanson School of Engineering, also encouraged administrators to focus on assessment.[8] In September 2006, Maher appointed an ad hoc committee to develop a proposal for a university-wide assessment system.

Pitt’s Council of Deans formally initiated the university-wide assessment program in November 2006 with the adoption of a set of Assessment Guidelines.[9] The Guidelines designate “programs” as the units of assessment. A “program” is defined as “all degree or certificate granting programs listed in the graduate and undergraduate bulletins.” Each major at a degree level (e.g., bachelor of science in mathematics or doctor of philosophy in English) must be assessed; dual- and joint-degree programs must be assessed separately if their goals differ from those of the component programs. School- and campus-level general education curricula are also “programs” subject to assessment. Although not mentioned in the Guidelines (and not the main focus of this case study), student support services—advising, the health center, etc.—have also adopted the assessment process.

The Guidelines make “program faculty” responsible for the “development and administration” of assessment for their programs, consistent with the criteria described below. Depending on the scope of the curriculum, the Guidelines designate department chairs, deans, or campus presidents as responsible for the assessment of general education programs. Deans, directors, and campus presidents are also required to report annually to the university provost on the assessment activities and results for the programs under their purview.

For each program, those responsible must document the following aspects of the assessment process in a standard-form “matrix”:[10]

  • Program goals;
  • Three to five educational outcomes;
  • Methods of assessing those outcomes, including at least one assessment providing “direct evidence”;
  • Targets for the results of each assessment; and
  • A process for reviewing results and actions taken or planned based on that review.

“Educational outcomes” and “direct evidence”—concepts that are core to the assessment scheme—merit further explanation. Educational outcomes (also referred to as “learning outcomes”) are statements of the concrete knowledge and skills that students should possess or be able to demonstrate upon successful completion of the program. In the early rounds of Pitt’s assessment system, some programs set learning outcomes that simply restated program requirements; these were not proper learning outcomes, and the programs were advised to amend their matrices. Some examples of well-constructed learning outcomes are:[11]

  • “Students will be able to interpret events and processes in a transnational context; as part of the global movements of ideas, people, and commodities; or as examples of patterned sociocultural interactions.” (MA, History Education, Dietrich School of Arts and Sciences)
  • “Students will demonstrate the ability to create maps and charts based on the proper acquisition, interpretation, and presentation of geographic information.” (BA, Geography, University of Pittsburgh at Johnstown)
  • “By the time of graduation, students will be able to develop a written persuasive argument for a clinical intervention based upon a critical analysis and review of a supporting body of clinical research and a reflection on its potential impact on a subject of intervention.” (BSN, School of Nursing)

“Direct evidence” demonstrates the desired skill or knowledge itself, in contrast to “indirect evidence,” which demonstrates student or faculty perceptions or outcomes that may be related to possessing the skill or knowledge. Assessments that provide direct evidence include common, embedded questions in the exams for all sections of a foundational or capstone course; rubric-based review of a sample of papers or projects; or standardized exams. Assessments that provide indirect evidence include student or faculty surveys about their experience in a course or in the program as a whole; course grades; or graduation or job placement rates.

The Guidelines include several recommendations that seem intended to avoid overburdening program faculty or students with assessment-related tasks. Programs are encouraged to stagger their assessments, focusing on only one or two learning outcomes each year. The Guidelines also specify that assessments may be based on a sample of students. Finally, programs can request permission to substitute a professional accreditation process for the standard assessment protocol by showing how the two are related.

Although assessment is left primarily in the hands of program faculty, administrators provide a variety of supports and resources. The provost’s office created a website with documentation clarifying aspects of the assessment system, illustrative materials, and links to external resources. During the first several years of the assessment process, administrators in the provost’s office reviewed each program’s assessment matrix and provided detailed feedback. Department chairs and deans often did (and continue to do) the same for the programs within their departments or schools. If a program is struggling with a particular aspect of its assessment process, administrators often share the assessment matrix of a program that has handled that aspect well, or connect the two programs’ assessment coordinators. Pitt’s Center for Instructional Development & Distance Education (CIDDE) has assessment specialists available to provide individualized guidance and support to program assessment coordinators. And each year since 2012 the provost’s office has held an assessment conference, in which assessment coordinators and administrators from each program join their colleagues for workshops, panel discussions, and an outside keynote address.

Assessment is an annual process. After finalizing their initial assessment matrices (with outcomes, assessments, and targets) in 2007, programs designated outcomes for assessment on a three- to five-year cycle, with one or two outcomes assessed each year. Assessments occur at various times during the academic year, depending on the nature of the assessment. When the assessments for an outcome are complete, the program assessment coordinator produces a report of the results. Program faculty discuss the report at a regular or special meeting and determine actions to take in response to the findings. The interpretation of results and the responsive actions are recorded in the matrix, and program or departmental leaders set about making the identified changes.

Many of the Pitt faculty members with whom I spoke cited the review of student work and the discussion of results as the most valuable aspects of the assessment process. It is in these discussions that debates arise about what students should learn and how best to measure it, that revelations emerge about what students are actually learning, and that insights surface about how the program’s curriculum and instruction are meeting or failing to meet program goals. By demonstrating the value of the assessment process, these discussions also motivate program faculty to take assessment seriously, including by identifying ways in which the process could be improved.[12]

The process is not meant to be punitive, and Pitt does not attach specific consequences to assessment results or to the quality of a program’s assessment process. Changes to programs are largely self-motivated. However, a program’s assessment process and its assessment results are considered during periodic program reviews by deans and the university administration. There is no formula by which assessment translates into resources, but as a senior university administrator told me, a program that has taken assessment seriously, and therefore has solid evidence of its students’ learning, is able to make a more convincing case for the resources it seeks.[13]

Evidence of Impact

There is strong evidence that assessment of student learning has been widely and sincerely adopted at Pitt and that it has had an important impact on curriculum and instruction. Every program required to have an assessment matrix has one. For its accreditation review self-study, a Pitt working group on student assessment sampled 10 percent of these matrices and determined that each contained “well-developed statements of learning outcomes that are appropriate to their specific aims” and employed “a variety of discipline-appropriate methods of collecting direct evidence.”[14]

These well-developed assessment plans did not emerge fully formed; program faculty built them through iterative improvement. Nearly every program has amended its learning outcomes and assessment instruments over time, as experience with the system revealed aspects that could be improved. One recent example is the decision to de-emphasize results on comprehensive exams in the assessment of a history PhD program at the Oakland campus. Initially, the program used a sample of students’ comprehensive exams, re-graded by a faculty committee, to assess two learning outcomes.[15] But after the first round, the program faculty felt that the results duplicated other findings. Furthermore, faculty were concerned that placing so much weight on the comprehensive exams in the assessment created pressure for faculty and students to put more effort into the exams than their educational and job-market value justified. The program committee is now working on an alternative assessment methodology and considering changes to the comprehensive exam policy.

In another example, from the Johnstown campus, initial results on a calculus assessment were well below the benchmark set by the program faculty. The faculty reviewed the curriculum and made some adjustments, but also determined that the assessment was not well aligned with the learning outcome and that the learning outcome itself was not well specified. They adjusted both the language of the learning outcome and the assessment instrument to achieve a better fit, and are now more confident in the results. Such ongoing refinement of the assessment process—several years after the accreditation review and after the provost’s office ceased to closely review matrices—is important evidence that assessment is taken seriously: faculty members care about the quality of the information it provides.

While hardly a representative sample, the comments of administrators and faculty members I interviewed also provide some insight into how assessment is perceived at the university. One administrator indicated that, after some initial resistance, the faculty in his unit came to see assessment as “good for the institution,” that it would “help them improve” and provide an opportunity to “show what they’re doing.” A faculty member who served on the self-study working group explained that what she learned about the assessment system through that review made her “proud of Pitt and confident in the leadership.” As noted above, several faculty members described the review of assessment results as “eye-opening” and highly valuable.

The most salient evidence of the impact of assessment on education at Pitt is the large number of programmatic changes stemming from the assessment process. The accreditation self-study working group identified 310 instances in which interpretation of assessment results led to changes in program content and structure between 2007 and 2012.[16] My interview subjects cited numerous additional changes since 2012. Curricular changes include new or substantially revised courses, new or substantially revised majors, additions to major requirements, the elimination of tracks within majors, and revised course sequences. For instance, program faculty for the history bachelor’s degree at the Oakland campus doubled the number of required writing seminars and differentiated the seminars into a scaffolded sequence after poor results on the assessment of writing skills. The faculty responsible for the bachelor’s degree in creative writing at the Oakland campus eliminated a track in newspaper writing after the assessment revealed poor writing skills and a lack of connection between the curriculum of the track and the broader learning outcomes of the major. In some cases, programs hired new faculty members to cover topic areas in which assessments revealed student weaknesses. For example, the Bradford campus prioritized the hiring of a management professor after the assessment for the business major revealed weaknesses in students’ knowledge of management theory. Assessment led department leaders to make changes outside the classroom, as well: a dozen programs revised their advising structure in response to assessment findings. Some, but not all, of these changes have led to improvements in assessment results; if improvement is not sufficient, program faculty try something else.

These characteristics impressed the Middle States evaluation team in 2012. The team found that “there is a genuine and evolving ‘culture of assessment’ at the University” and that assessment is “meaningfully integrated into the process of shaping curricula and courses within units and departments.”[17] More specifically, the evaluation team concluded:
“Evidence indicates that assessment of educational program outcomes is pervasive throughout the institution, including undergraduate, graduate, and professional programs. These assessment activities are planned and ongoing. Most faculty perceive the beneficial value of assessment processes within their academic disciplines and use the results of student assessment to guide decisions regarding curriculum and pedagogy.”[18]

Success Factors

My conversations with members of the Pitt community underscore several factors that help explain the university’s successful establishment of its assessment system.

Focusing on the program as the unit of assessment

The decision to assess programs, as opposed to departments or courses, had a number of salutary effects. First, a program is by its very nature a set of requirements that, in combination, are supposed to lead to learning outcomes; it is therefore a good fit for assessment. Second, a program is far enough removed from the classroom for assessment not to implicate a particular faculty member, but coherent enough for assessment to yield practical, curricular action steps. In other words, it is a meaningful unit for assessment, yet unlikely to wound any individual’s pride. Third, focusing on programs requires faculty to coalesce around a unit that is somewhat oblique to the traditional departmental governance structure, encouraging new relationships and a fresh perspective. Finally, program structure is quite important to student learning outcomes, but without prompting it is rarely the focus of faculty, who tend to pay more attention to their own courses or to departmental organization.

Vesting responsibility for the details of assessment with faculty

It is common for university assessment systems to take a centralized approach, with an “office of assessment” that creates and interprets assessments for academic programs. By contrast, Pitt’s assessment system is highly decentralized. Program faculty decide what the learning outcomes for the program should be, how to measure them, and how to respond to results—the only requirement is that they have a process in place and produce a matrix that is coherent. Many faculty members cited these features as critically important for developing a sense of ownership of the assessment system. One faculty member explained that because so much responsibility is vested in program faculty, assessment “does not feel imposed, it feels useful and productive.” Another described the assessment process as “thorny, but with a rose attached—faculty get to drive the process.” As another explained, the initial request from the provost was for program faculty to determine what the learning objectives for their program were; it was a request to do something, not something done to them. And when faculty engaged in the initial conversations about learning objectives, they realized the value of the process and were motivated to take it up. Pitt administrators also believe that, in addition to encouraging faculty buy-in, the decentralized process yields learning outcomes and assessments tailored to the needs of each program, because it allows disciplinary experts to create them.[19]

Making clear that university leadership is committed to assessment

Notwithstanding the decentralized ownership of the assessment process, the chancellor, the provost, and senior vice provosts have been deeply involved in making it successful. At the outset, the provost, with support from the chancellor, made clear that assessment was a priority and was “going to happen.” Assessment was added to the agenda of multiple committees, and was emphasized in individual conversations with deans. Once the system was initiated, the provost’s office reviewed and offered detailed feedback on every assessment matrix, and provided resources and clarification to support the faculty in their work. In addition to helping program faculty improve the quality of their assessment processes, this intensive review effort reinforced to faculty the importance that university leadership attached to assessment.

Drawing on the reservoir of trust held by the long-serving chancellor and provost

While decentralization and prioritization set conditions that were conducive to the acceptance of assessment by faculty, for a number of faculty members I interviewed, fully engaging in the process still required a leap of faith. Many cited the confidence they had in Chancellor Mark Nordenberg and Provost James Maher as critical to their willingness to take that leap. Both Nordenberg and Maher had been in office for over a decade by the time they launched the assessment system. During that period they had earned the trust of faculty through a number of successful initiatives and through steady, visible improvement in Pitt’s resources, reputation, and outcomes. The respect faculty continue to have for these leaders was apparent during my interviews, several years after Maher stepped down and several months after Nordenberg stepped down. The Middle States evaluation team also went out of its way to note the “extraordinarily talented and beloved leadership team,” whose connection to the university’s claims to excellence “could not be overstated.”[20] That said, although the respect faculty have for individual leaders was critical in initiating assessment, perhaps the greatest testament to the leadership of Nordenberg, Maher, and others is that they established a system that has continued to flourish after they were no longer involved.

Linking assessment to a pre-existing, systematic program planning process

A number of my interview subjects described the assessment of student learning as an extension of the program planning process that had been in place since the mid-1990s. University administrators consciously framed the new assessment requirements in this way, emphasizing that assessment was a refinement and systematization of what faculty were already doing. This framing mitigated the sense that assessment was a new requirement imposed out of the blue and positioned the assessment process as a piece of a coherent whole, with the legitimacy of each process supporting the other.

Remaining Challenges

There is, of course, room for improvement in Pitt’s assessment system. Indeed, the decentralized approach that has been so crucial to authentic faculty engagement is also the source of some of the most significant challenges. Because each program’s assessment approach is different, it is not possible to make apples-to-apples comparisons of results across programs. Some programs have struggled more than others to develop and maintain their assessment processes, and the decentralized approach may have allowed those problems to linger longer than they would have under a more interventionist central administration.

In some ways, decentralization has tended toward atomization, with limited collaboration or sharing of information between programs that are similar and would benefit from it (although the annual assessment conference is an important counterpoint to that general characterization). Beyond the limited collaboration, there is a protectiveness about assessment materials that is somewhat surprising in light of the principle of transparency underlying the whole assessment system. Students generally are not aware of their programs’ goals and learning outcomes, or of the assessment process itself. Few programs publish their learning outcomes, let alone their assessment matrices, on their websites.

In addition, developing an assessment process for general education has been particularly challenging. Those responsible have not yet settled on assessment criteria that adequately account for the cross-departmental nature of the general education curriculum and the diverse array of courses students can take to satisfy requirements. For example, the Dietrich School of Arts and Sciences’ initial approach was to assess five of 15 general education subject areas each year, focusing only on the four to five most popular courses in each area. The departments offering those popular courses were responsible for assessing them. Because the departments did not know in advance whether one of their courses would be assessed, they had little incentive to focus on their general education courses until they were identified for assessment, undermining the assessment goal of iterative improvement. This approach also meant that many courses that contributed to the general education program were never assessed. Moreover, it was difficult to use the results to draw conclusions about a general education subject area as a whole because each course’s assessment was different. After several years, the Dietrich School’s Undergraduate Council developed a single set of learning outcomes for each subject area. It is now in the process of switching to an approach in which each department is responsible for assessing one area of general education each year. Still, those involved are expecting further revisions as they work to get the process right.

Conclusion

The remaining problems are solvable. Indeed, they are rendered more tractable by the fact that faculty are seriously engaged in the assessment process. Because they care about assessment, faculty will be more open to making changes that might improve the process. Having built the process themselves, faculty might now welcome a more robust central role. In fact, one interview subject suggested the creation of an office of assessment—that bugbear of the top-down approach—to provide technical support while leaving design decisions and interpretation to the program faculty. And the assessment cycle of goal-setting, target-setting, testing, and responding relentlessly exposes its own flaws, even as it provides a mechanism for addressing them.

Eight years in, Pitt’s assessment system is established and humming along. A decentralized approach, pitched at the level of the program and reinforced by committed and respected leaders, seems to have engendered a sense of ownership among the faculty. There is ample evidence that faculty are engaged, that they are working to improve the way they carry out assessment, and that they are using assessment results to modify their program structure, curriculum, and instruction. Some of my interview subjects caution that assessment is not so deeply embedded as to be second-nature, and that it still requires effort to maintain. That is surely right, but it also seems clear that Pitt has built a solid foundation for evidence-based, continuous improvement of student learning.

Appendix

I conducted interviews with the following Pitt administrators and faculty members on October 28 and 29, 2014:

  • David Bartholomae, Professor of English, Dietrich School of Arts and Sciences
  • Mary Besterfield-Sacre, Associate Professor, Director of Engineering Education Research Center, Swanson School of Engineering
  • Kathy Blee, Associate Dean for Graduate Studies and Research, Dietrich School of Arts and Sciences
  • Shawn Brooks, Associate Dean of Students, Director of Student Life, University of Pittsburgh-Johnstown
  • Lisa Brush, Professor of Sociology, Dietrich School of Arts and Sciences
  • Jennifer Creamer, Associate Director, University Center for International Studies
  • Laura Dice, Assistant Dean and Director of Freshman Programs, Dietrich School of Arts and Sciences
  • Cynthia Golden, Director, Center for Instructional Development & Distance Education
  • Janet Grady, Vice President of Academic Affairs and Chair, Nursing and Health Sciences Division, University of Pittsburgh-Johnstown
  • Steve Hardin, Vice Provost of Academic Affairs, University of Pittsburgh-Bradford
  • Kathy Humphrey, Vice Provost and Dean of Students
  • Kathleen Kelly, Vice Chair, Department of Physical Therapy, School of Health and Rehabilitation Sciences
  • Laurie Kirsch, Vice Provost for Faculty Affairs and Faculty Development
  • Juan Manfredi, Vice Provost for Undergraduate Studies
  • Elizabeth Matway, Senior Lecturer in English, Chair of College Writing Board, Dietrich School of Arts and Sciences
  • Lara Putnam, Chair, Department of History, Dietrich School of Arts and Sciences
  • Steve Robar, Associate Dean of Academic Affairs, University of Pittsburgh-Bradford
  • Alberta Sbragia, Vice Provost for Graduate Studies
  • Larry Shuman, Senior Associate Dean for Academic Affairs, Swanson School of Engineering
  • John Twyning, Associate Dean for Undergraduate Studies, Dietrich School of Arts and Sciences
  1. For example, the Middle States Commission on Higher Education introduced its assessment focus in 2002. See Middle States Commission on Higher Education, Assessing Student Learning and Institutional Effectiveness: Understanding Middle States Expectations (2002), http://www.msche.org/publications/Assessment_Expectations051222081842.pdf .
  2. The Pittsburgh campus in the Oakland neighborhood of Pittsburgh is one of the top research universities in the United States, with Carnegie classification RU/VH. The Bradford, Greensburg, and Johnstown campuses are four-year colleges that offer bachelor’s degrees. The Titusville campus offers primarily associate’s degrees.
  3. Undergraduate retention from freshman to sophomore year increased from 82% for the Fall 1994 cohort to 93% for the Fall 2013 cohort. The undergraduate six-year graduation rate has increased from 61% for the Fall 1994 cohort to 82% for the Fall 2008 cohort. Research expenditures more than tripled from $230 million in 1995 to $759 million in 2013. During this time period, Pitt has risen through various rankings. Pitt’s U.S. News ranking increased from the second tier (51st-115th) of public research universities in 1995 to 18th place in 2015. In the 2013 Times Higher Education World University Rankings, Pitt placed 78th in the world and 17th among U.S. public universities.
  4. Pitt’s assessment system first came to Ithaka S+R’s attention in the context of a study of technology-enhanced instruction commissioned by the Public Flagship Network. See http://www.sr.ithaka.org/research-publications/technology-enhanced-education-public-flagship-universities . A number of Pitt administrators interviewed for the study pointed to student learning assessment as a critical component of all programmatic work, including planning for technology-enhanced instruction.
  5. I am deeply grateful to the members of the Pitt community who took the time to speak with me. In particular, I thank Juan Manfredi and Alberta Sbragia, Vice Provosts for Undergraduate and Graduate Studies, respectively, who graciously coordinated my visit to Pittsburgh and provided valuable information, insight, and context about the university and the assessment system. The Appendix lists all of the interviews I conducted on October 28 and 29, 2014.
  6. Pitt maintains a website where it makes available many of its core documents related to assessment: http://www.academic.pitt.edu/assessment/index.html . Universities are not required to publish their accreditation review materials, but Pitt has chosen to do so: http://www.middlestates.pitt.edu/ . Interview subjects provided additional, non-public documents to me, including examples of assessment results.
  7. Ibid., p. 3.
  8. The Swanson School of Engineering began to develop a process for assessing student learning in the late 1990s after its accreditor, ABET, shifted to outcomes-based accreditation criteria in 1996. See ABET, “Engineering Change” (2006), http://www.abet.org/uploadedFiles/Publications/Special_Reports/EngineeringChange-executive-summary.pdf , for a discussion of the shift to outcomes-based criteria.
  9. http://www.academic.pitt.edu/assessment/pdf/assessment_guidelines.pdf .
  10. Pitt’s assessment matrix is available at http://www.academic.pitt.edu/assessment/pdf/matrix.pdf . It is adapted from a similar template created by the University of Virginia’s Office of Institutional Assessment and Studies.
  11. University of Pittsburgh, “Using a University-wide Culture of Assessment for Continuous Improvement: A Self-Study Submitted to the Middle States Commission on Higher Education,” April 2012 [“Self-Study”], p. 54, fig. 4, http://www.middlestates.pitt.edu/sites/default/files/middle_states_efinal.pdf .
  12. A typical comment came from a member of the College Writing Board, the group responsible for assessing writing across the undergraduate general education curriculum in the Dietrich School of Arts and Sciences: the first afternoon the group spent reading and discussing a sample of student papers was when assessment “stopped being a requirement that was imposed and started being meaningful.”
  13. A former chair in a humanities department made a similar point: with declining enrollments and increasing financial and policy focus on STEM fields, assessment results provided humanities departments with evidence to define and advertise their value.
  14. Self-Study, p. 53.
  15. The two learning outcomes are: “Students will acquire expert knowledge about historical processes in a specific region of the world. They will master a field of scholarship related to their region of interest.” and “Students will be able to analyze events and processes in a transnational historical context: as part of global movement of ideas, people, and commodities or as examples of patterned socio-cultural interactions. They will master a field of scholarship related to a comparative or connective theme of interest.”
  16. Ibid., p. 56 & fig. 6.
  17. “Report to the Faculty, Administration, Trustees, Students of the University of Pittsburgh by an Evaluation Team representing the Middle States Commission on Higher Education” (2012), pp. 4-5, http://www.middlestates.pitt.edu/sites/default/files/middlestatesfinalreport1.pdf .
  18. Ibid., p. 9.
  19. The Middle States evaluation team credited Pitt’s decentralized approach with its success: “The University of Pittsburgh wisely has decentralized the manner in which assessment is done, thereby allowing units to develop methods of assessment suitable to their context while insisting nonetheless that the measures developed be rigorous, meaningful and tied to goals. Thus, rather than having a separate office of assessment, each unit is responsible for assessing outcomes and progress toward its stated goals; the evidence produced in the unit is then evaluated through documented reporting processes and the linking of planning, assessment, and budgeting — in other words, the assessment has consequences that matter. This decentralized approach has generated an impressive sense of ownership of the process, even among those who initially were skeptical about it; at the same time, the evaluative process ensures its use to further institutional goals.” “Report to the Faculty, Administration, Trustees, Students of the University of Pittsburgh by an Evaluation Team representing the Middle States Commission on Higher Education” (2012), p. 4, http://www.middlestates.pitt.edu/sites/default/files/middlestatesfinalreport1.pdf .
  20. Ibid., p. 4.