During the past year, I have had a chance to speak with many assessment librarians, library deans, and others in academic libraries about the types of tools they are using, or considering, for their planning and assessment projects. We typically connect because they have fielded, or are thinking about fielding, one or more of the Ithaka S+R Local Surveys, which primarily focus on the ways in which students and faculty members learn, teach, and perform research. They are often considering a variety of other assessment approaches as well. Perhaps chief among these are library service quality surveys, which generally center on the importance of, and satisfaction with, various aspects of the library. We are often asked what these library service quality tools cover and how they are administered, and to this end, we have undertaken a high-level side-by-side comparison of these tools.

The tools outlined below (LibQUAL+, LibSat, and MISO) are among the most frequently used in academic libraries in the United States. The table below summarizes each tool; I have fact-checked each summary with the tool's provider.

|  | LibQUAL+ | LibSat | MISO |
| --- | --- | --- | --- |
| Parent organization | ARL | Counting Opinions | Bryn Mawr |
| Core thematic areas covered | Minimum service levels, desired service levels, and perceived service performance of three library dimensions (affect of service, information control, and library as place) | Satisfaction with and importance of aspects of in-library and online library services, policies, facilities, equipment, and resources | Importance of, satisfaction with, and frequency of use of library (place-based, online, and in-person) and computing services |
| Additional topics covered | Information literacy outcomes, library use, and general satisfaction | Likelihood to recommend, services used, and information-seeking preferences | Campus communications, tools used, and levels of skill |
| Ability to customize | Participants can add up to five additional local questions. Participants can field “lite” version with reduced number of questions | Participants can localize questions and prompts to convey local terminology and remove or add questions. Respondents may be able to select survey length upon beginning the survey (regular or in-depth versions) | Participants can include or exclude any items in the survey. Participants can include additional locally developed questions |
| Survey administration | Administration handled by the participating institution | Continuous feedback gathered via library website, email distribution, staff intercepts, and/or paper-based response; respondents may also volunteer to receive invitations for annual survey follow-ups | All participants on the same timeline for administration |
| Response rates | Not reported (ARL does not collect this information; libraries are not required to provide these data to ARL) | If a sample is defined, response rates can be generated based on identified time periods, as responses are continuously gathered over an extended period of time | Most institutions see rates of 50%+ |
| Institutional participants | 3,000+ surveys fielded to date by 1,390 institutions. 109 fielded in 2018. Includes participation within and outside of North America. Nearly all participants are higher education institutions | All participants within North America. Many participants are not higher education institutions | 149 institutions to date. 40 participated in 2018-19. All participants within North America. All higher education institutions |
| Pricing | $3,200 to participate, with discounts for repeat participation within one or two years | Annual subscription relative to size of institution; quotes available on request | $2,200 for three core populations (faculty, undergraduates, and staff) |
| Platform | Independently hosted platform | Independently hosted platform | Qualtrics for survey administration. Independently hosted platform for reporting |
| Deliverables for standard participation | PDF report with aggregate and stratified results by user group. Raw data in csv and SPSS formats. Real-time access to comments. Radar charts within platform | Reports available within platform; can view, segment, and export in real time as responses are collected. Ability to route open-ended comments to persons of responsibility. Raw data in XML, tab-delimited, or csv formats | Aggregate results provided in Excel and PDF formats for each campus population. Raw data in csv and SPSS formats |
| Ability to benchmark and compare results | Can compare results with other institutions that participated in the same year, analyze results by user group and discipline, and download data tables and radar charts; annual subscription available for expanded access to data for all participating institutions from 2003 to present | Can compare results within platform with aggregate pooled results from all other institutions and across respondents from multiple libraries | Can compare results within platform with any individual or combination of other participating institutions and across populations (e.g. faculty versus students) |

I recognize that many libraries are developing their own in-house surveys, and while these cannot be compared in the same fashion as the tools above, institutions can compare the scope of their home-grown instruments with these options on an individual basis. I hope that this summary provides useful information for transparent, evidence-based decision-making, and I look forward to hearing how it is used when libraries consider their options for measuring service quality.